METROLOGY AND MEASUREMENT
About the Authors
Anand K Bewoor
Assistant Professor
Department of Mechanical Engineering
VIIT, Pune
Vinay A Kulkarni
Lecturer
Department of Production Engineering
D Y Patil College of Engineering, Pune
Information contained in this work has been obtained by Tata McGraw-Hill, from sources believed to be reliable. However,
neither Tata McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein,
and neither Tata McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use
of this information. This work is published with the understanding that Tata McGraw-Hill and its authors are supplying
information but are not attempting to render engineering or other professional services. If such services are required, the
assistance of an appropriate professional should be sought.
Typeset at Mukesh Technologies Pvt. Ltd., #10, 100 Feet Road, Ellapillaichavadi, Pondicherry 605 005 and printed at
Avon Printers, Plot No. 16, Main Loni Road, Jawahar Nagar Industrial Area, Shahdara, Delhi 110 094
Cover: SDR
Preface xii
List of Important Symbols xiv
List of Important Abbreviations xvi
Visual Walkthrough xvii
1. Introduction to Metrology 1
1.1 Definitions of Metrology 2
1.2 Types of Metrology 2
1.3 Need of Inspection 3
1.4 Metrological Terminologies 4
1.5 Principal Aspects of Measurement 7
1.6 Methods of Measurements 8
1.7 Measuring Instruments and their Selection 9
1.8 Errors in Measurement 10
1.9 Units of Measurement 14
1.10 Metric Units in Industry 19
Review Questions 21
2. Measurement Standards 22
2.1 Introduction 23
2.2 The New Era of Material Standards 24
2.3 Types of Standards 25
2.4 Subdivision of Standards 33
2.5 Calibration 34
Review Questions 45
3. Linear Metrology 46
3.1 Introduction 47
3.2 Steel Rule (Scale) 48
3.3 Calipers 49
3.4 Vernier Caliper 51
3.5 Vernier Height Gauge 56
3.6 Vernier Depth Gauge 58
3.7 Micrometers 59
3.8 Digital Measuring Instrument for External and Internal Dimensions 71
3.9 Digital Universal Caliper 72
Review Questions 73
viii Contents
9. Comparator 236
9.1 Introduction 236
9.2 Desirable Features of Comparators 238
9.3 Classification of Comparators 238
Review Questions 264
10. Metrology of Surface Finish 266
10.1 Introduction 267
10.2 Terms Used in Surface-Roughness Measurement 267
10.3 Factors Affecting Surface Finish in Machining 272
10.4 Surface-Roughness Measurement Methods 276
10.5 Precautions for Surface-Roughness Measurement 281
10.6 Surface Texture Parameters 282
10.7 Pocket Surf 295
10.8 Specifying the Surface Finish 296
Review Questions 298
11. Metrology of Screw Threads 300
11.1 Understanding Quality Specifications of Screw Threads 300
11.2 Screw Thread Terminology 302
11.3 Types of Threads 305
11.4 Measurement of Screw Threads 307
11.5 Measurement of Thread Form Angle 316
11.6 Measurement of Internal Threads 318
Review Questions 322
12. Metrology of Gears 324
12.1 Introduction 324
12.2 Types of Gears 326
12.3 Spur Gear Terminology 328
12.4 Forms of Gears 330
12.5 Quality of (Spur) Gear 331
12.6 Errors in Spur Gear 332
12.7 Measurement and Checking of Spur Gear 334
12.8 Inspection of Shrinkage and Plastic Gears 349
12.9 Measurement Over Rollers 349
12.10 Recent Development in Gear Metrology 349
Review Questions 352
13. Miscellaneous Measurements 354
13.1 Measurement of Taper on One Side 354
13.2 Measurement of Internal Taper 355
13.3 Measurement of Included Angle of Internal Dovetail 356
13.4 Measurement of Radius 357
Review Questions 360
14. Study of Advanced Measuring Machines 361
14.1 Concept of Instrument Overlapping 362
14.2 Metrology Integration 362
Nowadays, trade is leading to a greater awareness worldwide of the role that dimensional and mechani-
cal measurement plays in underpinning activities in all areas of science and technology. It provides a
fundamental basis not only for the physical sciences and engineering, but also for chemistry, the bio-
logical sciences and related areas such as the environment, medicine, agriculture and food. Laboratory
programmes have been modernized, sophisticated electronic instrumentation has been incorporated
into the programmes, and newer techniques have been developed. Keeping these views in mind, this
book has been written to deal not only with the techniques of dimensional measurement but also with
the physical aspects of measurement techniques.
In today’s world of high-technology products, dimensional and other accuracy controls are becoming
very stringent, since dimensional control is a very important aspect of achieving quality and reliability
in the service of any product. Unless the manufactured parts are accurately measured, assurance of
quality cannot be given. In this context, the first part of the book
deals with the basic principles of dimensional measuring instruments and precision measurement tech-
niques. This part of the book starts with discussing the basic concepts in metrology and measurement
standards in the first two introductory chapters. Then, linear, angular, machine tool and geometrical
shape metrology along with interferometry techniques and various types of comparators are explained
thoroughly in the subsequent chapters. Concepts of limits, fits and tolerances and measurement of
surface finish are illustrated in detail. Chapters 11 and 12 discuss the metrology of standard machine
parts like screw threads and gears respectively. Miscellaneous measurement and recent advancements in
the field of metrology are discussed in the last two chapters of the first part of the book.
The second part of this book begins with the explanation of measurement systems and transducers.
The methods of measuring mechanical quantities, viz., force, torque, vibration, pressure, temperature,
strain and flow measurement are discussed subsequently, covering both the basic and derived quantities.
Effort has been made to present the subject in SI units. Some of the recent developments such as use
of laser techniques in measurement have also been included.
The Online Learning Center of the book can be accessed at http://www.mhhe.com/bewoor.mm
and contains the following material:
For Instructors
• Solution Manual
• PowerPoint lecture slides
• Full-resolution figures and photos from the text
• Model syllabi
For Students
• Interactive quiz
• Objective-type questions
ANAND K BEWOOR
VINAY A KULKARNI
List of Important Symbols
K : Stiffness
Lo : Actual Profile Length/Profile Length Ratio
m : Mass of the body
m : Module = (Pitch circle diameter)/(Number of teeth) = 2R/z
P : Pitch of the thread
p : Constant pitch value
Pc : Peak count
r : Radius at the top and bottom of the threads
R : Resistance at the measured temperature, t
Ro : Resistance at the reference temperature, to
R1, R2, R3, R4 : Resistance
Ra : Average roughness value
Rku : Measure of the sharpness of the surface profile
Rmax : Maximum height of unevenness/maximum peak-to-valley height within a sample length
Rp : Maximum peak height
Rq : Root mean square roughness
Rsk : Measurement of skewness
Rv : Maximum valley height
Rz(ISO) : Sum of the height of the highest peak and the lowest valley depth within a sampling length
Rz(JIS) : The 10-point height parameter (JIS)
S : Number of tooth space contained within space ‘W’
Sk : Skewness
Sm : Mean spacing
T : Dimension under the wires
T0 : Reference temperature generally taken as 298 K (25°C)
Vo : Output voltage
W : Chordal tooth thickness
x : Displacement
z : Number of teeth on gear
σ : Standard deviation
μ : Micron
δθ : Small angle (increment/change)
α, β, θ : Angles
List of Important Abbreviations
In general, there are four levels of standards used as references all over the world, viz., primary, secondary, tertiary and working standards. The primary standard is the one that is kept in Paris, and the secondary is the one kept with NPL India; the tertiary standard is the one which we use in our industries as a reference for calibration purposes. Working standards are used on the shop floor. Hence it could be said that there is an unbroken chain for tracing the standards. Every country has a custodian who looks after secondary standards. The National Physical Laboratory (NPL)…

…movement of subassemblies only. The laboratory should be adequately free from vibrations generated by the central air-conditioning plant, vehicular traffic and other sources. In other words, there should be vibration-free operational conditions; the illumination should be 450 lux to 700 lux on the working table, with a glass index of 19 for lab work; a generally dust-free atmosphere; temperature controlled at 20 ± 1°C; and humidity controlled at 50 ± 10%. To avoid any such adverse effects on instruments, a calibration laboratory is required to be set underground.

In our opinion, quality should be built up at the design stage, which is an important key factor in designing a…
Linear Metrology 47
…rather than the sliding scale of the vernier caliper. This allows the scale to be placed more precisely and, consequently, the micrometer can be read to a higher precision.

Length metrology is the measuring hub of metrological instruments, and sincere efforts must be made to understand the operating principles of the instruments used for various applications.
3.1 INTRODUCTION
Length is the most commonly used category of measurements in the world. In the ancient days, length
measurement was based on measurement of different human body parts such as nails, digit, palm,
handspan, pace as reference units and multiples of those to make bigger length units.
Linear metrology is defined as the science of linear measurement, for the determination of the distance between two points in a straight line. Linear measurement is applicable to all external and internal measurements such as distance, length and height-difference, diameter, thickness and wall thickness, straightness, squareness, taper, axial and radial run-out, coaxiality and concentricity, and mating measurements, covering the whole range of metrology work on a shop floor. The principle of linear measurement is to compare the dimension to be measured, suitably aligned, with the standard dimensions marked on the measuring instrument. Linear measuring instruments are designed either for line measurements or end measurements, as discussed in the previous chapter.
Linear metrology follows two approaches:

1. Two-Point Measuring-Contact-Member Approach  Out of the two measuring contact members, one is fixed while the other is movable and is generally mounted on the measuring spindle of an instrument, e.g., a vernier caliper or micrometer for measuring distance.

2. Three-Point Measuring-Contact-Member Approach  Out of the three measuring contact members, two are fixed and the remaining one is movable; e.g., to measure the diameter of a bar held in a V-block, the V-block provides two contact points and the third, movable contact point is that of the dial gauge.

Each chapter begins with an introduction that gives a brief summary of the background and contents of the chapter.
The instruments used in length metrology are generally classified into two types:
In our day-to-day life, we see that almost all products are made up of different components. Modern products involve a great deal of complexity in production, and such complex products have interchangeable parts designed to fit with mating components. The various parts are assembled to make a final end product, which calls for accurate inspection. If there are thousands of such parts to be measured, the instruments will need to be used thousands of times, and in such a case they must retain their accuracy
Metrology of Screw Threads 307
2. Functional Parameters
a. Effective Diameter : Screw thread micrometer, two- or three-wire methods, floating carriage micrometer
b. Pitch : Screw pitch gauge, pitch error testing machine

Sections and Sub-sections

Measurement of screw threads can be done by inspection and checking of various components of threads. The nut and other elements during mass production are checked by plug gauges or ring gauges.
Limits, Fits and Tolerances 189

Illustrative Examples

Illustrative Examples are provided in sufficient number in each chapter and at appropriate locations, to aid in understanding of the text material.

Example 1  Design a plug gauge for checking the hole of 70H8. Use i = 0.45 ∛D + 0.001D, IT8 = 25i, diameter step = 50 to 80 mm.
i = 0.45 ∛63.245 + 0.001(63.245) = 1.8561 microns
Tolerance for IT8 = 25i = 25 × 1.8561 = 46.4036 microns = 0.04640 mm
Hole dimensions:
GO limit of hole = 70.00 mm
NO-GO limit of hole = 70.00 + 0.04640 = 70.04640 mm
GO plug gauge design: The hole tolerance is less than 87.5 microns, so it is necessary to provide a wear allowance on the GO plug gauge.
Workmanship allowance = 10% of hole tolerance = (10/100) × 0.04640 = 0.004640 mm
Lower limit of GO = 70.0000 mm; upper limit of GO = 70.0000 + 0.004640 = 70.00464 mm
Sizes of GO plug gauge = 70 (+0.004640, +0.00000) mm
Sizes of NO-GO plug gauge = 70 (+0.04640, +0.04176) mm
Refer Fig. 6.49.

Example 2  Design and make a drawing of general-purpose ‘GO’ and ‘NO-GO’ plug gauges for inspecting a hole of 22 D8. Data with usual notations:
i. i (microns) = 0.45 ∛D + 0.001D
ii. Fundamental deviation for hole D = 16 D^0.44
iii. Value for IT8 = 25i
(c) Now consider the gaugemaker’s tolerance (refer Article 6.9.4 (c)) = 10% of work tolerance = 0.03268(0.1) mm = 0.00327 mm
(d) The wear allowance (refer Article 6.9.4 (d)) is considered as 10% of the gaugemaker’s tolerance = 0.00327(0.1) mm = 0.000327 mm
(e) For designing a general-purpose gauge:
∴ Size of GO plug gauge after considering wear allowance = (22.06386 + 0.000327) mm = 22.0641 mm
∴ GO size is 22.0641 (+0.00327, −0.00) mm and NO-GO size is 22.0965 (+0.00327, −0.00) mm.
Fig. 6.51 Graphical representation of general-purpose gauge (work tolerance = 0.0326 mm; GO size 22.0641 mm with upper gauge limit 22.06737 mm; NO-GO size 22.0965 mm with upper gauge limit 22.0997 mm; wear allowance 0.000327 mm)

Example 3  Design a ‘Workshop’ type GO and NO-GO gauge suitable for 25 H7. Data with usual notations:
1. i (in microns) = 0.45 ∛D + 0.001D
2. The value for IT7 = 16i.
Solution:
(a) Firstly, find out the dimension of the hole specified, i.e., 25 H7.
Limits, Fits and Tolerances 191
Example 4  Design ‘workshop’, ‘inspection’, and ‘general type’ GO and NO-GO gauges for checking the assembly φ25H7/f8 and comment on the type of fit. Data with usual notations:

Explanation  For a diameter of 25 mm, the step size (refer Table 6.3) = (18 − 30) mm
∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm
Tolerance value for IT7 = 16i …(Refer Table 6.4)
= 16(1.3074) = 20.92 microns ≅ 21 microns = 0.021 mm
(b) Limits for 25 H7 = 25.00 (+0.021, −0.00) mm

…lytical treatment, problems (numerical) related to those concepts are explained stepwise at the end of the chapters, which enables the student to have…
(e) Height measurement and transfer  (f) Specially designed anvils for measurements

Fig. 7.28 Set of autocollimator along with square prism and mirror to measure small angular tilts in the horizontal plane
(Courtesy, Metrology Lab Sinhgad COE, Pune)
Measurement Uncertainties
• Visual setting autocollimator: ±0.3 second of arc over any interval up to 10 minutes of arc
• Photoelectric setting autocollimator: typically ±0.10 second of arc over any interval up to 10 minutes of arc
• Automatic position-sensing electronic autocollimator: typically ±0.10 second of arc over any interval up to 10 minutes of arc
…form of a full Wheatstone bridge.

Fig. 17.7 Load/force cells

112 Metrology and Measurement

The methods employed are as follows:
• 000/000 for deviation of perpendicularity, which are the ratios
• 000 for any length of 000 for deviation of straightness and parallelism—this expression is used for local permissible deviation, the measuring length being obligatory
• 000 for deviation of straightness and parallelism—this expression is used to recommend a measuring length, but in case the proportionality rule comes into operation, the measuring length differs from those indicated.

5.3 MACHINE-TOOL TESTING

5.3.1 Alignment Testing of Lathe

Table 5.1 Specifications of alignment testing of lathe

1. Test item: Levelling of machines (straightness of slideway—carriage): (a) longitudinal direction—straightness of slideways in vertical plane; (b) in transverse direction
   Measuring instruments: Precision level or any other optical instruments
   Permissible error: 0.01 to 0.02 mm
2. Test item: Straightness of carriage movement in horizontal plane, or possibly in a plane defined by the axis of centres and tool point (whenever test (b) is carried out, test (a) is not necessary)
   Measuring instruments: Dial gauge and test mandrel, or straight edges with parallel faces, between centres
   Permissible error: 0.015 to 0.02 mm

530 Metrology and Measurement

21.6.1 Measurement of Bending Strain

Consider measuring the bending strain in a cantilever. If the two gauges (one in tension, T, and one in compression, C) are inserted into a half-bridge circuit as shown, and remembering that in tension the resistance will increase by ΔR and in compression the resistance will decrease by the same amount, we can double the sensitivity to bending strain and eliminate sensitivity to temperature.

The output is given by Vo = (V/2) × (ΔR/R)

(i.e., the output is double that from a quarter-bridge circuit). Further, you can demonstrate that if the resistance of both gauges increases (due to temperature or axial strain) then the output voltage remains unaffected (try it by putting the resistance of gauge C as R + ΔR).

Fig. 21.12 Measuring the bending strain in a cantilever
Fig. 21.13 Circuit diagram

21.6.2 Measurement of Axial Strains

In practice, four gauges (R1 to R4) are used, two of which measure the direct strain and are placed opposite each other in the bridge (thereby doubling sensitivity). Two more gauges are mounted at right angles (thereby not sensitive to the axial strain) or on an unstrained sample of the same material to provide temperature compensation. The arrangements are shown in Fig. 21.14. Care must be taken in the angular alignment of the gauges on the sample.

Fig. 21.14 Measurement of axial strains

Fig. 11.14 Two-wire method

where T is the dimension under the wires and
T = Dm − 2d
d = diameter of the wire

For measuring the dimension T, the wires are placed over a standard cylinder of diameter greater than the diameter under the wires, and the corresponding reading is noted as r1 and the reading over the gauges as r2.
Then, T = P − (r1 − r2)
where p = the constant value which should be added to the diameter under the wires for calculating the effective diameter, and which also depends upon the diameter of the thread and the pitch of the thread (pitch value).
Now refer Fig. 11.14. BC lies on the effective diameter line.
BC = ½ pitch = ½ p
OP = (d cosec θ/2)/2
PA = d(cosec θ/2 − 1)/2
PQ = QC cot θ/2 = (p/4) cot θ/2
AQ = PQ − AP = (p/4) cot θ/2 − d(cosec θ/2 − 1)/2
AQ has a value half of P.
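The two-wire relations of Fig. 11.14 (T = Dm − 2d, effective diameter E = T + P, where P = 2AQ = (p/2) cot θ/2 − d(cosec θ/2 − 1)) can be sketched numerically. The M20 × 2.5 wire size and the measured reading below are illustrative assumptions, not values from the text:

```python
import math

def effective_diameter(Dm, d, p, theta_deg=60.0):
    """Two-wire method: effective diameter from the measurement over the wires.
    Dm: diameter over the wires (mm), d: wire diameter (mm), p: pitch (mm),
    theta_deg: included thread angle. T = Dm - 2d; E = T + P."""
    half = math.radians(theta_deg) / 2
    T = Dm - 2 * d
    P = (p / 2) / math.tan(half) - d * (1 / math.sin(half) - 1)
    return T + P

# Hypothetical M20 x 2.5 check using the "best-size" wire d = (p/2) sec(theta/2)
p = 2.5
d = (p / 2) / math.cos(math.radians(30))   # ~1.4434 mm
E = effective_diameter(Dm=20.541, d=d, p=p)
print(f"effective diameter ~ {E:.3f} mm")
```

With these assumed numbers the sketch returns an effective diameter of about 18.376 mm, the nominal pitch diameter of an M20 × 2.5 thread.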
Illustrations

Illustrations are essential tools in books on engineering subjects. Ample illustrations are provided in each chapter to illustrate the concepts, functional relationships and to provide definition sketches for mathematical models.
Case Studies

Case Studies are an important part of books on engineering subjects. Many case studies are provided in the chapters to explain the concepts and their practical significance.
1 Introduction to
Metrology
…the results they achieve. The science of measurement, metrology, is probably the oldest science in the world, and knowledge of how it is applied is a fundamental necessity in practically all science-based professions! Measurement requires common knowledge. Metrology is hardly ostentatious, and the calm surface it shows covers vast areas of knowledge that only a few are familiar with, but which most make use of, confident that they are sharing a common perception of what is meant by expressions such as metre, kilogram, litre, watt, etc. Mankind has thousands of years of experience to confirm that life really does become easier when people cooperate on metrology.

Metrology is a word derived from two Greek words: Metro—Measurement, Logy—Science. Metrology includes all aspects with reference to measurements, whatever their level of accuracy.
i. Metrology is the field of knowledge concerned with measurement and includes both theoretical
and practical problems with reference to measurement, whatever their level of accuracy and in
whatever fields of science and technology they occur. (Source: BS 5233:1975).
ii. Metrology is the science of measurement.
iii. Metrology is the science of weights and measures.
iv. Metrology is the process of making extremely precise measurements of the relative positions
and orientations of different optical and mechanical components.
v. Metrology is the documented control that all equipment is suitably calibrated and maintained in
order to perform as intended and to give reliable results.
vi. Metrology is the science concerned with the establishment, reproduction, conversion and trans-
fer of units of measurements and their standards.
The principal fields of metrology and its related applications are as follows:
a. Establishing units of measurement and their standards such as their establishment, reproduction,
conservation, dissemination and quality assurance
b. Measurements, methods, execution, and estimation of their accuracy
c. Measuring instruments—Properties examined from the point of view of their intended purpose
d. Observers’ capabilities with reference to making measurements, e.g., reading of instrument in-
dications
e. Design, manufacturing and testing of gauges of all kinds
Metrology is separated into three categories with different levels of complexity and accuracy:
Introduction to Metrology 3
1. Scientific Metrology deals with the organization and development of measurement stan-
dards and with their maintenance (highest level).
3. Legal Metrology is concerned with the accuracy of measurements where these have an influence on the transparency of economic transactions, and on health and safety, e.g., the volume of petrol purchased at a pump or the weight of prepackaged flour. It seeks to protect the public against inaccuracy in trade. It includes a number of international organizations aiming at maintaining the uniformity
of measurement throughout the world. Legal metrology is directed by a national organization which is
known as National Service of Legal Metrology.
The functions of legal metrology are to ensure the conservation of national standards and to guarantee their accuracy by comparison with international standards; to regulate, advise, supervise and control the manufacture and calibration of measuring instruments; to inspect the use of these instruments with measurement procedures in the public interest; to organize training sessions on legal metrology; and to represent a country in international activities related to metrology.
1.3 NEED OF INSPECTION

Inspection is necessary to check all materials, products, and component parts at various stages during
manufacturing, assembly, packaging and installation in the customer’s environment. It is the quality-
assurance method that compares materials, products or processes with established standards. When the
production rate is on a smaller scale, parts are made and assembled by a single manufacturing cell. If
the parts do not fit correctly, the necessary adjustments can be made within a short period of time. The
changes can be made to either of the mating parts in such a way that each assembly functions correctly.
For large-scale manufacturing, it is essential to make parts exactly alike, or to the same accuracy.
These accuracy levels need to be verified frequently. The modern industrial mass-production system is
based on interchangeability. The products that are manufactured on a large scale are categorised into
various component parts, thus making the production of each component an independent process.
Many of these parts are produced in-house while some parts are purchased from outside sources and
then assembled at one place. It becomes very necessary that any part chosen at random fits correctly with
other randomly selected mating parts. For it to happen, the dimensions of component parts are made
with close dimensional tolerances and inspected at various stages during manufacturing. When large
numbers of identical parts are manufactured on the basis of interchangeability, actual dimension mea-
surement is not required. Instead, to save time, gauges are used which can assure whether the manufac-
tured part is within the prescribed limits or not. If the interchangeability is difficult to maintain, assorted
groups of the product are formed. In such a case, the products X and Y are grouped according to their
dimensional variations. For example, if shafts are made within the range of 59.95 mm to 60.05 mm,
and if the diameters of bearing holes are made within the range 60.00 mm to 60.1 mm then the shafts
are grouped for sizes of 59.95 mm to 60.00 mm and 60.01 mm to 60.05 mm. Similarly, two bearing-hole
groups are formed as sizes of 60.00 mm to 60.05 mm and 60.06 mm to 60.10 mm. The lower-sized shaft
group gets assembled with the lower-sized hole group, and the higher-sized shaft group gets assembled
with higher-sized hole group. This is known as selective assembly which demands for inspection at every
stage of manufacturing and makes the assemblies feasible for any odd combinations controlling the
assembly variations in terms of loose (clearance) fit or tight (interference) fit.
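The selective-assembly grouping described above can be sketched as a short routine. The group boundaries come from the shaft/hole example in the text; the measured sizes are hypothetical:

```python
def selective_groups(sizes, boundaries):
    """Sort measured sizes (mm) into selective-assembly groups.
    boundaries: ascending group limits; a size on a shared boundary
    is placed in the lower group. Returns one list per interval."""
    groups = [[] for _ in range(len(boundaries) - 1)]
    for s in sizes:
        for k in range(len(boundaries) - 1):
            if boundaries[k] <= s <= boundaries[k + 1]:
                groups[k].append(s)
                break
    return groups

# Shafts 59.95-60.05 mm split at 60.00; holes 60.00-60.10 mm split at 60.05
shafts = selective_groups([59.97, 60.03, 59.99, 60.04], [59.95, 60.00, 60.05])
holes = selective_groups([60.02, 60.08, 60.01, 60.07], [60.00, 60.05, 60.10])
# Low shaft group mates with the low hole group, high with high
print(shafts, holes)
```

The lower-sized shaft group is then assembled with the lower-sized hole group, and the higher with the higher, as described in the text.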
The inspection activity is required to
i. ensure the material, parts, and components conform to the established standards,
ii. meet the interchangeability of manufacture,
iii. provide the means of finding the problem area for not meeting the established standards,
iv. produce the parts having acceptable quality levels with reduced scrap and wastages,
v. purchase good-quality raw materials, tools, and equipment that govern the quality of the finished
products,
vi. take necessary efforts to measure and reduce the rejection percentage for forthcoming production
batches by matching the technical specification of the product with the process capability, and
vii. judge the possibility of rework of defective parts and re-engineer the process.
Many companies today are concerned with quality management or are in the process of introducing
some form of quality system in their work. This brings them into contact with quality standards such
as EN 45001–General Criteria for the Operation of Testing Laboratories, or with the standards in
the ISO 9000 series or the DIN system. A feature common to all quality standards is that they specify
requirements in respect of measurements and their traceability.
The quality context employs a number of measurement technology terms that can cause difficulties if
their meanings are not correctly understood.
Accuracy is the closeness of agreement between a test result and the accepted reference value [ISO 5725].
Bias is the difference between the expectation of the test results and an accepted reference value
[ISO 5725].
Calibration is a set of operations that establish, under specified conditions, the relationship between
values of quantities indicated by a measuring instrument or values represented by a material measure
and the corresponding values realized by standards. The result of a calibration may be recorded in a
document, e.g., a calibration certificate. The result can be expressed as corrections with respect to the
indications of the instrument.
Confirmation is a set of operations required to ensure that an item of measuring equipment is in a state
of compliance with requirements for its intended use. Metrological confirmation normally includes, for
example, calibration, any necessary adjustment or repair and subsequent recalibration, as well as any
required sealing and labelling.
Correction is the value which, added algebraically to the uncorrected result of a measurement, com-
pensates for an assumed systematic error. The correction is equal to the assumed systematic error, but
of the opposite sign. Since the systematic error cannot be known exactly, the correction is subject to
uncertainty.
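As a minimal illustration of the sign convention above (the correction equals the assumed systematic error, with the opposite sign); the reading and error values are hypothetical:

```python
def apply_correction(uncorrected, assumed_systematic_error):
    """Corrected result = uncorrected result + correction, where the
    correction is the assumed systematic error with the opposite sign."""
    correction = -assumed_systematic_error
    return uncorrected + correction

# Hypothetical: instrument reads 0.02 mm high (systematic error = +0.02 mm)
print(round(apply_correction(25.47, +0.02), 2))
```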
Drift is a slow change of a metrological characteristic of a measuring instrument.
Error of a measuring instrument is the indication of a measuring instrument minus a ‘true’ value of
the corresponding input quantity, i.e., the error has a sign.
Expectation of the measurable quantity is the mean of a specified population of measurements.
Fiducial error (of a measuring instrument) is the error of a measuring instrument divided by a (fiducial)
value specified for the instrument. Fiducial value can be the span or upper limit of a nominal range of
a measuring instrument.
Group standard is a set of standards of chosen values that, individually or in combination, provide a
series of values of quantities of the same kind.
Inspection involves measurement, investigation or testing of one or more characteristics of a product,
and includes a comparison of the results with specified requirements in order to determine whether the
requirements have been fulfilled.
Magnification In order to measure small differences in dimensions, the movement of the measuring tip in contact with the work must be magnified; therefore, the output signal from a measuring instrument is magnified many times to make it more readable. In a measuring instrument, the magnification may be based on a mechanical, electrical, electronic, optical or pneumatic principle, or a combination of these.
Measurand is a particular quantity subject to measurement.
National (measurement) standard is a standard recognized by a national decision to serve, in a coun-
try, as the basis for assigning values to other standards of the quantity concerned.
Nominal value is a rounded or approximate value of a characteristic of a measuring instrument that
provides a guide to its use.
Precision is the closeness of agreement between independent test results obtained under stipulated
conditions [ISO 5725].
Trueness is the closeness of agreement between the average value obtained from a large series of test
results and an accepted reference value [ISO 5725]. The measure of trueness is usually expressed in
terms of bias.
Uncertainty of measurement is a parameter, associated with the result of a measurement that charac-
terises the dispersion of the values that could reasonably be attributed to the measurand. It can also be
expressed as an estimate characterizing the range of values within which the true value of a measurand
lies. When specifying the uncertainty of a measurement, it is necessary to indicate the principle on
which the calculation has been made.
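One common way to turn repeated readings into such an uncertainty statement is sketched below; the readings and the coverage factor k = 2 are illustrative assumptions, not prescriptions from the text:

```python
import statistics

# Repeated readings of the same dimension (mm) - illustrative values
readings = [10.003, 9.998, 10.001, 10.000, 9.999, 10.002]

mean = statistics.mean(readings)
s = statistics.stdev(readings)        # sample standard deviation
u = s / len(readings) ** 0.5          # standard uncertainty of the mean
U = 2 * u                             # expanded uncertainty, coverage factor k = 2
print(f"result = {mean:.4f} mm +/- {U:.4f} mm (k = 2)")
```

Stating the coverage factor alongside the interval is one way of indicating the principle on which the calculation has been made.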
Verification is an investigation that shows that specified requirements are fulfilled.
Accuracy Accuracy is the degree to which the measured value of the quality characteristic agrees with the true value. The accuracy of a method of measurement refers to its absence of bias, i.e., the conformity of results to the true value of the quality characteristic being measured. As the exact measurement of a true value is difficult, a set of observations is made and their mean value is taken as the true value of the quantity to be measured. While measuring the various attributes of the workpiece, such as dimensions, hardness, tensile strength and other quality characteristics, errors may creep in. Therefore, the measured value is the sum of the quantity measured and the error of the instrument. As the two are independent of each other, the standard deviation of the measured value is the square root of the sum of the square of the standard deviation of the true value (σtrue) and the square of the standard deviation of the error of measurement (σerror):

σmeasured = √(σtrue² + σerror²)
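This addition of independent variances can be checked numerically. The figures below are purely illustrative (they are not taken from the text):

```python
import math

def measured_std(sigma_true, sigma_error):
    """Since the true value and the measurement error are independent,
    their variances add: sigma_measured = sqrt(sigma_true^2 + sigma_error^2)."""
    return math.sqrt(sigma_true ** 2 + sigma_error ** 2)

# Illustrative figures: part-to-part spread of 0.03 mm combined with an
# instrument error spread of 0.04 mm gives a measured spread of 0.05 mm.
sigma = measured_std(0.03, 0.04)
```

Note that the combined spread is always larger than either contribution alone, but smaller than their simple sum.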
For example, if a micrometer measures a part dimension as 10 mm and its stated accuracy is ±0.01 mm, then the true dimension may lie between 9.99 mm and 10.01 mm. Thus, an accuracy of ±0.01 mm means that the results obtained by the micrometer may be in error by up to ±0.01 mm, or that there is an uncertainty of ±0.01 mm in the measured value (a 0.1% error of the instrument at this reading).
Precision Precision is the degree of repeatability in the measuring process. Precision of a method of measurement refers to its variability when used to make repeated measurements under carefully controlled conditions. A numerical measure of precision is the standard deviation of the frequency distribution that would be obtained from such repeated measurements; this is referred to as σerror.
Precision is mainly achieved by selecting the correct instrument technology for the application. The general guideline for determining the right level of precision is that the measuring device must be ten times more precise than the specified tolerance, e.g., if the tolerance to be measured is ±0.01 mm, the measuring device must have a precision of ±0.001 mm. The master gauge applied should, in turn, be ten times more precise than the inspection device.
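The ten-to-one rule above is simple enough to express as a helper; the following sketch applies it twice, once for the device and once for the master gauge:

```python
def required_precision(tolerance, ratio=10):
    """Ten-to-one rule: a measuring device should be `ratio` times more
    precise than the tolerance it is used to check."""
    return tolerance / ratio

# A tolerance of +/-0.01 mm calls for a device precise to +/-0.001 mm;
# the master gauge used to check that device should reach +/-0.0001 mm.
device = required_precision(0.01)
master = required_precision(device)
```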
1.6 Methods of Measurements
Measurement is a set of operations done with the aim of determining the value of a quantity which
can be measured by various methods of measurements depending upon the accuracy required and the
amount of permissible error.
The methods of measurements are classified as follows:
1. Direct Method This is the simplest method of measurement in which the value of the quan-
tity to be measured is obtained directly without any calculations, e.g., measurements by scales, vernier
calipers, micrometers for linear measurement, bevel protractor for angular measurement, etc. It involves
contact or non-contact type of inspections. In case of contact type of inspections, mechanical probes
make manual or automatic contact with the object being inspected. On the other hand, the non-contact
type of method utilizes a sensor located at a certain distance from the object under inspection. Human insensitivity can affect the accuracy of measurement.
2. Indirect Method The value of the quantity to be measured is obtained by measuring other quantities that are functionally related to the required value, e.g., angle measurement by sine bar,
three-wire method for measuring the screw pitch diameter, density calculation by measuring mass and
dimensions for calculating volume.
3. Absolute Method This is also called fundamental method and is based on the measurement of
the base quantities used to define a particular quantity, e.g., measuring a quantity (length) directly in
accordance with the definition of that quantity (definition of length in units).
6. Coincidence Method It is also called the differential method of measurement. In this, there
is a very small difference between the value of the quantity to be measured and the reference. The refer-
ence is determined by the observation of the coincidence of certain lines or signals, e.g., measurement
by vernier calipers (LC × vernier scale reading) and micrometer (LC × circular scale reading).
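The vernier reading rule just cited (main-scale reading plus least count times the coinciding division) can be sketched as follows; the least-count value of 0.02 mm is an assumption typical of a metric vernier caliper, not a figure from the text:

```python
def vernier_reading(main_scale_mm, coinciding_division, least_count_mm=0.02):
    """Total reading = main-scale reading + (coinciding vernier division x least count)."""
    return main_scale_mm + coinciding_division * least_count_mm

# Illustrative: the main scale reads 25.0 mm and the 7th vernier line coincides.
r = vernier_reading(25.0, 7)   # ~25.14 mm
```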
7. Transposition Method The quantity to be measured is first balanced by a known value P of the same quantity; the quantity to be measured is then put in place of that known value and balanced again by another known value Q. If the position of the element indicating equilibrium is the same in both cases, the value of the quantity to be measured is √(PQ), e.g., determination of a mass by means of a balance and known weights, using the Gauss double weighing method.
8. Deflection Method The value of the quantity to be measured is directly indicated by the
deflection of a pointer on a calibrated scale, e.g., dial indicator.
1.8 Errors in Measurement

The error in measurement is the difference between the measured value and the true value of the measured dimension. Error may be absolute or relative.

Error in Measurement = Measured Value − True Value

The actual or true value is the theoretical size of a dimension, free from any error of measurement, which helps to examine the errors in a measurement system that lead to uncertainties. Generally, errors in measurement are classified into two types: one, which should not occur and can be eliminated by careful work and attention; and the other, which is inherent in the measuring process or system. Therefore, errors are either controllable or random in occurrence.
Absolute Error
It is divided into two types:
Introduction to Metrology 11
True Absolute Error It is defined as the algebraic difference between the result of measurement
and the conventional true value of the quantity measured.
Apparent Absolute Error It is defined as the algebraic difference between the arithmetic mean and
one of the results of measurement when a series of measurements are made.
Absolute Error (EA)
Absolute Error EA = Actual Value − Approximate Value
If the actual value = x and the approximate value = x + dx, then
Absolute Error EA = dx
Relative Error
It is the quotient of the absolute error and the value of comparison (may be true value or the arithmetic
mean of a series of measurements) used for calculation of the absolute error.
It is an error with respect to the actual value.
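The two definitions above translate directly into a short sketch; the dimensions used are illustrative only:

```python
def absolute_error(measured, true_value):
    """Error in measurement = measured value - true value."""
    return measured - true_value

def relative_error(measured, true_value):
    """Quotient of the absolute error and the value of comparison
    (here taken as the true value)."""
    return absolute_error(measured, true_value) / true_value

# Illustrative: a 25.00 mm dimension read as 25.02 mm.
ea = absolute_error(25.02, 25.00)   # ~0.02 mm
er = relative_error(25.02, 25.00)   # ~0.0008, i.e., 0.08 %
```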
Static Error
These errors are the result of the physical nature of the various components of a measuring system, i.e., intrinsic imperfections or limitations of the apparatus/instrument. Static error may occur due to characteristic errors, reading errors or environmental errors, as environmental effects and other external factors influence the operating capabilities of an instrument or inspection procedure. This error can be reduced or eliminated by employing relatively simple techniques.
a. Reading Error These types of errors apply exclusively to instruments. These errors may be the
result of parallax, optical resolution/readability, and interpolation.
Parallax error creeps in when the line of sight is not perpendicular to the measuring scale. The mag-
nitude of parallax error increases if the measuring scale is not made flush to the component. This may
be one of the common causes of error. It occurs when either the scale and pointer of an instrument
are not in the same plane or the line of vision is not in line of the measuring scale.
In Fig. 1.1, let Y be the distance between the pointer and the eye of the observer, X be the separation distance of the scale and the pointer, and θ be the angle between the line of sight and the normal to the scale.

Now, (PA)/(NE) = X/(X − Y)

and the error will be

(PA) = {X/(X − Y)} {(X − Y) tan θ}

Error = X tan θ

Generally, θ is very small.

∴ tan θ ≈ θ and E = Xθ

Fig. 1.1 Parallax error
For least error, X should be as small as possible. This error can be eliminated by placing a mirror behind the pointer, which helps to ensure a normal reading of the scale.
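The small-angle result E = Xθ can be tried out numerically; the separation and viewing angle below are illustrative assumptions, not values from the text:

```python
import math

def parallax_error(x_separation, theta_rad):
    """Parallax error E = X * tan(theta); for small theta, E is approximately X * theta."""
    return x_separation * math.tan(theta_rad)

# Illustrative: a pointer standing 2 mm off the scale, viewed 2 degrees off-normal.
e = parallax_error(2.0, math.radians(2.0))   # ~0.07 mm
# Halving X halves the error, which is why pointer-to-scale separation is minimized.
```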
b. Alignment Error This occurs if the instrument is not correctly aligned with the direction of the desired measurement. In Fig. 1.2 (a), the dimension D is being measured with a dial indicator. But the dial-indicator plunger is not held vertical and makes an angle θ with the line of measurement. This introduces a misalignment error into the measurement, of value equal to D(1 − cos θ). To avoid alignment error, Abbe’s alignment principle is to be followed. It states that the axis or line of measurement should coincide with the axis of the measuring instrument or the line of the
measuring scale.
Now consider Fig. 1.2 (b). While measuring the length of a workpiece, the measuring scale is inclined
to the true line of dimension being measured and there will be an error in the measurement. The length L
measured will be more than the true length, which will be equal to L cos θ. This error is called cosine
error. In many cases the angle θ is very small and the error will be negligible.
Fig. 1.2 Alignment error
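Both cases in Fig. 1.2 reduce to the same factor (1 − cos θ), which the following sketch evaluates; the 100 mm length and 1° inclination are illustrative assumptions:

```python
import math

def alignment_error(d, theta_rad):
    """Misalignment error of a tilted dial-indicator plunger: D * (1 - cos theta)."""
    return d * (1 - math.cos(theta_rad))

def cosine_error(l_measured, theta_rad):
    """Cosine error: an inclined-scale reading L overstates the true length
    L * cos(theta); the error is L - L * cos(theta)."""
    return l_measured * (1 - math.cos(theta_rad))

# Illustrative: a 100 mm reading taken with the scale inclined by 1 degree.
err = cosine_error(100.0, math.radians(1.0))   # ~0.015 mm
```

As the text notes, for small θ this error is often negligible: it grows with the square of the angle, not linearly.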
c. Characteristic Error It is the deviation of the output of the measuring system from the theo-
retical predicted performance or from the nominal performance specifications. Linearity, repeatability,
hysteresis and resolution error are the examples of characteristic error.
d. Environmental Error These are the errors arising from the effect of the surrounding tempera-
ture, pressure and humidity on the measuring system. Magnetic and electric fields, nuclear radiations,
vibration or shocks may also lead to errors. Environmental error can be controlled by controlling the
atmospheric factors.
Loading Error The part to be measured is located on the surface table (datum for comparison
with standards). If the datum surface is not flat or if foreign matter like dirt, chips, etc., get entrapped
between the datum and workpiece then an error will be introduced while taking readings, as shown
in Fig. 1.3.
Also, poor contact between the working gauge or the instrument and the workpiece causes an error, as shown in Fig. 1.4. To avoid such an error, an instrument with a wide area of contact should not be used while measuring irregular or curved surfaces, and the correct contact pressure must be applied.

Fig. 1.3 Instrument surface displacement Fig. 1.4 Error due to poor contact

Therefore,
instrument loading error is the difference between the value of the measurand before and after the
measuring system is connected or contacted for measurement.
Dynamic Error It is caused by time variation in the measurand. It is the result of incapability of
the system to respond reliably to time-varying measurement. Inertia, damping, friction or other physical
constraints in sensing or readout or the display system are the main causes of dynamic errors.
Analysis of accumulation of error by the statistical method categorizes errors as controllable and
random errors.
Controllable Error
These errors are controllable in both magnitude and sense. They are regularly repetitive in nature, are of similar form, and can be reduced effectively after systematic analysis. These errors are also called systematic errors.
Controllable errors include the following:
a. Calibration Error These are caused due to the variation in the calibrated scale from its normal
indicating value. The length standard, such as the slip gauge, will vary from the nominal value by a small
amount. This will cause a calibration error of constant magnitude.
b. Stylus Pressure Error Too small or too large a pressure applied on the workpiece while measuring causes stylus pressure error, which produces an appreciable deformation of the stylus and the workpiece.
c. Avoidable Error These errors occur due to parallax, non-alignment of workpiece centres, incor-
rect location of measuring instruments for temporary storage, and misalignment of the centre line of
a workpiece.
Random Error Random errors are accidental, non-consistent in nature and as they occur ran-
domly, they cannot be eliminated since no definite cause can be located. It is difficult to eliminate such
errors that vary in an unpredictable manner. Small variations in the position of setting standards and the workpiece, slight displacement of lever joints in instruments, transient fluctuations in friction in measuring instruments and pointer-type displays, or in reading engraved scale positions, are the likely sources of this type of error.
1.9 Units of Measurement

On 23 September 1999, the Mars Climate Orbiter was lost during an orbit-injection maneuver when the spacecraft crashed onto the surface of Mars. The principal cause of the mishap was traced to a thruster calibration table in which British units were used instead of metric units. The software for celestial navigation at the Jet Propulsion Laboratory expected the thruster impulse data to be expressed in newton seconds, but Lockheed Martin Astronautics in Denver, which built the orbiter, provided the values in pound-force seconds, causing the impulse to be interpreted as roughly one-fourth of its actual value. This mishap underlines the importance of using a common system of units; a historical perspective on how such a system evolved is therefore useful for the further study of metrology.
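The "one-fourth" figure follows directly from the conversion factor between the two impulse units, which can be sketched as:

```python
# One pound-force in newtons (exact, from the definitions of the pound and g0).
LBF_S_TO_N_S = 4.4482216152605

def impulse_in_newton_seconds(value_lbf_s):
    """Convert thruster impulse data from pound-force seconds to newton seconds."""
    return value_lbf_s * LBF_S_TO_N_S

# A figure recorded in lbf*s but read as N*s is understated by a factor of
# about 4.45, i.e., interpreted as roughly one-fourth of its actual value.
factor = impulse_in_newton_seconds(1.0)
```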
The metric system was one of the many reforms introduced in France during the period between
1789 and 1799, known for the French Revolution. The need for reform in the system of weights and
measures, as in other affairs, had long been recognized and this aspect of applied science affected the
course of human activity directly and universally.
Prior to the metric system, there had existed in France a disorderly variety of measures, such as for
length, volume, or mass, that were arbitrary in size and varied from one town to the next. In Paris, the
unit of length was the Pied de Roi and the unit of mass was the Livre poids de marc. However, all attempts
to impose the Parisian units on the whole country were fruitless, as the guilds and nobles who benefited
from the confusion opposed this move.
The advocates of reform sought to guarantee the uniformity and permanence of the units of measure by taking them from properties derived from nature. In 1670, the abbé Gabriel Mouton of Lyons proposed a unit of length equal to one minute of an arc on the earth’s surface, which he divided into
decimal fractions. He suggested a pendulum of specified period as a means of preserving one of these
submultiples.
The conditions required for the creation of a new measurement system were made possible by
the French Revolution. In 1787, King Louis XVI convened the Estates General, an institution that
had last met in 1614, for the purpose of imposing new taxes to avert a state of bankruptcy. As
they assembled in 1789, the commoners, representing the Third Estate, declared themselves to be
the only legitimate representatives of the people, and succeeded in having the clergy and nobility
join them in the formation of the National Assembly. Over the next two years, they drafted a new
constitution.
In 1790, Charles-Maurice de Talleyrand, Bishop of Autun, presented to the National Assembly a
plan to devise a system of units based on the length of a pendulum beating seconds at latitude 45°. The
new order was envisioned as an ‘enterprise whose result should belong some day to the whole world.’
He sought, but failed to obtain, the collaboration of England, which was concurrently considering a
similar proposal by Sir John Riggs Miller.
The two founding principles were that the system would be based on scientific observation and
that it would be a decimal system. A distinguished commission of the French Academy of Sciences,
including J L Lagrange and Pierre Simon Laplace, considered redefining the unit of length. Rejecting
the seconds pendulum as insufficiently precise, the commission defined the unit, given the name metre
in 1793, as one ten-millionth of a quarter of the earth’s meridian passing through Paris. The proposal
was accepted by the National Assembly on 26 March, 1791.
The definition of the metre reflected the extensive interest of French scientists in the shape of
the earth. Surveys in Lapland by Maupertuis in 1736 and in France by LaCaille in 1740 had refined
the value of the earth’s radius and established definitively that the shape of the earth was oblate. To
determine the length of the metre, a new survey was conducted by the astronomers Jean Baptiste
Delambre and P F A Mechain between Dunkirk in France on the English Channel, and Barcelona,
Spain, on the coast of the Mediterranean Sea. This work was begun in 1792 and completed in 1798,
with both the astronomers enduring the hardships of the ‘reign of terror’ and the turmoil of revo-
lution. The quadrant of the earth was found to be 10 001 957 metres instead of exactly 10 000 000
metres as originally proposed. The principal source of error was the assumed value of the earth’s oblateness used for the correction, which takes into account the earth’s flattening at the poles.
The unit of volume, the pinte (later renamed the litre), was defined as the volume of a cube having a
side equal to one-tenth of a metre. The unit of mass, the grave (later renamed the kilogram), was defined
as the mass of one pinte of distilled water at the temperature of melting ice. In addition, the centigrade scale for temperature was adopted, with fixed points at 0°C and 100°C representing the freezing and boiling points of water. This scale has since been renamed the Celsius scale.
The work to determine the unit of mass was begun by Lavoisier and Hauy. They discovered that the maximum density of water occurs at 4°C and not at 0°C as had been supposed. So the definition of the kilogram was amended to specify the temperature of maximum density. As a result, the mass of 1000 cm³ of pure water at 4°C is 0.999972 kg; equivalently, 1 kilogram of such water occupies 1000.028 cm³ instead of exactly 1000 cm³.
The metric system was officially adopted on 7 April, 1795. The government issued a decree (Loi du
18 germinal, an III) formalizing the adoption of the definitions and terms that are in use today. A brass
bar was made by Lenoir to represent the provisional metre, obtained from the survey of LaCaille, and
a provisional standard for the kilogram was derived.
In 1799, permanent standards for the metre and kilogram, made from platinum, were constructed
based on the new survey by Delambre and Mechain. The full length of the metre bar represented the
unit. These standards were deposited in the Archives of the Republic. They became official by the act
of 10 December, 1799.
The importance of a uniform system of weights and measures was recognized in the United States,
as in France. Article I, Section 8, of the US Constitution provides that the Congress shall have the
power “to coin money ... and fix the standard of weights and measures.” However, although the pro-
gressive concept of decimal coinage was introduced, the early American settlers both retained and cul-
tivated the customs and tools of their British heritage, including the measures of length and mass.
A series of international expositions in the middle of the nineteenth century enabled the French
government to promote the metric system for world use. Between 1870 and 1872, with an interrup-
tion caused by the Franco-Prussian War, an international meeting of scientists was held to consider the
design of new international metric standards that would replace the metre and kilogram of the French
Archives. A Diplomatic Conference on the Metre was convened to ratify the scientific decisions. Formal
international approval was secured by the Treaty of the Metre, signed in Paris by the delegates of 17
countries, including the United States, on 20 May, 1875.
The treaty established the International Bureau of Weights and Measures (BIPM). It also provided
for the creation of an International Committee for Weights and Measures (CIPM) to run the Bureau
and the General Conference on Weights and Measures (CGPM) as the formal diplomatic body that
would ratify changes as the need arose. The French government offered the Pavillon de Breteuil, once
a small royal palace, to serve as headquarters for the Bureau in Sevres, France, near Paris. The grounds
of the estate form a tiny international enclave within the French territory.
A total of 30 metre bars and 43 kilogram cylinders were manufactured from a single ingot of an
alloy of 90 per cent platinum and 10 per cent iridium by Johnson, Mathey and Company of London.
The original metre and kilogram of the French Archives in their existing states were taken as the points
of departure. The standards were intercompared at the International Bureau between 1886 and 1889.
One metre bar and one kilogram cylinder were selected as the international prototypes. The remaining
standards were distributed to the signatories. The First General Conference on Weights and Measures
approved the work in 1889.
The United States received metre bars 21 and 27 and kilogram cylinders 4 and 20. On 2 January, 1890
the seals to the shipping cases for metre 27 and kilogram 20 were broken in an official ceremony at the
White House with President Benjamin Harrison presiding over the meeting. The standards were deposited
in the Office of Weights and Measures of the US Coast and Geodetic Survey.
The US customary units were tied to the British and French units by a variety of indirect comparisons.
The troy weight was the standard for minting of coins. The Congress could be ambivalent about
non-uniformity in standards for trade, but it could not tolerate non-uniformity in its standards for
money. Therefore, in 1827 the ambassador to England and former Secretary of the Treasury, Albert Gallatin, secured a brass copy of the British troy pound of 1758. This standard was kept in the Philadelphia mint, and identical copies were made and distributed to other mints. The troy pound of
the Philadelphia mint was virtually the primary standard for commercial transactions until 1857 and
remained the standard for coins until 1911.
The semi-official standards used in commerce for a quarter century may be attributed to Ferdinand
Hassler, who was appointed superintendent of the newly organized Coast Survey in 1807. In 1832, the
Treasury Department directed Hassler to construct and distribute to the states the standards of length,
mass, and volume, and balances by which masses might be compared. As the standard of length,
Hassler adopted the Troughton scale, an 82-inch brass bar made by Troughton of London for the
Coast Survey, that Hassler had brought back from Europe in 1815. The distance between the 27th and
63rd engraved lines on a silver inlay scale down the centre of the bar was taken to be equal to the British
yard. The system of weights and measures in Great Britain had been in use since the reign of Queen
Elizabeth I. Following a reform begun in 1824, the imperial standard avoirdupois pound was made the
standard of mass in 1844, and the imperial standard yard was adopted in 1855. The imperial standards
were made legal by an Act of Parliament in 1855 and are preserved in the Board of Trade in London.
The United States received copies of the British imperial pound and yard, which became the official
US standards from 1857 until 1893.
In 1893, under a directive from Thomas C Mendenhall, Superintendent of Standard Weights and
Measures of the Coast and Geodetic Survey, the US customary units were redefined in terms of the
metric units. The primary standards of length and mass adopted were the prototype metre No. 27
and the prototype kilogram No. 20 that the United States had received in 1889 as a signatory to
the Treaty of the Metre. The yard was defined as 3600/3937 of a metre and the avoirdupois pound-
mass was defined as 0.4535924277 kilogram. The conversion for mass was based on a comparison
performed between the British imperial standard pound and the international prototype kilogram
in 1883. These definitions were used by the National Bureau of Standards (now the National
Institute of Standards and Technology) from its founding in 1901 until 1959. On 1 July, 1959, the
definitions were fixed by international agreement among the English-speaking countries to be 1 yard
= 0.9144 metre and 1 pound-mass = 0.45359237 kilogram exactly. The definition of the yard is
equivalent to the relations 1 foot = 0.3048 metre and 1 inch = 2.54 centimetres exactly.
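The two definitions of the yard quoted above differ only minutely, which a short calculation confirms:

```python
# Definitions of the yard in metres, both from the text above.
YARD_1893 = 3600 / 3937   # Mendenhall Order definition, 1893
YARD_1959 = 0.9144        # international agreement, 1959 (exact)

FOOT = YARD_1959 / 3      # 0.3048 m exactly
INCH = FOOT / 12          # 0.0254 m, i.e., 2.54 cm exactly

# The 1959 redefinition shortened the yard by about 2 parts per million.
shift_ppm = (YARD_1893 - YARD_1959) / YARD_1959 * 1e6
```

A shift of two parts per million is invisible in everyday trade but matters in geodetic surveying, which is why the older "survey foot" persisted in US land measurement.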
A fundamental principle was that the system should be coherent. That is, the system is founded upon
certain base units for length, mass, and time, and derived units are obtained as products or quotients
without requiring numerical factors. The metre, gram, and mean solar second were selected as base
units. In 1873, a second committee recommended a centimetre-gram-second (CGS) system of units
because in this system, the density of water is unity.
In 1889, the international prototype kilogram was adopted as the standard for mass. The prototype kilogram is a platinum–iridium cylinder, with height equal to its diameter of 3.9 cm and slightly rounded edges. For a cylinder, these proportions present the smallest surface-area-to-volume ratio, so as to minimize
wear. The standard is carefully preserved in a vault at the International Bureau of Weights and Measures
and is used only on rare occasions. It remains the standard till today. The kilogram is the only unit still
defined in terms of an arbitrary artifact instead of a natural phenomenon.
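The claim that equal height and diameter minimize the surface area of a cylinder of fixed volume can be verified numerically; the volume used in this sketch is arbitrary and purely illustrative:

```python
import math

def cylinder_area(r, volume):
    """Total surface area of a closed cylinder of radius r whose height is
    fixed by the required volume: S = 2*pi*r^2 + 2*pi*r*h."""
    h = volume / (math.pi * r ** 2)
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

# Scan radii for an arbitrary fixed volume and locate the minimum-area shape.
V = 1000.0
best_r = min((i / 100 for i in range(100, 2000)), key=lambda r: cylinder_area(r, V))
h = V / (math.pi * best_r ** 2)
# At the minimum, the height is approximately equal to the diameter (2 * best_r).
```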
Historically, the unit of time, the second, was defined in terms of the period of rotation of the earth
on its axis as 1/86 400 of a mean solar day. Meaning ‘second minute’, it was first applied to timekeep-
ing in about the seventeenth century when pendulum clocks were invented that could maintain time to
this precision.
By the twentieth century, astronomers realized that the rotation of the earth is not constant.
Due to gravitational tidal forces produced by the moon on the shallow seas, the length of the day
increases by about 1.4 milliseconds per century. The effect can be measured by comparing the
computed paths of ancient solar eclipses on the assumption of uniform rotation with the recorded
locations on earth where they were actually observed. Consequently, in 1956 the second was rede-
fined in terms of the period of revolution of the earth about the sun for the epoch 1900, as rep-
resented by the Tables of the Sun computed by the astronomer Simon Newcomb of the US Naval
Observatory in Washington, DC. The operational significance of this definition was to adopt the
linear coefficient in Newcomb’s formula for the mean longitude of the sun to determine the unit
of time.
The International System of Units (SI) has become the fundamental basis of scientific measurement
worldwide. The United States Congress has passed legislation to encourage use of the metric system,
including the Metric Conversion Act of 1975 and the Omnibus Trade and Competitiveness Act of
1988. The space programme should have been the leader in the use of metric units in the United States
and would have been an excellent model for education, had such an initiative been taken. Burt Edelson,
Director of the Institute for Advanced Space Research at George Washington University and former
Associate Administrator of NASA, recalls that “in the mid-‘80s, NASA made a valiant attempt to
convert to the metric system” in the initial phase of the international space station programme. Eco-
nomic pressure to compete in an international environment is a strong motive for contractors to use
metric units. Barry Taylor, head of the Fundamental Constants Data Centre of the National Institute
of Standards and Technology and US representative to the Consultative Committee on Units of the
CIPM, expects that the greatest stimulus for metrication will come from industries with global markets.
“Manufacturers are moving steadily ahead on SI for foreign markets,” he says. Indeed, most satellite-
design technical literature does use metric units, including metres for length, kilograms for mass, and
newtons for force, because of the influence of international partners, suppliers, and customers.
Table 1.1 SI base units

Quantity                     Unit Name    Symbol
Length                       metre        m
Mass                         kilogram     kg
Time                         second       s
Electric current             ampere       A
Thermodynamic temperature    kelvin       K
Amount of substance          mole         mol
Luminous intensity           candela      cd
Table 1.2 SI derived units

Quantity                   Special Name     Symbol   Equivalent
Plane angle                radian           rad      1
Solid angle                steradian        sr       1
Angular velocity                                     rad/s
Angular acceleration                                 rad/s2
Frequency                  hertz            Hz       s-1
Speed, velocity                                      m/s
Acceleration                                         m/s2
Force                      newton           N        kg m/s2
Pressure, stress           pascal           Pa       N/m2
Energy, work, heat         joule            J        kg m2/s2, N m
Power                      watt             W        kg m2/s3, J/s
Power flux density                                   W/m2
Linear momentum, impulse                             kg m/s, N s
Electric charge            coulomb          C        A s
Celsius temperature        degree Celsius   °C       K
1.10.3 SI Prefixes
Table 1.3 SI prefixes used
The SI system is now being adopted throughout the world; one of its notable features is the newton, the unit of force, which is independent of the earth’s gravitation.
Review Questions
1. Define the term metrology and also discuss the types of metrology.
2. Differentiate between accuracy and precision.
3. List down the methods of measurement and explain any three of them in detail.
4. What are the different bases used for selection of measuring instruments?
5. State the different types of errors and explain relative error and parallax error.
6. Differentiate between systematic and random errors.
7. Explain the term cosine error with an example.
8. Write a short note on static error.
9. State the main difference between indicating and recording instruments.
10. Discuss the need for precision measurements in an engineering industry.
11. A cylinder of 80-mm diameter was placed between the micrometer anvils. Due to inaccurate placement,
the angle between the micrometer and cylinder axis was found to be 1 minute. Calculate the amount
of error in the measured diameter of the above cylinder if the micrometer anvil diameter is 6 mm. Use
suitable approximations.
12. Explain with a neat sketch the effect of poor contact, impression, expansion of workpiece and
distortion of workpiece on accuracies of measurement.
13. A test indicator is used to check the concentricity of a shaft, but its stylus is so set that its movement makes an angle of 35° with the normal to the shaft. If the total indicator reading is 0.02 mm, calculate the true eccentricity.
14. What do you understand by the terms ‘readability’ and ‘range’, ‘repeatability’ and ‘reproducibility’,
and ‘drift’ and ‘error’?
2 Measurement Standards
2.1 INTRODUCTION
In ancient Egypt, around 3000 BC, the death penalty was inflicted on all those who forgot or
neglected their duty to calibrate the standard unit of length at each full-moon night. Such was the peril
courted by royal architects responsible for building the temples and pyramids of the Pharaohs. The first
royal cubit was defined as the length of the forearm (from the elbow to the tip of the extended middle
finger) of the ruling Pharaoh, plus the breadth of his hand.
The original measurement was transferred to and carved in black granite. The workers at the building
sites were given copies in granite or wood and it was the responsibility of the architects to maintain them.
Even though we have come a long way from this starting point, both in law-making and in time, people
have placed great emphasis on correct measurements ever since.
In 1528, the French physician J Fernel proposed the distance between Paris and Amiens as a general length of reference. In 1661, the British architect Sir Christopher Wren suggested that the reference unit should be the length of a pendulum with a period of half a second, and this was also referred to as a standard.
In 1799 in Paris, the Decimal Metric System was created by the deposition of two platinum stan-
dards representing the metre and the kilogram—the start of the present International System of Units
(SI system). These two standards were made of metal alloys, and hence are referred to as material standards.
The need for establishing standards of length arose primarily for determining agricultural land areas
and for erection of buildings and monuments.
A measurement standard, or etalon, is a material measure, measuring instrument, reference material
or measuring system intended to define, realize, conserve or reproduce a unit or one or more values
of a quantity to serve as a reference. Any system of measurement must be related to known standards
so as to be of commercial use. The dictionary meaning of standard is ‘something that is set up and established by authorities as a rule for the measurement of quantity, weight, value, quality, etc.’
Length is of fundamental importance as even angles are measured by a combination of linear
measurements. All measurements of length are fundamentally done in comparison with standards of
length. In the past, there have been large numbers of length standards, such as cubit, palm and the
digit. The Egyptian unit, known as cubit, was equal to the length of the forearm, from the elbow to the
tip of the middle finger of the ruling Pharaoh, plus the breadth of his hand. The cubit was of various
24 Metrology and Measurement
lengths ranging from 450 mm to 670 mm. Even in the 18th century, a map of Australia showed miles of
three different lengths. The first accurate standard, the Imperial Standard Yard, was developed in England in 1855; it was followed by the International Prototype Metre, made in France in 1872.
These developments are summarized in Table 2.1.
Table 2.1 Interesting facts of development of measurement standards through the ages
To avoid confusion in the use of the standards of length, an important decision towards a definite length standard, the metre (from the Greek metron, meaning 'measure'), was taken in 1790 in France. In the nineteenth century, the rapid advancement made in engineering was due to the availability of improved materials and of more accurate measuring instruments.
After realizing the importance and advantage of the metric system, most of the countries in the
world have adopted the metre as the fundamental unit of linear measurement. In recent years, the wavelength of monochromatic light, which does not change its characteristics under any environmental condition, has been used as the invariable fundamental unit of measurement instead of the previously developed material standards such as the metre and the yard. A metre was defined as 1650763.73 wavelengths, in vacuum, of the orange radiation of krypton-86. The yard is defined as 0.9144 metre, which is equivalent to 1509458.35 wavelengths of the same radiation. Accordingly, three types of measurement standards are discussed below.
i. Line standard
ii. End standard
iii. Wavelength standard
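The wavelength arithmetic quoted above can be checked directly. The short Python sketch below is illustrative only, using the figures given in the text; it recovers the krypton-86 wavelength implied by the 1960 definition and the yard-to-wavelength conversion.

```python
# Check the arithmetic of the krypton-86 definition of the metre.
METRE_IN_WAVELENGTHS = 1650763.73   # 1 m = 1 650 763.73 wavelengths
YARD_IN_METRES = 0.9144             # exact, by definition

# Wavelength of the orange krypton-86 radiation in vacuum (metres).
wavelength_m = 1.0 / METRE_IN_WAVELENGTHS
print(f"krypton-86 wavelength ~ {wavelength_m * 1e9:.2f} nm")   # ~605.78 nm

# The yard expressed in the same wavelengths.
yard_in_wavelengths = YARD_IN_METRES * METRE_IN_WAVELENGTHS
print(f"1 yard ~ {yard_in_wavelengths:.2f} wavelengths")        # ~1509458.35
```

The second figure matches the 1509458.35 wavelengths quoted for the yard above, confirming that the two definitions are mutually consistent.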
a. The Imperial Standard Yard This standard served its purpose from 1855 to 1960. It is a bronze bar (82% copper, 13% tin, 5% zinc) of one-inch square cross section and 38 inches in length. The bar has two ½-inch diameter × ½-inch deep holes, each fitted with a 1/10-inch diameter gold plug. The highly polished top surfaces of these plugs carry three transverse and two longitudinal engraved lines lying on the neutral axis of the bronze bar, as shown in Fig. 2.1.
The yard is defined as the distance between the two central transverse lines on the plugs when the temperature of the bar is constant at 62°F and the bar is supported on rollers in a specified manner to prevent flexure, the distance being taken at the points midway between the two longitudinal lines. Secondary standards were also made as copies of this standard yard for occasional comparison. To protect the gold plugs from accidental damage, they are located at the neutral axis, as the neutral axis remains unaffected even if the bar bends.
Fig. 2.1 The Imperial Standard Yard — a 38″ long bronze bar of 1″ square cross section, with gauge lines engraved on gold inserts 36″ apart at the neutral axis
b. The International Prototype Metre
Fig. 2.2(a) International standard prototype metre — a 1000-mm platinum–iridium bar of Tresca (web) cross section, graduated on the neutral plane of the bar
According to this standard, the length of one metre is defined as the straight-line distance, at 0°C, between the centre lines engraved on a platinum–iridium alloy bar of 1000-mm total length and web (Tresca) cross section.
Figure 2.2(b) ( Plate 1) shows the actual International Standard Prototype Metre and Historical
Standard platinum–iridium metre bar. The 1889 definition of the metre, based upon the international
prototype of platinum–iridium, was replaced by the 11th CGPM (Conférence Générale des Poids et
Mesures, 1960) using a definition based upon the wavelength of krypton-86 radiation. This definition
was adopted in order to improve the accuracy with which the metre may be realized. This was replaced
in 1983 by the 17th CGPM as per Resolution 1.
The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792
458 of a second.
The effect of this definition is to fix the speed of light at exactly 299 792 458 m·s–1. The original
international prototype of the metre, which was sanctioned by the 1st CGPM in 1889 (CR, 34–38),
is still kept at the BIPM under conditions specified in 1889. The metre is realized on the primary level
by the wavelength from an iodine-stabilized helium–neon laser. On sub-levels, material measures like
gauge blocks are used, and traceability is ensured by using optical interferometry to determine the
length of the gauge blocks with reference to the above-mentioned laser light wavelength.
Line standards such as the graduated scale are of limited accuracy, about ±0.2 mm. For higher accuracy, the scale may be read with a magnifying glass or a microscope, which makes measurement quick and easy. Scale markings are not subjected to wear with periodic use, but parallax error may be introduced while measuring. Examples of line standards include the metre, the yard and the steel rule (scale).
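Gauge-block calibration by interferometry, mentioned above, rests on expressing the block length as a whole-plus-fractional number of half wavelengths, L = (N + f)·λ/2. The Python sketch below shows the arithmetic only; the wavelength used is the nominal He–Ne red line, an assumed round figure, whereas in practice the certified vacuum wavelength of the iodine-stabilized laser would be used.

```python
# Assumed nominal He-Ne laser wavelength in nm (not a certified value).
WAVELENGTH_NM = 632.991

def length_from_fringes(whole_order: int, fraction: float) -> float:
    """Gauge-block length in mm from the measured interference order N + f:
    L = (N + f) * lambda / 2, converted from nm to mm."""
    return (whole_order + fraction) * WAVELENGTH_NM / 2 * 1e-6

# A nominal 10-mm block corresponds to an order of about 31596.03
# half-wavelengths at this wavelength:
print(round(length_from_fringes(31596, 0.03), 5))   # ~10.0 mm
```

The whole order N is normally predicted from the nominal size, and only the fraction f is measured; this is why the method needs a well-known wavelength traceable to the laser standard.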
a. End Bar End bars, made of steel with a cylindrical cross section of 22.2-mm diameter, are hardened at the ends, with the faces lapped, and are available in sets of various lengths. Parallelism of the end faces is within a few tenths of a micrometre. Reference- and calibration-grade end bars have plane end faces, but inspection- and workshop-grade end bars can be joined together by studs screwed into tapped holes in their ends. Although, from time to time, various types of end bars have been constructed, some of them with flat or spherical faces, flat and parallel-faced end bars are firmly established as the most practical end standard for measurement. To retain their accuracy when they are used in a horizontal plane, it is essential to support them in a manner that keeps the end faces parallel.
In a typical commercial set, end bars are made from high-carbon chromium steel with the faces hardened to 64 HRC (800 HV), and have a round section of 30 mm for greater stability. Both ends are threaded, recessed and precision lapped to meet the requirements of finish, flatness, parallelism and gauge length.
These are available up to 500 mm, in grades 0, 1 and 2, in an 8-piece set. Length bars can be combined by using an M6 stud. End bars are usually provided in sets of 9 to 12 pieces, in step sizes of 25 mm, up to a length of 1 m. (See Fig. 2.3, Plate 1.)
b. Slip Gauges Slip gauges are practical end standards and can be used for linear measurements in many ways. They were invented by the Swedish engineer C E Johansson. Slip gauges are rectangular blocks of hardened and stabilized high-grade cast steel or of the ceramic compound zirconium oxide (ZrO2), having thermal expansion coefficients of 11.5 × 10−6 K−1 and 9.5 × 10−6 K−1 respectively, and are available with a cross section about 9 mm wide and 30 to 35 mm long. The length of a slip gauge is strictly the dimension which it measures—in small slip gauges it is the shortest dimension, and in the larger slip gauges, the longest. After being manufactured to the required size, the blocks are hardened to resist wear and are allowed to stabilize to release internal stresses, which prevents subsequent variation in size and shape. (See Fig. 2.4, Plate 1.)
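Because slip gauges have these non-zero expansion coefficients, their certified sizes hold only at the standard reference temperature of 20°C; the length change for a temperature offset is ΔL = α·L·ΔT. A minimal Python sketch, using the coefficients quoted above:

```python
# Expansion coefficients quoted in the text (per kelvin).
ALPHA = {"steel": 11.5e-6, "ZrO2": 9.5e-6}

def expansion_um(nominal_mm: float, delta_t_k: float, material: str) -> float:
    """Change in length (micrometres) of a gauge of the given nominal size
    for a temperature offset delta_t_k from the 20 C reference: alpha*L*dT."""
    return ALPHA[material] * nominal_mm * delta_t_k * 1000.0   # mm -> um

# A 100-mm steel gauge used 5 K above the reference temperature:
print(round(expansion_um(100.0, 5.0, "steel"), 2))   # 5.75 um
```

An error of several micrometres on a 100-mm gauge illustrates why precision measurement rooms are held close to 20°C.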
Slip gauges are also made from a select grade of carbide with a hardness of 1500 HV; these are checked for flatness and parallelism at every stage of manufacture and calibrated in NABL-accredited laboratories. Slip gauges are available in five grades of accuracy, as discussed in Table 2.2.
Slip gauge sets are made according to standards such as IS 2984-1981 (India), BS 4311:1968 (UK, metric), BS 888:1950 (UK, imperial), DIN 861-1988 (Germany) and JIS B 7506-1978 (Japan). According to accuracy, slip gauges are classified as shown in Table 2.3.
After hardening, the blocks are carefully finished on the measuring faces to the required fine degree of surface finish, flatness and accuracy. The standard distance is maintained by the mirror-like surface finish obtained by the superfinishing process of lapping. IS: 2984–1966 specifies three grades of slip gauges:
Grade 0 used for laboratories and standard rooms for checking subsequent grade gauges
Grade I having lower accuracy than Grade 0 and used in the inspection department
Grade II to be used in the workshop during actual production of components.
Measuring faces of slip gauges are wrung against each other so that the gauges stick together; this is known as wringing of slip gauges, as shown in Fig. 2.5. Considerable force is required to separate wrung slip gauges. The effect is caused partly by molecular attraction and partly by atmospheric pressure. To wring two slip gauges together, they are first cleaned and placed together at right angles, and then rotated through 90° while being pressed together.
According to IS: 2984–1966, the size of a slip gauge is the distance L between the plane mea-
suring faces, being constituted by the surface of an auxiliary body with one of the slip-gauge faces
in wrung position and the other exposed. Slip gauges are supplied as a set comprising rectangular steel blocks of different dimensions, with opposite faces flat and parallel to a high degree of accuracy.
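On the shop floor, a combination of slip gauges for a given dimension is worked out by clearing the finest decimal place first, then the next, and finishing with the coarse gauges. The Python sketch below automates that procedure; the gauge ranges assumed here (1.001–1.009 mm, 1.01–1.49 mm, 0.5-mm steps, and 25-mm steps) are illustrative of a metric set, since the exact make-up of a real set varies by standard.

```python
from decimal import Decimal

def build_stack(target_mm: str) -> list[Decimal]:
    """Choose slip gauges for a dimension by clearing the finest decimal
    place first: a 1.001-1.009 gauge for the thousandths, a 1.01-1.49
    gauge for the hundredths, then 0.5-step and 25-step gauges.
    Assumes the illustrative metric set described above and a target of
    a few millimetres or more with a 0.001-mm resolution."""
    r = Decimal(target_mm)
    stack = []
    thousandths = (r * 1000) % 10          # e.g. 41.125 -> 5
    if thousandths:
        g = 1 + thousandths / 1000         # a 1.001-1.009 gauge
        stack.append(g); r -= g
    hundredths = (r * 100) % 50            # residue the 0.5-mm steps can't reach
    if hundredths:
        g = 1 + hundredths / 100           # a 1.01-1.49 gauge
        stack.append(g); r -= g
    while r > Decimal("24.5"):             # coarse 25/50/75/100-mm gauges
        stack.append(Decimal(25)); r -= 25
    if r:
        stack.append(r)                    # one 0.5-24.5 mm gauge
    return stack

print(build_stack("41.125"))   # gauges 1.005, 1.12, 25 and 14
```

Using `Decimal` rather than floats keeps the arithmetic exact at the 0.001-mm level, mirroring the digit-by-digit manual procedure.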
Fig. 2.5 Wringing of slip gauges: (a) Parallel wringing of slip gauges (b) Cross wringing of slip
gauges (c) Wringing complete
Tables 2.4 to 2.7 list typical compositions of metric slip-gauge sets (for example, 88-, 81-, 46- and 41-piece sets), giving the range of sizes, the step size and the number of pieces in each range.
c. Wavelength Standard In 1952, the CIPM proposed defining the metre in terms of a wavelength of light, and established the Comité Consultatif pour la Définition du Mètre (the Consultative Committee for the Definition of the Metre) for this purpose.
In 1960, the CGPM (Conférence Générale des Poids et Mesures) adopted a definition of the metre in terms of the wavelength, in vacuum, of the radiation corresponding to a transition between specified energy levels of the krypton-86 atom. At the BIPM, measurement of linescales in terms of this wavelength replaced direct comparison of linescales with one another, and new equipment was installed to carry out such measurements by optical interferometry. The orange radiation of the isotope krypton-86, produced in a hot-cathode discharge lamp maintained at a temperature of 63 K, was selected to define the metre, which was then defined as equal to 1650763.73 wavelengths of this radiation.
1 metre = 1650763.73 wavelengths, and
1 yard = 0.9144 metre
= 0.9144 × 1650763.73 wavelengths
= 1509458.3 wavelengths
In 1975, as a result of measurements of the wavelength and frequency of laser radiation, the CGPM recommended a value for the speed of light in vacuum. In 1983, the CGPM redefined the metre as the length of the path travelled by light in vacuum during a specific fraction of a second, and invited the CIPM to draw up instructions for the practical realization of the new definition. The CIPM outlined general ways in which
lengths could be directly related to the newly defined metre. These included the wavelengths of five rec-
ommended laser radiations as well as those of spectral lamps. The wavelengths, frequencies and associ-
ated uncertainties were specified in the instructions for the practical realization of the definition. At the
BIPM, comparison of laser frequencies by beat-frequency techniques, begun in 1983, supplemented the measurement of linescales in terms of wavelengths of the same lasers. The metre is the length of the path
travelled by light in vacuum during a time interval of 1/299 792 458 of a second. In order to check the
accuracy of practical realizations of the metre based upon the new definition, a new round of international
comparisons of laser wavelengths by optical interferometry and frequency by beat-frequency techniques
was begun at the BIPM. These international comparisons comprised comparisons of individual compo-
nents of the laser, the absorption cells containing the atoms or molecules upon which the laser is stabilized
in particular, as well as comparisons of whole laser systems (optics, gas cells and electronics).
In the early days of stabilized laser systems, it was almost always necessary for lasers to be brought
to the BIPM for measurements to be made. This was not always convenient; so the BIPM developed
small, highly stable and accurate laser systems. As a result, the reference values maintained by the BIPM could be realized away from the BIPM itself. In these 'remote' comparisons, it became relatively easy for a number of regional laboratories to bring their lasers together for a joint comparison.
From the early inception of stabilized lasers, the BIPM offered member states of the Metre Conven-
tion the opportunity to compare their laser standards against reference systems. This service was based
on heterodyne beat-frequency measurements, largely concentrated on two types of stabilized lasers:
i. Iodine-stabilized He–Ne systems operating at wavelengths of 515 nm, 532 nm, 543 nm, 612 nm,
or (most commonly) 633 nm
ii. A methane-stabilized He–Ne laser operating at 3.39 µm
For the standard at 633-nm wavelength, three He–Ne/I2 laser set-ups have been built such that
their frequencies are locked to the transition of I2 molecules. The I2 cells, which are placed in the
He–Ne laser resonators, provide the interaction between the He–Ne laser beam and I2 molecules.
Absorption signals are detected by tuning the laser frequency around the energy transition of I2 molecules. By using an electronic servo system, these absorption signals of the I2 molecules are used to lock
the laser frequency to the energy transition of the I2 molecules with a stability of 1×10−13 in an average
time interval of 1000 s. In addition to its substantial programme related to the He–Ne stabilized lasers,
the BIPM also carried out a small research programme in the performance and metrological qualities
of the frequency-doubled Nd-YAG laser at 532 nm. This relatively high-power system turned out to
have excellent short-term stability and it is often used in a number of applications. The BIPM’s com-
parison programme therefore included Nd-YAG systems by heterodyne and, more recently, by absolute
frequency measurements.
For the standard at 532-nm wavelength, the frequencies of two Nd-YAG lasers are locked to energy transitions of I2 molecules. In the establishment of these standards, lasers with wavelengths of
532 nm and an output power of 50 mW are used. In the locking process of the laser frequency, the I2 cells are placed outside the resonator. At present, the frequencies of the two lasers are tuned
to the energy transition of I2 molecules and fluorescent signals are observed as the result of the inter-
action between the laser and the molecules in the cells. The frequencies of two Nd-YAG lasers are
changed over the range of the absorption spectrum of the I2 molecules by using a servo system, and the third derivative of the resonance absorption signal is obtained from the interaction of the iodine molecules with the laser beam. The CIPM-recommended value is 473 612 353 604 ± 10.0 kHz for He–Ne/I2
lasers using beat-frequency methods. The international comparison of the portable optical frequency
standard of He–Ne/CH4 (λ = 3.39 μm) with PTB was realized in Braunschweig between the dates of
15th and 30th December 2000. The absolute frequency value is measured as 88 376 181 000 253 ±23 Hz.
The 3.39-µm laser programme dealt with a well-characterized system that was a critical element in
the frequency chains used in the earlier measurements of the speed of light. They also have applica-
tions in infrared spectroscopy. The BIPM has, therefore, maintained a high-performance system and
participated in a number of comparisons with several NMIs. A similar facility was provided for 778 nm
Rb-stabilized systems, which were of interest to the telecommunications industry. Both programmes
are now drawing to a close in the light of the frequency-comb technique. With the introduction of
the new comb techniques allowing direct frequency measurements of optical laser frequencies, the
activity of heterodyne frequency comparisons between laser standards has been reduced. Wavelength standards, being non-material, are least affected by environmental conditions and remain practically unchanged, making it convenient to reproduce them with a great degree of accuracy.
2.4 SUBDIVISION OF STANDARDS
The International Prototype Metre cannot be used for every general-purpose application. The original international prototype of the metre, sanctioned by the first CGPM in 1889, is still kept at the BIPM under the conditions specified in 1889. Therefore, a practical hierarchy of working standards has been created, depending upon the accuracy required.
i. Primary standards
ii. Secondary standards
iii. Tertiary standards
iv. Working standards
1. Primary Standards To define a unit most precisely, only one material standard is preserved, under very carefully controlled conditions. Such a material standard is known as a primary standard. The International Prototype Metre is an example of a primary standard. It is used only for comparison with secondary standards and cannot be used for direct application.
2. Secondary Standards Secondary standards are made exactly like the primary standards in all aspects, including design, material and length. They are compared with primary standards at long intervals, and records of the deviations are kept. These standards are kept at a number of places in safe custody for occasional comparison with tertiary standards.
3. Tertiary Standards The primary and secondary standards are applicable only as ultimate controls. Tertiary standards are used for reference purposes in laboratories and workshops, and are in turn compared at intervals with working standards.
4. Working Standards Working standards, used for day-to-day measurement in laboratories and workshops, are derived from the fundamental standards. Standards are also classified as
i. Reference standards
ii. Calibration standards
2.5 CALIBRATION
a. In-house Calibration Lab These labs are set up within a company itself for calibration of
in-house instruments.
b. Professional Calibration Labs These are set up by professionals whose main business is
calibration of measuring instruments and who use all dedicated and sophisticated calibrating instru-
ments, e.g., Kudale Calibration Lab in Pune, India.
3. Only For Indication [OFI] Instruments with this status cannot be used for any measurement purpose, but can be used as non-measuring devices; e.g., a height gauge with OFI status can be used as a stand.
4. Rework This status indicates that the instrument should be reworked before use to get a correct
reading, e.g., surface plate, base plate, etc.
5. Reject This status is provided to indicate that the error in the reading shown by the measuring
instrument is not within the allowable limits.
2. Determination of Error The next step is to determine the errors in the instrument by
various methods.
3. Check for Tolerable Limits After determination of error, the error is to be compared with
the allowable tolerance.
4. Minor Changes These are made in the instrument, if possible, to minimize the error in the
reading indicated by the instrument.
5. Allotment of Calibration Set Up Each instrument is allotted the set up as per its condition.
6. Next Calibration Date The instruments that are allotted an active status are also given the
next calibration date as per standards.
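The error-check and status-allotment steps above can be sketched as a small routine. The status names (Active, Rework, Reject) follow the text; the numeric tolerance and readings below are illustrative, not taken from any standard.

```python
def calibration_status(nominal_mm: float, measured_mm: float,
                       tolerance_mm: float, reworkable: bool = False) -> str:
    """Compare the observed error with the allowable tolerance and
    allot a calibration status to the instrument."""
    error = abs(measured_mm - nominal_mm)
    if error <= tolerance_mm:
        return "Active"      # error within allowable limits
    if reworkable:
        return "Rework"      # correctable before use (e.g., a surface plate)
    return "Reject"          # error beyond allowable limits

# A micrometer checked against a 25-mm slip gauge, illustrative tolerance:
print(calibration_status(25.0, 25.002, 0.004))   # Active
print(calibration_status(25.0, 25.010, 0.004))   # Reject
```

In practice the tolerance would come from the instrument's grade and the applicable standard, and an Active instrument would also be given its next calibration date.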
The calibration interval normally allotted to a measuring instrument, based on guidelines, is given in Table 2.8.
Table 2.9 shows the types of instruments generally calibrated to maintain their accuracy over a long period of time: toolmaker's square, angle gauge, ring gauge, optical projector, comparator, snap gauge, toolmaker's microscope, test indicator, optical flat, dial indicator, surface plate, slot and groove gauge, screw pitch gauge, and tapered hole gauge.
a. Introduction The manufacturing tolerances in almost all the industries are becoming stringent
due to increased awareness of quality. This also calls for high accuracy components in precision assem-
blies and subassemblies. The quality control department therefore is loaded with the periodic calibration
of various measuring instruments. Since the accuracy of the components depends largely on the accu-
racy of measuring instruments like plunger-type dial gauges, back-plunger-type dial gauges, lever-
type dial gauges and bore gauges, periodic calibration is inevitable and is a regular feature in many
companies of repute. The practice of periodic calibration is of vital importance for quality assurance as well as cost reduction. A dial calibration tester set enables the testing of four different kinds of precision measuring instruments, and all the required accessories are included in the set. The habit of
periodic calibration has to be cultivated right from the stage of technical education, viz., engineering
colleges, polytechnics and other institutes.
Why is periodic calibration required?
i. To grade a dial according to its accuracy and thereby to choose the application where it can be
safely used
ii. To determine the worn-out zone of travel, facilitating full utilization of the dial
iii. To inspect the dial after repairs and maintenance
iv. To ascertain the exact point at which the dial should be discarded
b. Scope This procedure covers the dial calibration tester for the following range:
Range = 0–25 mm and LC (least count) = 0.001 mm
c. Calibration Equipment
Electronic Probe – Maximum Acceptable Error = 3.0 μm
Slip Gauges = 0 Grade
d. Calibration Method
i. Clean the measuring faces of the dial calibration tester with the help of CTC.
ii. Place the micrometer drum assembly and dial holder on the stem, one above the other.
iii. Hold the electronic probe in the dial holder of the dial calibration tester.
iv. Set the zero of the electronic probe by rotating the drum in the upward direction.
v. Adjust the cursor line at the zero on the drum.
vi. With these settings, the micrometer drum should be at the 25-mm reading on the main scale. The
micrometer drum is at the topmost position after this setting.
vii. After the setting in Step (vi), rotate the micrometer drum in the downward direction till it reaches zero on the main scale. The micrometer drum is at the lowermost position at this point.
viii. Set the main scale zero and the zero on the micrometer drum across the cursor line.
ix. Place the 25-mm slip gauge between the micrometer head tip and the contact point of the elec-
tronic probe.
x. Take the readings in the upward direction from 0.5 mm to 25 mm in a step size of 0.5 mm.
xi. Calculate the uncertainty as per NABL guideline 141.
e. Uncertainty Calculation Uncertainty for a Type-B component can be calculated as per the following guidelines:
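The detailed NABL procedure is not reproduced here, but the general GUM-style approach can be sketched: the Type-A component comes from the scatter of repeated readings, each Type-B component (probe error limit, resolution, etc.) is converted to a standard uncertainty assuming a rectangular distribution (a/√3), and the components are combined in quadrature. The function, its parameters and the numeric values below are illustrative assumptions; the actual limits and coverage factor should be taken from NABL guideline 141.

```python
import math
import statistics

def combined_uncertainty(readings_mm, type_b_limits_mm, k=2.0):
    """Type-A uncertainty as the standard deviation of the mean of repeated
    readings, combined in quadrature with Type-B components each treated as
    rectangular (a / sqrt(3)); returns the expanded uncertainty U = k * u_c."""
    n = len(readings_mm)
    u_a = statistics.stdev(readings_mm) / math.sqrt(n)
    u_b_sq = sum((a / math.sqrt(3)) ** 2 for a in type_b_limits_mm)
    u_c = math.sqrt(u_a ** 2 + u_b_sq)
    return k * u_c

# Five repeat readings of a nominal 10-mm point, with the probe's 0.003-mm
# maximum error and a 0.0005-mm resolution as illustrative Type-B limits:
readings = [10.001, 10.002, 10.000, 10.002, 10.001]
print(round(combined_uncertainty(readings, [0.003, 0.0005]), 4))  # mm
```

The coverage factor k = 2 corresponds to roughly 95% confidence for a normal distribution, which is the usual reporting convention on calibration certificates.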
measuring positions. The measuring table is supported by wear-resistant hardened guide bars. The
inductive probes are vacuum lifted. Accessories for calibration are shown in Fig. 2.10.
b. Measuring Process The gauge block to be tested and the reference gauge block are placed
one behind the other into the mounting device. Due to the round hardened guide bars, the gauge
blocks can be moved with low friction. Measurement is carried out with two inductive probes (sum
measurement). One measuring point on the reference gauge block and five measuring points on the
test piece are collected. Whenever the gauge blocks are moved, the inductive probes are lifted by
means of an electrical vacuum pump. The measuring values are calculated and displayed by the com-
pact Millitron 1240 instrument. Via a serial interface, the measuring values can be transferred to a PC
or laptop.
d. Application The EMP 4W software system realizes computer-aided evaluation as per DIN EN ISO 3650.
It offers the following options:
• Selection and determination of measuring sequences
• Management of test piece and standard gauge blocks
• Management of individual gauge blocks
• Measuring program to perform gauge-block tests
• Control of all operations and inputs
• Automatically assigning the sequence of nominal dimensions for set tests
• Organization of the measurement process for testing multiple sets
• Printer program for test records and for the printout of standard gauge-block sets
• Printout of DKD records
The QMSOFT system is a modern, modular software package for measuring, storing, and document-
ing standard test instruments such as gauges, plug gauges, dial indicators, or snap gauges. Computer-aided gauge calibration is only efficient if all three of the necessary steps are at least in part controlled by the PC. QMSOFT includes a variety of matched routines (QMSOFT modules) that may be used for practical gauge calibration tasks and that cover the above-mentioned steps (measurement, tolerances, management).
These routines ideally supplement the length measuring, gauge block and dial-indicator testing
instruments used for this purpose. (See Fig. 2.9, Plate 2.)
2. Optical Flat Dia. = 150 mm (5.91 in); for checking and aligning horizontal X-axis, flatness er-
ror = 0.2 µm (7.87 µin). Approx. mass = 2 kg (4.41 lb).
3. Universal Cylindrical Square, high-accuracy cylinder with two surfaces for dynamic probe
calibration. Dia. = 20 mm (0.787 in); length = 150 mm (5.91 in).
4. Cylindrical Squares for Checking and Aligning Spindle Axis Parallel to the Col-
umn Dia. = 80 mm (3.15 in); length = 250 mm (9.84 in); max. cylindricity error = 1 µm (39.37 µin);
approx. mass = 11.5 kg (25.35 lb).
5. Cylindrical Squares for Checking and Aligning Spindle Axis Parallel to the Col-
umn Dia. = 100 mm (3.94 in); length = 360 mm (14.17 in); max. cylindricity error = 1 µm (39.37 µin);
approx. mass = 13 kg (28.66 lb). (Accessories for calibration are shown in Fig. 2.10, Plate 2.)
G. Work-Holding Fixtures
1. Rim Chuck with 6 Jaws Dia. = 70 mm (2.76 in); includes 124-mm dia. (4.88 in) mount-
ing flange and reversible jaws for external and internal chucking. External range = 1 mm to 73 mm
3. Rim Chuck with 8 Jaws Dia. = 150 mm (5.91 in); includes 198-mm dia. (7.80 in) mounting
flange and separate sets of jaws for external and internal chucking. External range = 1 mm to 152 mm
(.0394 in to 5.98 in); internal range = 24 mm to 155 mm (.945 in to 6.10 in). Total height including
flange = 52 mm (2.05 in); mass approx. 6.1 kg (13.45 lb).
4. Three-Jaw Chuck Dia. 110 mm (4.33 in); includes 164-mm dia. (6.46 in) mounting flange.
External chucking range = 3 mm to 100 mm (.118 in to 3.94 in); internal range = 27 mm to 100 mm
(1.06 in to 3.94 in). Total height including flange = 73 mm (2.87 in); approx. mass = 3 kg (6.61 lb).
h. Set of Clamping Disks These are adjustable devices for pre-centering and clamping a
workpiece for series measurements and are suitable for workpiece diameters ranging from 36 mm to
232 mm (1.42 in to 9.13 in), depending on the machine type. The set includes two fixed disks with an
elongated hole and one eccentric locking disk with an approximate mass of 0.4 kg (.88 lb).
For technical and legal reasons, the measuring instruments used in the production process must
display ‘correct’ measuring results. In order to guarantee absolute accuracy, they must be calibrated at
regular intervals and must be traceable to national standards. Paragraph 4.11 of the quality standards
of DIN EN ISO 9000 states that the supplier shall identify all inspection, measuring and test equipment which
can affect product quality, and calibrate and adjust them at prescribed intervals, or prior to use, against certified equip-
ment having a known valid relationship to internationally or nationally recognized standards.
The Mahr Calibration Service provides and guarantees this sequence due to the operation of the
Calibration Laboratories DKD-K-05401 and DKD-K-06401, accredited by the Physikalisch-Technische Bundesanstalt (PTB) for linear measurement.
Review Questions
…rather than the sliding scale of the vernier caliper. This allows the scale to be placed more precisely and, consequently, the micrometer can be read to a higher precision.
Length metrology is the measuring hub of metrological instruments, and sincere efforts must be made to understand the operating principles of the instruments used for various applications.
3.1 INTRODUCTION
Length is the most commonly used category of measurement in the world. In ancient days, length measurement was based on different human body parts—the nail, digit, palm, handspan and pace—used as reference units, with multiples of these making up bigger length units.
Linear Metrology is defined as the science of linear measurement, for the determination of the dis-
tance between two points in a straight line. Linear measurement is applicable to all external and internal
measurements such as distance, length and height-difference, diameter, thickness and wall thickness,
straightness, squareness, taper, axial and radial run-out, coaxiality and concentricity, and mating mea-
surements, covering the whole range of metrology work on a shop floor. The principle of linear measurement is to compare the dimension to be measured, suitably aligned, with the standard dimensions marked on the measuring instrument. Linear measuring instruments are designed either for line measurements or end measurements, as discussed in the previous chapter.
Linear metrology follows two approaches:
In our day-to-day life, we see almost all products made up of different components. The modern
products involve a great deal of complexity in production and such complex products have interchange-
able parts to fit in another component. The various parts are assembled to make a final end product,
which involves accurate inspection. If there are thousands of such parts to be measured, the instruments will need to be used thousands of times, and must in that case retain their accuracy of measurement throughout the inspection. Precision measuring instruments have a high degree of repeatability in the measuring process. If the dimensions measured by the instrument are less than 0.25 mm, it is said to be a precision instrument, and the error produced by such an instrument must not be more than 0.0025 mm for all measured dimensions.
3.2 STEEL RULE (SCALE)
The steel rule is the simplest and most commonly used linear measuring instrument. It is, in essence, a partial replica of the international prototype metre, and is shown in Fig. 3.1(a). It measures an unknown length by comparing it with a previously calibrated one. Steel rules are marked with a graduated scale whose smallest interval is one millimetre. To increase their versatility, certain scales are marked with 0.5-millimetre intervals over some portion. Some steel rules carry graduations in centimetres on one side and inches on the other. In a workshop, scales are used to measure dimensions of components to limited accuracy.
The graduation marks on a rule vary in width from 0.12 mm to 0.18 mm, so a degree of accuracy much closer than about 0.1 mm cannot be obtained. Steel rules are manufactured in different sizes and styles, and can be made in folded form for keeping in a pocket. A steel rule can be fitted with an adjustable shoulder to make it suitable for depth measurement. Rules are available in lengths of 150, 300, 600
or 1000 mm. In case of direct measurement, a scale can be used to compare the length of a workpiece
directly with a graduated scale of the measuring rule while in indirect measurement, intermediate devices
such as outside or inside calipers are used to measure the dimension in conjunction with a scale.
Steel rules of contractor grade have an anodized profile of minimum thickness and wear-resistant, ultraviolet-cured screen printing. A steel rule should be made of good-quality spring steel and should be chrome-plated to prevent corrosion. A steel rule is made to high standards of precision and should be used carefully to prevent damage to its edges from wear, as an edge generally forms the basis for one end of the dimension. Scales should not be used for cleaning or removing swarf from machine-table slots. The graduations should be kept sharp and clean by using grease-dissolving fluids.
One of the problems associated with the use of a rule is parallax error. It results when the observer
making the measurement is not in line with the workpiece and the rule. To avoid parallax error while
making measurements, the eye should be directly opposite and 90° to the mark on the part to be mea-
sured. To get an accurate reading of a dimension, the rule should be held in such a way that the gradu-
ation lines are perfectly touching or as close as possible to the faces being measured.
The battery-operated digital scale shown in Fig. 3.1 (b) is especially used to measure the travels of
machines, e.g., upright drilling and milling machines. It has a maximum measuring speed of 1.5 m/s and
is equipped with a high-contrast 6-mm liquid-crystal display.
Linear Metrology 49
3.3 CALIPERS
A caliper is an end-standard measuring instrument to measure the distance between two points. Calipers
typically use a precise slide movement for inside, outside, depth or step measurements. Specialized
slide-type calipers are available for centre, depth and gear-tooth measurement. Some caliper types such
as spring/fay or firm-joint calipers do not usually have a graduated scale or display and are only used for
comparing or transferring dimensions as secondary measuring instruments for indirect measurements.
The caliper consists of two legs hinged at the top, with the ends of the legs spanning the part to be mea-
sured. The legs of a caliper are made from alloy steels and are identical in shape, with the contact points
equidistant from the fulcrum. The measuring ends are suitably hardened and tempered. The accuracy
of measurement using calipers depends on the sense of feel that can only be acquired by experience.
Calipers should be held gently near the joint and square to the work by applying light gauging pressure
to avoid disturbance during setting for accurate measurement.
[Figure: types of calipers — outside, inside, transfer, spring (with spring, nut, screw and legs) and firm-joint]
bed, table, or stage. These gauges are typically mounted on a machine or are built into a product includ-
ing machine tools, microscopes, and other instruments requiring precision dimensional measurement
or position control. Nib-shaped jaws facilitate measurement of inside features (ID), outside features
(OD), grooves, slots, keyways or notches. Compared to the blade edge typically found on standard
calipers, the nib is more easily and accurately located on an edge or groove. Small, pocket-sized calipers
are usually designed for low-precision gauging applications. Rolling-mill calipers are usually simple rugged
devices for quick gauging of stock in production environments. Sliding calipers use a precise slide move-
ment for inside, outside, depth or step measurements. While calipers do not typically provide the preci-
sion of micrometers, they provide a versatile and broad range of measurement capability: inside (ID),
outside (OD), depth, step, thickness and length. Spring, fay, firm-joint or other radially opening-type
calipers have jaws that swing open with a scissor or plier-type action. These calipers are commercially
available in non-graduated versions.
Measurement units for calipers can be either English or metric. Some calipers are configured to
measure both. The display on calipers can be non-graduated (meaning that the caliper has no display), a dial or analog display, a digital display, a column or bargraph display, a remote display, a graduated scale display or a vernier scale display. Important specifications for calipers include the range and the graduation or
resolution. The range covers the total range of length or dimension that the caliper can measure. The
graduation or resolution is the best or minimum graduations for scaled or dial-indicating instruments.
Common features of calipers include depth attachments or gauges and marking capabilities. A depth
attachment is a gauge specialized for depth measurements usually consisting of a solid base with a
protruding rod or slide. The solid depth base provides a reference and support across the opening.
Marking capabilities include gauges that accommodate a scribe or other device for accurately marking
a component at a specific measurement along a particular dimension.
3.4 VERNIER CALIPER

The vernier caliper (invented by the Frenchman Pierre Vernier) is a measuring tool used for finding or transferring measurements (internal or external). Internal calipers are used to check the inside diameters of pipes and of bores being turned on a lathe. External calipers are used to determine the diameter of a round pipe or a turned spindle. A vernier caliper is a combination of inside and outside calipers and has two sets of jaws; one jaw (with a depth gauge) slides along a rule. With a rule, measurements can be made to the nearest 1/64th or 1/100th in., but often this is not sufficiently accurate. The vernier caliper is a measuring tool based on a rule but with much greater discrimination. Pierre Vernier devised the principle of the vernier for precise measurement in 1631. He observed that the human eye cannot discern the exact distance between two lines, but can tell when two lines coincide so as to form one straight line. Based on this observation, he developed the principle of the vernier caliper, which states that two scales or divisions that are nearly, but not exactly, alike can be used to obtain small differences by noting where their graduations coincide. This enhances the accuracy of a measurement.
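In numbers: if n divisions on the vernier scale span (n − 1) main-scale divisions, one vernier division is shorter than one main-scale division by 1/n of a division, and that small difference is the least count. A minimal Python sketch of this arithmetic (the helper name is ours, not from the text):

```python
def vernier_least_count(msd_mm: float, n_vernier_divisions: int) -> float:
    """Least count = 1 MSD - 1 VSD, where n VSD span (n - 1) MSD."""
    vsd_mm = (n_vernier_divisions - 1) * msd_mm / n_vernier_divisions
    return msd_mm - vsd_mm  # equals msd_mm / n

# A common metric caliper: 1-mm main-scale divisions, 50-division vernier.
print(round(vernier_least_count(1.0, 50), 4))   # 0.02 mm
# A 10-division vernier on the same scale gives a coarser least count.
print(round(vernier_least_count(1.0, 10), 4))   # 0.1 mm
```

This matches the 0.02-mm accuracy class of vernier calipers mentioned later in the chapter.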
The first instrument developed following Vernier’s principle was the sliding caliper, as shown in Fig. 3.3.
Steel and brass were used for the production of a sliding caliper manufactured in 1868. It included scales for the Wurttemberger inch, the Rhenish inch, the Viennese inch and the millimetre, which was already used in France.
The vernier caliper essentially consists of two steel rules that can slide along each other. A solid L-shaped frame (beam) is engraved with the main scale. This is also called the true scale, as each millimetre
marking is exactly 1 millimetre apart. The beam and fixed measuring jaw are at 90° to each other. If centimetre graduations are available on the line scale, then it is divided into 20 parts so that one small
division equals 0.05 cm. On the movable measuring jaw, the vernier scale is engraved which slides on the
beam. The function of the vernier scale is to subdivide minor divisions on the beam scale into the small-
est increments that the vernier instrument is capable of measuring. Most of the longer vernier calipers
have a fine adjustment clamp roll [Fig 3.7 (b)] for precise adjustment of the movable jaw. The datum of
measurement can be made to coincide precisely with one of the boundaries of distance to be measured.
A locking screw makes the final adjustment depending on the sense of correct feel. The movable jaw
achieves a positive contact with the object boundary at the opposite end of the distance to be measured.
The measuring blades are designed to measure inside as well as outside dimensions. The depth bar is an
additional feature of the vernier caliper to measure the depth.
Fig. 3.4 Vernier caliper (Mahr Gmbh Esslingen), showing the fixed measuring jaw, the movable measuring jaw with vernier scale, the line scale (main scale) and the depth bar
The vernier and main scales are polished with a satin-chrome finish for glare-free reading. The slide and
beam are made of hardened steel with raised sliding surfaces for the protection of the scale. The mea-
suring faces are hardened and ground. IS 3651–1974 specifies three types of vernier calipers generally
used to meet various needs of external and internal measurement up to 2000 mm with an accuracy of
0.02, 0.05 and 0.1 mm. The recommended measuring ranges are 0–125, 0–200, 0–300, 0–500, 0–750,
0–1000, 750–1500 and 750–2000 mm. The beam for all types and ranges of vernier calipers is made flat
throughout its length. The nominal lengths and their corresponding tolerances are given below.
Beam guiding surfaces are made straight within 10 microns for a measuring range of 200 mm, and within a further 10 microns for every next 200 mm in the larger recommended measuring ranges.
Fig. 3.5 Scale comparison (alignment of main-scale and vernier-scale divisions)
xi. Examine the vernier scale to determine which of its divisions coincide or are most coincident with
a division on the main scale. The number of these divisions is added to the main scale reading.
xii. In Fig. 3.5, the third tick mark on the sliding scale is in coincidence with the one above it.
Sl. No.   Main Scale Reading (MSR), mm   Vernier Scale Reading (VSR)   C (mm) = LC × VSR   Total Reading (mm) = MSR + C
1.        21                             3                             0.3                 21.30
xiii. The error in reading the vernier scale with a least count of 0.1 mm, 0.05 mm, 0.02 mm should
not exceed the value obtained by ±(75 + 0.05 UL ) microns, ±(50 + 0.05 UL ) microns, ±(20 +
0.02 UL ) microns respectively, where UL is the upper limit of the measuring range in mm.
xiv. In this case, UL = 200 mm. Therefore, the error in reading is 85 microns (0.085 mm). Total reading
is (21.30 ± 0.085) mm.
xv. If two adjacent tick marks on the sliding scale look equally aligned with their counterparts on
the fixed scale then the reading is halfway between the two marks. In Fig. 3.5, if the third and
fourth tick marks on the sliding scale looked to be equally aligned then the reading would be
(21.35 ± 0.05) mm.
xvi. On those rare occasions when the reading just happens to be a ‘nice’ number like 2 cm, don’t
forget to include the zero decimal places showing the precision of the measurement and the
reading error. So the reading is not 2 cm, but rather (2.000 ± 0.005) cm or (20.00 ± 0.05) mm.
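The reading rules above reduce to simple arithmetic. The sketch below (helper names are ours; the 0.1-mm least count, UL = 200 mm case is the worked example from the text) computes the total reading MSR + LC × VSR and the permissible reading error ±(75 + 0.05 UL) µm quoted for that least count:

```python
def vernier_reading(msr_mm, vsr_divisions, least_count_mm):
    """Total reading = main-scale reading + least count x coinciding vernier division."""
    return msr_mm + least_count_mm * vsr_divisions

def reading_error_mm(least_count_mm, upper_limit_mm):
    """Permissible reading error quoted in the text for each least count, in mm."""
    constant_um = {0.1: 75.0, 0.05: 50.0, 0.02: 20.0}     # first term, microns
    factor = {0.1: 0.05, 0.05: 0.05, 0.02: 0.02}          # coefficient of UL
    c = constant_um[least_count_mm]
    return (c + factor[least_count_mm] * upper_limit_mm) / 1000.0

reading = vernier_reading(21, 3, 0.1)      # 21.30 mm, as in the worked table
error = reading_error_mm(0.1, 200)         # 0.085 mm for UL = 200 mm
print(f"({reading:.2f} +/- {error:.3f}) mm")   # (21.30 +/- 0.085) mm
```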
The digital vernier-caliper version [Fig. 3.7 (a)] has special features such as an LCD display, on/off and reset adjustment, storage of measured values, and data-transmission capabilities. Plastic versions are good for measuring artifacts, since plastic reduces the chance of scratching, and they are inexpensive.
In the case of a vernier caliper having a circular scale (dial caliper), the dial is read to obtain the measured value. It has dial graduations of 0.02 mm, and one-handed use with a thumb-operated fine adjustment clamp roll is possible. A lock screw is provided for the dial bezel and the sliding jaw. Figure 3.8 shows the applications of vernier calipers.
Fig. 3.6 Illustration of measurement using a vernier caliper
Possible Errors and Precautions to be Taken into Account while Using a Vernier Caliper The errors occurring in a vernier instrument are mainly due to manipulation or mishandling of its jaws on the workpiece. Some of the causes are play between the sliding jaw and the scale, and wear and warping of the jaws. Due to this, the zero line on the main scale may not coincide with the zero on the vernier scale, which is referred to as zero error. Incorrect readings of the vernier scale may result from parallax error or difficulty in reading the graduated marks. Owing to its size and weight, getting a correct feel is difficult.
Care should be taken to minimize the error involved in coinciding correctly the line of measure-
ment with the line of scale, and the plane measuring tips of the caliper must be perpendicular to
[Figures 3.7 and 3.8: vernier caliper with lock screw and fine adjustment clamp roll; applications of vernier calipers — inside and outside measurement, (e) height measurement and transfer with a magnetic base, (f) specially designed anvils for measurements]
the central line of the workpiece. Grip the instrument near or opposite to the jaws and not by the
overhanging projected main bar of the caliper. Without applying much pressure, move the caliper
jaws on the work with a light touch. To correctly measure the reading, know the exact procedure of
measurement.
3.5 VERNIER HEIGHT GAUGE

This is one of the most useful and versatile instruments used in linear metrology for measuring, inspect-
ing and transferring the height dimension over plane, step and curved surfaces. It follows the principle
of a vernier caliper and also follows the same procedure for linear measurement. It is equipped with a
wear-resistant special base block in which a graduated bar is held in the vertical position.
The vernier height gauge as shown in Fig. 3.10 (a) consists of a vertical graduated beam or column
on which the main scale is engraved. The vernier scale can move up and down over the beam. The bracket
[Figure: vernier height gauge — sturdy base, vertical bar with main scale, vernier scale with magnifying glass, screw for adjusting zero error, clamping screw, fine adjustment screw, bracket, clamp and scriber]
carries the vernier scale which slides vertically to match the main scale. The bracket also carries a rect-
angular clamp used for clamping a scriber blade. The whole arrangement is designed and assembled in
such a way that when the tip of the scriber blade rests on the surface plate, the zero of the main scale
and vernier scale coincides. The scriber tip is used to scribe horizontal lines for preset height dimen-
sions. The scriber blade can be inverted with its face pointing upwards which enables determination of
heights at inverted faces. The entire height gauge can be transferred on the surface plate by sliding its
base. The height gauges can also be provided with dial gauges instead of a vernier, which makes reading
of bracket movement by dial gauges easy and exact.
The electronic digital vernier height gauge shown in Fig. 3.10(b) provides an immediate digital
readout of the measured value. It is possible to store the standard value in its memory, which could
be used as a datum for further readings, or for comparison with given tolerances. Digital pre-setting is possible, in which reference dimensions can be entered digitally and are automatically allowed for during each
Fig. 3.10 Vernier height gauge (Mahr Gmbh Esslingen): (a) and (b), showing the hand crank, granite base, scriber points and cast-iron base
measurement. Via a serial interface, the measured data can be transmitted to an A4 printer or computer
for evaluation. Fine setting is provided to facilitate setting the measuring head to the desired dimensions, especially for scribing jobs, and enables zero setting at any position. A hand crank on the measuring head applies a predetermined measuring force; the head is balanced by a counterweight inside the column and can be locked at any position for scribing purposes, making the instrument easy to operate. (See Fig. 3.11, Plate 3.)
3.6 VERNIER DEPTH GAUGE

A vernier depth gauge is used to measure depth: the distance from a plane surface to a projection, recess, slot or step. The basic parts of a vernier depth gauge are the base (or anvil), on which the vernier scale is calibrated, along with the fine adjustment screw. To make accurate measurements, the reference surface
must be flat and free from swarf and burrs. When the beam is brought in contact with the surface being
measured, the base is held firmly against the reference surface. The measuring pressure exerted should be equivalent to the pressure applied when making a light dot on a piece of paper with a pencil. The
reading on this instrument follows the same procedure as that of a vernier caliper.
The vernier and main scale have a satin-chrome finish for glare-free reading with a reversible beam
and slide. The beam is made of hardened stainless steel, while the sliding surface is raised for protec-
tion of the scale. A battery-operated digital version is also available, with a high-contrast 6-mm liquid crystal display and a maximum measuring speed of 1.5 m/s.
3.7 MICROMETERS
Next to calipers, micrometers are the most frequently used hand-measuring instruments in linear metrol-
ogy. Micrometers have greater accuracy than vernier calipers and are used in most of the engineering pre-
cision work involving interchangeability of component parts. Micrometers having accuracy of 0.01 mm
are generally available but micrometers with an accuracy of 0.001 mm are also available. Micrometers are
used to measure small or fine measurements of length, width, thickness and diameter of a job.
Principle of Micrometer A micrometer is based on the principle of screw and nut. When a
screw is turned through one revolution, the nut advances by one pitch distance, i.e., one rotation of the
screw corresponds to a linear movement of the distance equal to the pitch of the thread. If the circum-
ference of the screw is divided into n equal parts then its rotation of one division will cause the nut to
advance through pitch/n length. The minimum length that can be used to measure in such a case will be
pitch/n and by increasing the number of divisions on the circumference, the accuracy of the instrument
can be increased considerably. If the screw has a pitch of 0.5 mm then after every rotation, the spindle
travels axially by 0.5 mm and if the conical end of the thimble is divided by 50 divisions, the rotation of
the thimble through one division will cause an axial movement of the screw equal to 0.5/50 mm = 0.01 mm, which is the least count of the micrometer. In general, the least count is given by the formula

Least count = Pitch of the spindle thread / Number of divisions on the thimble
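As a quick check of the relation just derived (least count = pitch ÷ number of thimble divisions), a minimal sketch with a helper name of our own:

```python
def micrometer_least_count(pitch_mm: float, thimble_divisions: int) -> float:
    """One thimble division advances the spindle by pitch / n."""
    return pitch_mm / thimble_divisions

# The standard metric micrometer: 0.5-mm pitch, 50 thimble divisions.
print(micrometer_least_count(0.5, 50))   # 0.01 mm
```

Increasing the number of divisions (or adding a barrel vernier) is what yields the 0.001-mm micrometers mentioned earlier.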
Figure 3.13 illustrates the design features of an outside (external) micrometer. It is used to measure
the outside diameter, length and thickness of small parts. Outside micrometers having an accuracy of
0.01 mm are generally used in precision engineering applications.
Fig. 3.13 Outside micrometer with a measuring range of 0–25 mm and accuracy of 0.01 mm (Mahr Gmbh Esslingen), showing the frame, anvil, spindle, carbide-tipped measuring faces, scaled barrel with reference lines, thimble, ratchet and locking device
3. Locking Device A locking device is provided on a micrometer spindle to lock it in exact posi-
tion. This enables correct reading without altering the distance between the two measuring faces, thus
retaining the spindle in perfect alignment.
4. Barrel A barrel has fixed engraved graduation marks on it and is provided with satin-chromium
finish for glare-free reading. The graduations are above and below the reference line. The upper gradu-
ations are of 1-mm interval and are generally numbered in multiples of five as 0, 5, 10, 15, 20 and 25.
The lower graduations are also at 1-mm interval but are placed at the middle of two successive upper
graduations to enable the reading of 0.5 mm.
Fig. 3.14 Graduations marked on barrel and thimble (reference line; reading = 5.00 mm)
5. Thimble It is a tubular cover fastened and integrated with a screwed spindle (Fig. 3.14). When
the thimble is rotated, the spindle moves in a forward or reverse axial direction, depending upon the
direction of rotation. The conical edge of the thimble is divided into 50 equal parts, as shown in Fig. 3.14. Multiples of 5 and 10 are engraved on it, and the thickness of the graduations is between
0.15 to 0.20 mm.
6. Ratchet A ratchet is provided at the end of the thimble. It controls the pressure applied on
the workpiece for accurate measurement and thereby avoids the excessive pressure being applied
to the micrometer, thus maintaining the standard conditions of measurement. It is a small extension
of the thimble. When the spindle reaches near the work surface which is to be measured, the operator
uses the ratchet screw to tighten the thimble. The ratchet gives a clicking sound when the workpiece is correctly held and slips thereafter, preventing damage to the spindle tips. This arrangement is very important, as a variation in finger effort can create a difference of 0.04 to 0.05 mm in the measured readings.
Micrometers are available in various sizes and ranges as shown in Table 3.3.
Measuring Range   Least Count   Limits of Error (DIN 863)   Pitch of Spindle Thread
0–25 mm           0.01 mm       4 µm                        0.5 mm
25–50 mm          0.01 mm       4 µm                        0.5 mm
50–75 mm          0.01 mm       5 µm                        0.5 mm
75–100 mm         0.01 mm       5 µm                        0.5 mm
100–125 mm        0.01 mm       6 µm                        0.5 mm
125–150 mm        0.01 mm       6 µm                        0.5 mm
150–175 mm        0.01 mm       7 µm                        0.5 mm
175–200 mm        0.01 mm       7 µm                        0.5 mm
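Table 3.3 amounts to a range lookup: given a nominal dimension, choose the micrometer whose range covers it and note its DIN 863 limit of error. A sketch with values transcribed from the table (`error_limit_um` is a helper name of our own):

```python
# (measuring range in mm) -> limit of error in microns (DIN 863), from Table 3.3
RANGES = [((0, 25), 4), ((25, 50), 4), ((50, 75), 5), ((75, 100), 5),
          ((100, 125), 6), ((125, 150), 6), ((150, 175), 7), ((175, 200), 7)]

def error_limit_um(dimension_mm: float) -> int:
    """Return the DIN 863 limit of error for the micrometer covering the dimension."""
    for (lo, hi), limit in RANGES:
        if lo <= dimension_mm <= hi:
            return limit
    raise ValueError("no standard micrometer range covers this dimension")

print(error_limit_um(60))   # 5 (use the 50-75 mm micrometer)
```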
Key to the figure (Mahr, measurement range 0–25 mm): 1–Spindle with tungsten carbide, 2–Body support, 3–Push-on sleeve, 4–Space to accommodate the object under measurement, 5–Thimble, 6–Conical setting nut, 7–Anvil with tungsten carbide, 8–Sealing disk, 9–Clamping cone, 10–Ratchet stop, 11–Clamping lever, 12–Clamping screw, 13–Clamping piece, 14–Raised cheese-head screw, 15–Curved spring washer
• Every revolution of the knob will expose another tick mark on the barrel, and the jaws will open
another half millimetre.
• Note that there are 50 tick marks wrapped around the moving barrel (thimble) of the micrometer. Each of these tick marks represents 0.01 mm (50 divisions are engraved over one 0.5-mm revolution). Note the reading as per the observation table given below (Table 3.4).
[Figure: vernier micrometer scales — main scale, 0.5-mm scale and vernier scale]
• The total reading for this micrometer will be (2.62 ± 0.004) mm, where 4 microns is the error limit of the instrument.
• The micrometer may not be calibrated to read exactly zero when the jaws are completely closed.
Compensate for this by closing the jaws with the ratchet knob until it clicks. Then read the
micrometer and subtract this offset from all measurements taken. (The offset can be positive or
negative.)
• On those rare occasions when the reading just happens to be a ‘nice’ number like 2 mm, don’t
forget to include the zero decimal places showing the precision of the measurement and
the reading error. So the reading should be recorded as not just 2 mm, but rather (2.000 ±
0.004) mm.
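The bullet steps above, including the zero-offset correction, can be sketched as follows (helper name is ours; the standard 0.5-mm pitch, 0.01-mm least count instrument is assumed):

```python
def micrometer_reading(barrel_mm, half_mm_visible, thimble_division,
                       zero_offset_mm=0.0, least_count_mm=0.01):
    """Reading = whole mm on barrel + optional 0.5 mm + thimble x least count - zero offset."""
    reading = barrel_mm + (0.5 if half_mm_visible else 0.0)
    reading += thimble_division * least_count_mm
    return reading - zero_offset_mm   # the offset may be positive or negative

# 2 mm on the barrel, the 0.5-mm line exposed, thimble at division 12:
print(round(micrometer_reading(2, True, 12), 3))   # 2.62 mm, as in the text
```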
[Figure: micrometer reading example — 5.5 mm on the barrel plus 0.30 mm on the thimble; digital micrometer with DATA (data transmission) function]
Fig. 3.18 Limits of error for a micrometer with a measuring range of 0–25 mm, set at zero (Mahr Gmbh Esslingen). The graph plots the error in µm against the length of the gauge block from 0 to 25 mm, with the maximum error remaining within the limit-of-error lines.
Fig. 3.20(b) Micrometers with sliding spindle and measuring probes, reduced measuring faces
(Mahr Gmbh Esslingen)
4. Micrometer with Spherical Anvil This type of micrometer is used for measuring pipe-wall thicknesses and is available in the standard range of 25–50 mm. It consists of a carbide ball of diameter 5 ± 0.002 mm.
5. Micrometers with Sliding Spindle and Disc-type Anvils This type of microm-
eter is used for measuring soft materials such as felt, rubber, cardboard, etc., and has a chrome-plated
steel frame with a spindle and anvil made of hardened steel, carbide-tipped measuring faces with
operating and scale parts of satin-chrome finish and heat insulators. This is available in the range of
0–25 mm.
6. Micrometers with Disc-type Anvils This type of micrometer is used for measuring tooth spans Wk (from module 0.8 upwards) as an indirect determination of tooth thickness on spur gears with straight and helical teeth; to measure shoulders on shafts, undercut dimensions and registers; and for measuring soft materials such as rubber, cardboard, etc.
a. Interchangeable Anvils for Thread Micrometers For measuring pitch, root and
outside diameters, anvils made of hardened, wear-resistant special steels are used with a cylindrical
mounting shank and retainer ring which ensures locking while permitting rotation in the bore of spin-
dle and anvil.
(a) (b)
Fig. 3.25 Setting standards for thread micrometers
b. V and Tapered Anvils for Pitch Diameters The set of thread micrometers consists of
V-anvils and tapered anvils for measuring pitch diameters.
For metric threads (60°), V-anvils covering a wide range of 0.2–9 mm pitches are available. For
Whitworth threads (55°), V-anvils covering a wide pitch range of 40 to 3 tpi are available, while for
American UST threads (60°), V-anvils covering a pitch range of 60 to 3 tpi are available.
e. Ball Anvils and Roller Blades Roller blades are used for gears, while ball anvils are used for special applications. A ball anvil consists of a carbide ball with a cylindrical mounting shank and retainer ring, for mounting into the mounting bores of thread micrometers. Figure 3.28 shows a ball anvil and a roller-blade anvil of 3.5-mm shank diameter, 15.5-mm shank length and an accuracy of ±2 µm.
Fig. 3.28 (a) Ball anvil, and (b) Roller-blade anvil (Mahr Gmbh Esslingen)
Inside micrometers have a high accuracy of 4 µm + 10 × 10⁻⁶ L, where L is the length of the combination
in mm. Some inside micrometers are provided with cylindrical gauge rods spring-mounted in protec-
tive sleeves, which are chrome finished. The procedure for taking the measurement is same as that of
outside micrometers.
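Read as a length-dependent tolerance, the permissible error in microns grows with the combination length L. The sketch below assumes the 10 × 10⁻⁶ L term is a length in mm that must be converted to microns (our interpretation, not stated in the text):

```python
def inside_micrometer_error_um(length_mm: float) -> float:
    """Permissible error = 4 um + 10e-6 x L, the second term converted from mm to um."""
    return 4.0 + 10e-6 * length_mm * 1000.0   # 10e-6 * L mm = 0.01 * L um

print(inside_micrometer_error_um(500))   # 9.0 um for a 500-mm combination
```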
A self-centering inside micrometer is used to measure through holes, blind holes and registers. In
this type, a ratchet stop is integrated with a coupler and a self-centering measuring head with three
anvils on the side being placed at 120° intervals (Fig. 3.31).
The self-centering inside micrometer is equipped with all digital functions such as On/Off, Reset
(Zero Setting), mm/inch, HOLD (storage of measuring value), DATA (data transmission), PRESET
(set buttons can be used to enter any numerical value), TOL (tolerance display) and is as shown in
Fig. 3.32.
Fig. 3.31 Self-centering inside micrometer Fig. 3.32 Self-centering inside digital micrometer
(Mahr Gmbh Esslingen) (Mahr Gmbh Esslingen)
3.8 DIGITAL MEASURING INSTRUMENT FOR EXTERNAL AND INTERNAL DIMENSIONS

This instrument is used for the measurement of external and internal dimensions, external and internal threads, regis-
ters, narrow collars, recesses and grooves, outside and inside tapers, external and internal serrations and
other related applications (refer Fig. 3.34).
Fig. 3.34 Universal measuring instrument for external and internal dimensions (Mahr Gmbh Esslingen), showing the dial indicator, anvils and measuring arms
It consists of a rugged design with a ground and hard-chromium-plated column, while a movable arm holder is mounted in a precision ball guide to eliminate play and friction. The stationary arm holder can be moved on the column for rough setting. High sensitivity and accuracy result from the stability provided by the movable arm holder, with a constant measuring force ensured by a built-in spring. The measuring-force direction is reversible for both outside and inside measurements, and reversible arms can be located at any point of the measuring range.
The digital universal caliper (Fig. 3.35) is used for measurement of outside and inside dimensions, registers,
narrow collars, external and internal tapers, dovetails, grooves, distances between hole-centres and for
scribing the workpiece. This instrument has an outside measuring range of 0–300 mm and an inside mea-
suring range of 25–325 mm, with a resolution of 0.01 mm within the error limit (DIN 862) of 0.03 mm.
The digital universal caliper provides functions such as On/Off, RESET (zero setting), mm/inch,
HOLD (storage of measuring values), DATA (data transmission), PRESET (set buttons can be used
to enter any numerical value) and TOL (tolerance display). The maximum measuring speed of the
instrument is 1.5 m/s and a high-contrast 6-mm liquid crystal display is used with interchangeable arms.
The arms are reversible for extending measuring range and both the arms can be moved on the beam,
thus well balancing the distribution of weight on small dimensions. The slide and beam are made of hardened steel and the instrument is battery-operated. The following table explains the different
anvils used for various applications.
At the beginning of the technological era, Carl Mahr, a mechanical engineer from Esslingen, real-
ized that machines were becoming more and more accurate and required measuring tools to ensure
the accuracy of their components. So he founded a company that dealt with the production of
length-measuring tools. At that time, the individual German states used different units of measure. For
this reason, his vernier calipers and scales were manufactured for all sorts of units, such as the Wurt-
temberger inch, the Rhenish inch, the Viennese inch, and the millimetre, which was already used in France.
Carl Mahr made a valuable contribution to the metric unit introduced after the foundation of the
German Empire in 1871. He supplied metre rules, which were used as standards, first by the Weights
and Measures offices in Wurttemberg and shortly thereafter in all German states. Measuring instru-
ments for locomotives and railroad construction were a particular speciality. As the system of railroads
in Europe expanded, demand was particularly great. The technology continued to develop and the
demands in measuring tools and instruments increased. They were refined and gained accuracy. When
the company was founded, the millimetre was accurate enough to use as a unit, but soon everything had to be measured in tenths and hundredths, and later in thousandths, of a millimetre in order to keep abreast of technological development. Nowadays, even fractions of those units are measured. In
addition to the traditional precision measuring tools, the Mahr Group now manufactures high-preci-
sion measuring instruments, special automatic measuring units, measuring machines, and gear testers.
Many of these systems operate with support of modern electronic components and computers.
Review Questions
1. Define linear metrology and explain its application areas.
2. List various instruments studied in linear metrology and compare their accuracies.
3. Sketch a vernier caliper and micrometer and explain their working.
4. Discuss the function of ratchet stop in case of a micrometer.
5. Explain the procedure to check micrometers for errors.
6. Sketch different types of anvils used in micrometers along with their applications.
7. Explain the working of a depth micrometer gauge by a neat sketch along with its application.
8. Explain the features of a digital vernier caliper and compare it with a sliding vernier caliper.
9. Explain which instruments you will use for measuring
a. Diameter of a hole of up to 50 mm
b. Diameters of holes greater than 50 mm
c. Diameters of holes less than 5 mm
10. Discuss the precautions to be taken while measuring with a vernier caliper and micrometer to mini-
mize errors.
11. List the length metrology equipment manufacturers and prepare a brief report on them.
12. What is the accuracy of a vernier caliper and micrometer? Also, explain the difference between 1
MSD and 1 VSD.
13. Draw a diagram which indicates a reading of 4.32 mm on vernier scales by explaining principles of
a vernier caliper.
14. What is the accuracy of a vernier height gauge? Also, discuss with a neat sketch its most important
feature.
15. Draw line diagrams and explain the working of a bench micrometer.
16. Describe the attachments used to measure the internal linear dimensions using linear measuring
instruments.
4. Straightness, Flatness, Squareness, Parallelism, Roundness, and Cylindricity Measurements
“At a particular stage, in order to search for dimensional accuracy it becomes necessary to
measure geometric features…”
Dr L G Navale, Principal, Cusrow Wadia Inst. of Tech., Pune, India
4.1 INTRODUCTION
The most important single factor in achieving quality and reliability in the service of any product is dimensional control, and the demand for this qualitative aspect of a product is increasing day by day, with growing emphasis on geometric integrity. Straightness, flatness, squareness, parallelism, roundness and cylindricity are important terms used to specify the quality of a product under consideration. The process of inspection can quantify these qualitative aspects. This chapter discusses different methods of measuring the straightness, flatness, squareness, parallelism, roundness and cylindricity of a part/job, and the instruments used for the same.
4.2 STRAIGHTNESS
Perfect straightness is one of the important geometrical parameters of many of the surfaces of an object or machine part if it is to serve its intended function. For example, in the case of a shaping machine, the tool must move in a straight path to cut (shape) the material correctly, and for this the surfaces of the guideways must be straight.
It is easy to define a straight line as the shortest distance between two points, but it is very difficult to define straightness exactly. A ray of light, though affected by environmental conditions (temperature, pressure and humidity of the air), is straight for general purposes. Also, over small areas, a liquid level is considered straight and flat.
In the broader sense, straightness can be defined as one of the qualitative representations of a sur-
face in terms of variation/departure of its geometry from a predefined straight line or true mean line.
Refer Fig. 4.1 which shows a very exaggerated view of a surface under consideration. A line/surface is
said to be straight if the deviation of the distance of the points from two planes perpendicular to each
other and parallel to the general direction of the line remains within a specific tolerance limit.
Fig. 4.1 Tolerance on straightness
The tolerance for the straightness of a line is defined as maximum deviation in relation to the ref-
erence line joining the two extremities of the line to be checked. The fundamental principle used to
measure the straightness is Bryan’s principle. Bryan states that a straightness-measuring system should
be in line with the functional point at which straightness is to be measured. If it is not possible, either
slide-ways that transfer the measurement must be free of angular motion or angular motion data must
be used to calculate the consequences of the offset.
the length of its bearing surface. (A short level may be more sensitive than a long coarse one; however, it is not advisable to use spirit levels that are so short that local deviations, rather than mean values, are obtained.) The sensitivity E of a spirit level is the movement of the bubble, in millimetres, corresponding to a change in slope of 1 mm per 1000 mm:

E = movement of bubble (mm) / (1 mm per metre of slope)
An auto-collimator can also be used to test straightness. Spirit levels can be used only to measure or test the straightness of horizontal surfaces, while auto-collimators can be used on a surface in any plane. To test a surface for straightness, first draw a straight line on it; then divide the line into a number of sections, each equal to the length of the spirit-level base (or of the reflector base, in the case of the auto-collimator). The bases of these instruments are generally fitted with two feet, so that the feet make line contact with the surface instead of the whole body. In the case of a spirit level, the block is moved along the marked line in steps equal to the pitch distance between the centrelines of the feet. The angular variation of the direction of the block is measured by the sensitive level on it, which ultimately
gives the height difference between two points, knowing the least count of the spirit level. Figure 4.2 (Plate 4) shows a spirit level (only 63 mm long) that, despite its small size, is perfectly useful when placed on a carpenter's square or a steel rule. The screws do not exert any direct pressure on the rule; steel balls are set in the level so that (a) the surface of the ruler is not damaged, and (b) the unit does not shift when it is fixed on the temporary base. The thickness of the square or ruler can be up to 2 mm.
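The step-by-step survey described above reduces to a running sum: each level reading is a slope over one pitch length, and the height at each station is the cumulative sum of the rises. A minimal sketch follows; the pitch of 100 mm, the least count of 0.01 mm/m and the readings are all hypothetical values, not from the text.

```python
# Sketch (hypothetical values): converting spirit-level readings taken at
# successive stations along a marked line into cumulative height errors.

PITCH_MM = 100.0              # assumed distance between the feet of the level block
LEAST_COUNT_MM_PER_M = 0.01   # assumed: one division = 0.01 mm rise per 1000 mm run

def cumulative_heights(divisions):
    """Each reading (in divisions) is the slope over one pitch length.
    Height change per step = slope * pitch; heights are the running sum,
    starting from 0 at the first station."""
    heights = [0.0]
    for d in divisions:
        rise = d * LEAST_COUNT_MM_PER_M * (PITCH_MM / 1000.0)  # rise in mm
        heights.append(heights[-1] + rise)
    return heights

readings = [0, +2, +1, -1, -2]   # divisions observed at each step (hypothetical)
print([round(h, 4) for h in cumulative_heights(readings)])
# -> [0.0, 0.0, 0.002, 0.003, 0.002, 0.0]
```

The final height returning to zero indicates a surface that ends level with its start; the intermediate values are the straightness deviations at each station.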
2. Straight Edges In conjunction with surface plates and spirit levels, straight edges are used for checking straightness and flatness. A straight edge is a narrow, deep, flat-sectioned measuring instrument whose length varies from several millimetres to a few metres. Straight edges are made of steel (available up to 2 m) or cast iron (available up to 3 m). As shown in Fig. 4.3, straight edges are heavily ribbed and manufactured in bow shapes; the deep, narrow section offers considerable resistance to bending in the plane of measurement without excessive weight.
Fig. 4.3 Straight edges (length L, support feet)
Straight edges with wide working edges are used for testing large areas of surfaces with large intermediate gaps or recesses. An estimation
of the straightness of an edge or the flatness of a surface very often is made by placing a true straight
edge in contact with it and viewing against the light background. A surface can also be tested by means
of straight edges by applying a light coat of Prussian Blue on the working edges and then by drawing
them across the surface under test. The traces of marking compounds are rubbed in this way on the
tested surfaces and the irregularities on the surface are coated in spots with different densities, as high
spots are painted more densely and low spots are partly painted. (This scraping process is repeated until
a uniform distribution of spots on the whole surface is obtained.)
IS: 2200 recommends two grades, viz., Grade A and Grade B. Grade A is used for inspection purposes [error permitted (2 + 10L)μ] and Grade B for general workshop purposes [error permitted (5 + 20L)μ]. The acceptable natural deflection due to weight is 10 μ/m. The side faces of straight edges should be parallel and straight. Different types of straight edges are shown in Fig. 4.4.
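The two tolerance formulas quoted above can be evaluated directly. In this sketch L is taken as the straight-edge length in metres; that interpretation is an assumption, and IS: 2200 itself should be consulted for the exact definition of L.

```python
# Permitted straightness error per the formulas quoted from IS: 2200 in the text.
# Assumption: L is the straight-edge length in metres.

def permitted_error_microns(length_m, grade="A"):
    if grade == "A":            # inspection grade
        return 2 + 10 * length_m
    if grade == "B":            # general workshop grade
        return 5 + 20 * length_m
    raise ValueError("grade must be 'A' or 'B'")

print(permitted_error_microns(2.0, "A"))  # -> 22.0 (microns, 2 m Grade A edge)
print(permitted_error_microns(2.0, "B"))  # -> 45.0 (microns, 2 m Grade B edge)
```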
4.3 FLATNESS
Flatness is simply the minimum distance between two planes that covers all irregularities of the surface under study. In other words, determining flatness means determining the best-fit plane between two reference planes, i.e., one above and one below the plane of the surface under consideration. Flatness, a qualitative term, can be quantified by determining the distance 'd'. Refer Fig. 4.6.
Flatness is the deviation of the surface from the best-fitting plane, i.e., the macro-surface topography. It can be defined as an absolute total value; for example, a 50-mm diameter disc may be required to be flat to 0.003 mm (i.e., 3 microns). However, it is more frequently specified as a deviation per unit length; i.e., the disc above would be specified to be flat to 0.0006 mm per cm. Flatness can also be defined in terms of wavelengths of light (see measurement of flatness).
Fig. 4.6 Best-fit plane between two reference planes separated by distance d
According to IS: 2063–1962, a surface is deemed to be flat within a range of measurement when the variation of the perpendicular distance of its points from a geometrical plane (which, to be tested, should be exterior to the surface under study) parallel to the general trajectory of the plane to be tested remains below a given value. The geometrical plane may be represented either by means of a surface plane or by a family of straight lines obtained by the displacement of a straight edge, a spirit level or a light beam.
Flatness testing is possible by comparing the surface under study with an accurate surface. On many roundness systems it is possible to measure flatness; this is done by rotating the gauge so that the stylus deflection is in a vertical direction, and it applies equally to both upper and lower surfaces. All spindle movements and data-collection methods are the same as in roundness mode, so the filtering and harmonic techniques of analysis are the same as those for roundness. Flatness can be analyzed by quantifying deviations from a least-squares reference plane: a plane for which the areas above and below it are equal and kept to a minimum separation. Flatness is then calculated as the distance from the highest peak to the deepest valley normal to the reference plane. The geometrical tolerance of flatness is shown in Fig. 4.7.
Fig. 4.7 Geometrical tolerance of flatness (Tol. 0,2 applied to the possible surface)
Flatness can also be analyzed by a minimum zone calculation, defined as two parallel planes that
totally enclose the data and are kept to a minimum separation. The flatness error can be defined as the
separation of the two planes.
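The least-squares definition above can be sketched numerically: fit the plane z = a + bx + cy to the measured points, then report the peak-to-valley of the residuals. This is an illustrative sketch with synthetic data, not the algorithm of any particular roundness system; the z-residual is used as an approximation to the deviation normal to the plane, which is adequate for the near-horizontal tilts discussed here.

```python
# Sketch: peak-to-valley flatness about the least-squares reference plane
# z = a + b*x + c*y, solved via the 3x3 normal equations (Cramer's rule).

def flatness_ls(points):
    n = len(points)
    sx = sum(x for x, _, _ in points); sy = sum(y for _, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points); syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points); syz = sum(y * z for _, y, z in points)
    A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]  # normal equations matrix
    r = [sz, sxz, syz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coeffs = []
    for i in range(3):           # Cramer's rule: replace column i with r
        Ai = [row[:] for row in A]
        for j in range(3):
            Ai[j][i] = r[j]
        coeffs.append(det3(Ai) / d)
    a, b, c = coeffs
    res = [z - (a + b * x + c * y) for x, y, z in points]
    return max(res) - min(res)   # highest peak to deepest valley

# A tilted but perfectly flat grid has zero flatness error:
grid = [(x, y, 0.01 * x + 0.02 * y) for x in range(5) for y in range(5)]
print(round(flatness_ls(grid), 9))  # -> 0.0
```

A minimum-zone result is never larger than the least-squares one; the least-squares plane is used here because it has a closed-form solution.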
It consists of two outer legs spanning the maximum dimension of the surface under test. It is first placed on the master plate and then on the surface being checked, and the reading is taken from the indicator for each comparison. Any difference between the two readings directly indicates the flatness error of the plate surface under test over the span considered. An alternative method is to use a precision level or an auto-collimator.
(Figure: an optical flat over a reference surface and sample, with typical fringes and the resulting fringe patterns for convex, concave and saddle-shaped surfaces.)
sample is separated by several millimetres from the optical reference flat. The fringes are produced by
a telescope/eye-safe laser system and are viewed through the telescope eyepiece. They can also be pho-
tographed or displayed on a CCTV system. Samples can be measured whilst they remain in position on
a precision-polishing jig. The fringes follow the direction of the arrows when the optical flat is pressed
in closer contact with the surface of the sample.
For many interferometry situations, the interferometer mainframe and the optics accessories may all sit on one vibration-isolation table, with the measurement beam oriented horizontally. In many other cases, however, the set-up illustration will show the interferometer in a vertical orientation, either upward- or downward-looking. This popular set-up, shown in Fig. 4.10, is ergonomically convenient, allows test pieces to be changed very quickly, and uses less space.
Flatness measurement interferometry set-ups such as this are used for the metrology of surface flat-
ness of plane elements such as mirrors, prisms, and windows up to 150 mm. The test object must be
held so that the surface under test can be aligned in two axes of tilt.
The transmission flat, which should be of known flatness and shape, serves to shape the beam to
its own shape, and provides a reference wavefront, which is compared to the returning, reflected light
from the test object. Each spatial point in the combined beams is evaluated for the variation between
the wavefront of the transmission flat and the test object. These differences are expressed as interfer-
ence between the two beams.
Fig. 4.10 Set-up of interferometer (MiniFiz interferometer with transmission flat and phase shifter; test object in a 3-jaw chuck on a tip/tilt, right-angle base)
The test object must be held so that the surface under test can be
aligned in two axes of tilt. Using the two-axis mount controls (or ‘tip/
tilt’), adjust tilt to optimize the number of fringes. When aligned, the
interferometer monitor will display black, grey and white bands, (‘fringes’)
as shown in Fig. 4.11 which represents a concave surface.
Fig. 4.11 Fringe pattern
If this instrument, namely, the MiniFIZ, includes a zoom capability, zoom in or out from the test piece to make the object as large as possible in the test view, without clipping the image. This adjustment optimizes the lateral resolution of the measurement, essentially ensuring the largest number of data sampling points.
ADE Phase Shift also recommends using a phase-shifting MiniFIZ, which, combined with the power of a computer and surface-analysis software, will provide greater height detail, point by point, in the data set. Flatness can be estimated by eye if the user is experienced and trained, but precision measurements of the highest order require phase-shifting the interference fringes.
the size of the surface to be tested and the required number of points to be taken. The angular beam-
splitter is mounted on the flatness mirror base. (See Fig. 4.12, Plate 5.)
Before making any measurements, a ‘map’ of the measurement lines should be marked out on the
surface. The length of each line should be an integer multiple of the foot-spacing base selected. There
are two standard methods of conducting flatness measurements:
b. Grid Method in which any number of lines may be taken in two orthogonal directions across
the surface.
Fig. 4.13 ULTRA TEC Precision Gauge Micromount UM1245 (micromount, wafer, jig-mounting plate and jig-conditioning ring)
Cast-Iron Surface Plates These are used after rough machining, followed by seasoning or ageing for a suitable period. Heat treatment (annealing up to 500°C for about three hours) is then carried out on the seasoned plates to relieve internal stresses. The rough-finished surface is scraped suitably until a fairly uniform spotting of the marker is obtained all over the surface, followed by a finishing process such as snowflaking. The accuracy of such a surface plate is ±0.002 to ±0.005 mm for a surface-plate diagonal of 150 mm. CI surface plates are available in two grades.
Granite Surface Plates These have advantages over CI surface plates: more rigidity for the same depth and freedom from corrosion. They provide a high modulus of rigidity and are moisture-free. Metallic objects slide easily on their surface, and they are also economical in use. Sizes are available from 400 × 250 × 50 mm to 2000 × 1000 × 250 mm.
Glass Surface Plates They are also commercially available. These are comparatively light in
weight and free from burr and corrosion. Accuracy varies in the range 0.004 to 0.008 mm. They are
available in sizes of 150 × 150 mm to 600 × 800 mm.
4.4 PARALLELISM
Parallelism is one of the important geometrical relationships used to assess the qualitative aspect of a work/job geometry. Two entities (lines, planes) are said to be parallel to each other when the perpendicular distance measured between them anywhere on the surfaces under test, and in at least two directions, does not exceed an agreed value over a specified length. Parallelism defines the angle between two surfaces of a sample. It can be specified as a thickness difference per unit length or as an angular deviation; e.g., a thickness difference of 1 micron per cm is equivalent to about 20 seconds of arc, or 100 microradians.
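The unit conversion quoted above can be checked directly: a thickness difference per unit length is a small angle, so 1 micron per cm should come out as roughly 100 microradians, i.e., about 20 arc-seconds.

```python
import math

# Convert a wedge (thickness difference per unit length) into an angle.
def wedge_angle(thickness_diff_um, length_cm):
    angle_rad = (thickness_diff_um * 1e-6) / (length_cm * 1e-2)  # small angle
    arcsec = math.degrees(angle_rad) * 3600
    return angle_rad * 1e6, arcsec   # (microradians, arc-seconds)

urad, arcsec = wedge_angle(1.0, 1.0)
print(round(urad, 1), round(arcsec, 1))  # -> 100.0 20.6
```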
i. Parallelism of Two Planes The distance between two planes (surfaces) at any position
should not deviate beyond a minimum value agreed between the manufacturer and the user.
ii. Parallelism of Two Axes (of Two Cylinders) The maximum deviation between the axes of two cylinders at any point may be determined by gently rocking the dial indicator in a direction perpendicular to the axis.
Fig. 4.18 Parallelism of two straight lines, each formed by the intersection of two planes (plane A, dial indicator)
(Figures: parallelism set-ups in which the sample rests on a three-ball plane and is probed by an electro-mechanical gauge, or is sighted through an autocollimator telescope with a tilt screw defining the light path over a jig-mounting plate and conditioning ring.)
Fig. 4.20 Autocollimator
4.5 SQUARENESS
Angular measurement requires no absolute standard, as a circle can be divided into any number of equal parts. There is also a demand for instruments able to make or check quality angular measurements. For example, in the case of a column-and-knee-type milling machine, the cross slide must move exactly at 90° to the spindle axis in order to obtain a truly flat surface during face milling. In a number of cases, the checking of right angles is of prime importance while measuring and/or checking the geometrical parameters of the work. For example, the sliding member of a height gauge (carrying the scriber) must be square to the locating surfaces in order to avoid errors in measurement. Two entities (two lines, two planes, or a line and a plane) are said to be square to each other if their deviation from a true right angle does not exceed an agreed value over a specified length. For this, the reference square may be a right-angle level, a selected plane or line, or an optical square. Permissible errors are specified as errors relating to right angles (in ± microns or millimetres) for a given length. For determining this error, another part of the machine under test is considered as reference, along with the specification of the direction of error. Squareness measurement determines the out-of-squareness of two nominally orthogonal axes by comparing their straightness
values. Squareness errors could be the result of poor installation, wear in machine guideways, an accident that may have caused damage, poor machine foundations, or a misaligned home-position sensor on gantry machines. Squareness errors can have a significant effect on the positioning accuracy and contouring ability of a machine. Figure 4.21 gives some of the representations of squareness (datum surfaces with possible surfaces, median planes and axes, and tolerances such as 0,1 and 0,15).
the parallelism of the faces AC and BD (refer Fig. 4.17). Then the squareness of these faces
with the face CD is to be checked. This instrument consists of a framework with a flat base on
which a knife-edge carrying some indicating unit is mounted. In Fig. 4.23 a dial gauge indicator
is shown.
Fig. 4.23 Square block (indicator method): block with faces A, B, C, D; knife edge; indicator unit (dial gauge)
It is arranged on an accurately horizontal surface, i.e., a surface plate of inspection grade, in such a way that the knife-edge is placed in contact with an approximately vertical surface, and the dial-gauge height is adjusted to make contact near the top of the side of the block. The knife-edge is pushed and slightly pressed against a side of the block, say AC, and the reading is obtained on the indicator. Face BD is then brought into contact with the instrument set-up. The difference between the two readings is twice the error in squareness over the distance between the knife-edge and the dial.
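The arithmetic of the indicator method above is simple enough to sketch: half the difference of the two readings is the squareness error over the knife-edge-to-dial distance. The readings and the 100 mm gauge height below are hypothetical values chosen for illustration.

```python
# Sketch (hypothetical values) of the square-block indicator method:
# the reading difference between faces AC and BD is twice the squareness
# error over the distance between the knife-edge and the dial.

def squareness_error(reading_ac_mm, reading_bd_mm, gauge_height_mm):
    err_mm = abs(reading_ac_mm - reading_bd_mm) / 2.0
    # also express it as microns per 100 mm of height, a common rate form
    per_100mm_um = err_mm / gauge_height_mm * 100.0 * 1000.0
    return err_mm, per_100mm_um

err, rate = squareness_error(0.030, 0.010, 100.0)
print(round(err, 4), round(rate, 1))  # -> 0.01 10.0  (mm, microns per 100 mm)
```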
2. Using NPL Tester Figure 4.24 illustrates the NPL square tester. It consists of a tilting frame, mounted on a knife edge or roller, supported at the end of an arm by the micrometer head.
Fig. 4.24 NPL square tester (straight edge, tilting frame, micrometer head, knife edge)
The frame carries a vertical straight edge with two parallel sides. This instrument is used to test the engineer's
square. For testing, it is kept on the surface plate. The angle of the straight edge with respect to the surface plate can be changed using the micrometer: movement of the micrometer drum tilts the entire frame and, in turn, the measuring surface of the straight edge. The square under test is placed against the surface of the straight edge, and the micrometer height is adjusted to obtain contact along the total length of the straight edge. If the same reading is obtained on both sides of the straight edge, the blade is truly square; if the two readings are not the same, half the difference between them gives the error in squareness.
4. Square Master This is an ideal instrument for standards rooms and machine shops involving single-axis measurement. Measurements of squareness, linear height, centre distance, diameters and steps are possible with this instrument. An optional linear scale for vertical measurement is also available.
Fig. 4.25 Checking squareness of an axis of rotation with the given plane (dial-indicator positions 180° apart about the axis of rotation)
Fig. 4.26 Square master
(Figure: laser-interferometer squareness check — laser head, interferometer, optical square and straightness reflector arranged along the optical reference path.)
the checking of squareness between two horizontal axes. However, it is also possible to check the
squareness between the horizontal and vertical axis with the addition of a special retroreflector and
turning mirror.
When very high accuracy is wanted in measuring squareness, higher even than that of a laser transmitter, a method can be used in which the laser transmitter is indexed through 180°. The method is suitable for measuring squareness relative to two points on a reference plane, or for measuring plumb, using the vials on the laser transmitter as reference.
4.6 ROUNDNESS
Measuring differences in diameter is not sufficient to establish roundness. For example, a shape of constant width, such as the 50p coin, has a constant diameter when measured across its centre, but is clearly not round. To measure any component for roundness, we require some form of datum.
In the case of a cylinder, cone or sphere, roundness is a condition of a surface of revolution where all points of the surface intersected by a plane perpendicular to the common axis (in the case of a cylinder) or passing through a common centre (in the case of a sphere) are equidistant from the axis (or centre). Roundness is usually assessed by rotational techniques.
1. Least Squares Reference Circle (LSRC) The least-squares reference circle is a circle for which the sum of the areas inside the circle equals the sum of the areas outside it, kept to a minimum separation. The out-of-roundness value is the difference between the maximum and minimum radial departures from the reference-circle centre (in Fig. 4.30, it is (P + V)). This is a very convenient reference circle to derive, as it is mathematically precise.
Fig. 4.30 LSRC
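The text only names the LSRC; the algebraic "Kasa" fit used in this sketch is one common way of computing it, and the three-lobed test profile is synthetic. The out-of-roundness is then the (P + V) value: maximum minus minimum radial departure from the fitted centre.

```python
import math

# Least-squares (Kasa) circle fit: solve x^2 + y^2 = A*x + B*y + C in the
# least-squares sense, then centre = (A/2, B/2), R = sqrt(C + xc^2 + yc^2).
def lsrc(points):
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    z = [x * x + y * y for x, y in points]
    sz = sum(z)
    sxz = sum(p[0] * zi for p, zi in zip(points, z))
    syz = sum(p[1] * zi for p, zi in zip(points, z))
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(M)
    sol = []
    for i in range(3):           # Cramer's rule
        Mi = [row[:] for row in M]
        for j in range(3):
            Mi[j][i] = r[j]
        sol.append(det3(Mi) / d)
    A, B, C = sol
    xc, yc = A / 2.0, B / 2.0
    return xc, yc, math.sqrt(C + xc * xc + yc * yc)

def out_of_roundness(points):
    xc, yc, _ = lsrc(points)
    radii = [math.hypot(x - xc, y - yc) for x, y in points]
    return max(radii) - min(radii)   # P + V about the LSRC centre

# Synthetic 3-lobed profile: r = 5 + 0.01*cos(3t) about centre (1, 2).
pts = [(1 + (5 + 0.01 * math.cos(3 * t)) * math.cos(t),
        2 + (5 + 0.01 * math.cos(3 * t)) * math.sin(t))
       for t in (2 * math.pi * k / 360 for k in range(360))]
xc, yc, R = lsrc(pts)
print(round(xc, 3), round(yc, 3), round(R, 3), round(out_of_roundness(pts), 3))
```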
Fig. 4.34 Component rotation: (a), (b)
i. Instrument error
ii. Component set-up error
iii. Component form error
By using high-precision mechanics and stable electronics, instrument error is kept too small to be significant. Component set-up error is minimized first by accurate centring and levelling, and the residual error is then removed by electronic or software means. Form error is the area of interest: once the first two types of error are excluded, it can be highly magnified and used to derive a measure of the out-of-roundness.
b. Rotating Stylus An alternative method is to rotate the stylus while keeping the component stationary. This is usually done on small high-precision components, but is also useful for measuring large, non-circular components; for example, measuring a cylinder bore by this method does not require rotation of the complete engine block. This type of measuring system tends to be more accurate, due to the continuous loading on the spindle, but is limited by the reach of the stylus and spindle.
component. The clamp should also have a three-point location wherever possible. For components
that are very small and fragile, care must be taken when clamping, and it is often necessary to consider
a reduction in stylus force to prevent measurement errors.
b. Stylus must be Central to the Workpiece The centre of the stylus tip and the centre
of the component ideally should be in line with the measuring direction of the stylus. Any errors in the
alignment of the component centre to the stylus tip centre will cause cosine errors.
If we look at the drawing in Fig. 4.37 (a), we can see that the stylus tip is in line with the component centre. In Fig. 4.37 (b) there is a cresting error causing a cosine error. Cosine errors cause a number of problems: the stylus is presumed to be measuring at the 0° position on the table, whereas the actual angular position is a few degrees off centre. This causes problems when calculating the eccentricity position and the amplitude of the deviations of the profile. For large components, small cresting errors have a small effect; but, as Fig. 4.37 (c) shows, for components with smaller diameters the cosine errors are significantly larger. Therefore, for components with small diameters, good cresting is extremely critical.
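The size dependence described above can be quantified with a simple geometric sketch (the 0.1 mm offset is a hypothetical value, not from the text): with the stylus offset a distance d from the true centre line, the radial reading is short by approximately d²/(2R), which grows as the component radius R shrinks.

```python
import math

# Cresting (cosine) error for a stylus offset d from the centre line of a
# component of radius R: exact chord geometry, approx. d^2 / (2*R).
def cresting_error(offset_mm, radius_mm):
    return radius_mm - math.sqrt(radius_mm**2 - offset_mm**2)

for R in (50.0, 5.0, 1.0):           # error grows roughly as 1/R
    print(R, round(cresting_error(0.1, R), 6))
```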
c. Maintaining the Required Stylus Force This depends on the component. Wherever
possible, the lowest stylus force should be used without causing any detriment to the measurement. Too
light a stylus force may cause the stylus tip to bounce and leave the surface, especially on surfaces with
large form errors or surfaces with holes or other interruptions. Too large a stylus force may damage the
component, or cause a movement during measurement or cause a ringing phenomenon, which appears
as a high-frequency noise on the measurement data.
d. Need to Centre and Level the Component Centering and leveling is critical to the
measurement process. Any large eccentricities will affect results. However, centering and leveling is
not always easy or practical especially when trying to centre and level with manual devices. Although
mathematics can be used to remove some of the effects of eccentricity, it is always best to centre and
level as accurately as possible. In general, the more off-centre the component, the greater the residual
eccentricity error even after mathematical correction.
e. Cleaning the Workpiece Most roundness systems measure extremely small deviations and
any dirt on the workpiece will show as deviations and affect the results. In all cases, it is important to
clean the workpiece before any measurement is completed. Below is an example (refer Fig. 4.38) of a
component that has been measured without being cleaned.
There are various methods of cleaning—some are not as effective as others. Ultrasonic cleaning is
good except that the component will be warm and needs a normalizing time. Even then finger marks
must still be removed using very fine tissue paper, such as lens tissue, which is lint free.
f. Preventing Stylus Damage A stylus stop attachment can be used. This usually consists
of some form of mechanical device that prevents the stylus from reaching its full range of movement
in the negative direction (i.e., down a hole). However, this is purely a mechanical device that prevents
damage to the stylus. Some deviation will still show on the results where the stylus falls in and out of
the hole and is ‘resting’ on the stop.
h. Requirement of a Long Stylus On some types of components, such as deep internal bores, it may be necessary to use a longer stylus in order to reach the measurement area. When using long styli, factors such as stylus force may need adjustment to allow for the extra leverage and weight of the stylus.
Increasing the stylus length will also decrease the resolution of the results. This is not always a problem
but may be on higher precision surfaces. On some systems, it is possible to increase the reach of the
gauge connected to the stylus rather than increase the length of the stylus. These are sometimes known
as gauge extension tubes.
The ability to analyze harmonics is very useful in order to predict a component’s function or to con-
trol the process by which the component is manufactured. If there is data missing, it becomes difficult
to determine the harmonic content of the surface. However, there are methods of calculating harmon-
ics on interrupted surfaces but they are not widely used.
The fundamental basis of the instrument’s design is to use a spindle with a highly reproducible
rotation and then use a novel error-separation technique to reduce significantly the errors associated
with the lack of perfection of the spindle geometry. The instrument used to make the measurements is
capable of collecting 2000 data points per revolution.
In operation, the component to be measured is placed on a rotary stage and data is collected at
several orientations of the stage. The Fourier-series representation of each measured trace is deter-
mined. A mathematical model, which relates the Fourier representations of the component errors and
the spindle errors to those of the traces, is then solved. The resulting Fourier representation of the
component error is used to determine the roundness of the component and to provide values of the
component error at points around the circumference.
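The harmonic (Fourier-series) analysis described above can be sketched with a plain discrete Fourier transform of the radial deviations, giving the amplitude of each undulation per revolution (UPR). The trace below is synthetic (a pure 3-lobe form); this is an illustration of the general technique, not the instrument's proprietary error-separation method.

```python
import cmath, math

# DFT amplitude (same units as the input trace) of each undulation per
# revolution, for equally spaced samples around the circumference.
def harmonic_amplitudes(deviations, max_upr=10):
    n = len(deviations)
    amps = {}
    for k in range(1, max_upr + 1):
        coeff = sum(d * cmath.exp(-2j * math.pi * k * i / n)
                    for i, d in enumerate(deviations))
        amps[k] = 2.0 * abs(coeff) / n
    return amps

n = 360
trace = [0.004 * math.cos(3 * 2 * math.pi * i / n) for i in range(n)]  # 3 lobes, 4 um
amps = harmonic_amplitudes(trace)
print(round(amps[3] * 1000, 3), round(amps[2] * 1000, 3))  # -> 4.0 0.0  (microns)
```

With an interrupted surface (missing data) this simple DFT no longer applies directly, which is the difficulty the text mentions.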
4.7 CYLINDRICITY
Cylindricity values are becoming more important in the measurement of components, particularly as an aid to improving the efficiency and cost-effectiveness of systems; in automotive fuel injection, for example, the need for greater economy demands greater precision in components. To describe cylindricity we require a minimum of two roundness planes, which form a cylinder; however, in the majority of cases this is not enough information, and it is often necessary to increase the number of measured planes. The number of planes depends on the component and the application. There are many ways of defining the cylindricity of a component.
The best way to describe the cylindricity of a component is by the minimum-zone method of analysis. This can be described as the radial separation of two coaxial cylinders, fitted to the total measured surface under test, such that their radial difference is a minimum. For the purposes of inspection, a tolerance may be attached to the cylindricity analysis; in the above case it may be written as the surface of the component is required to lie between two coaxial cylindrical surfaces having a radial separation of the specified tolerance. Refer Fig. 4.43.
Fig. 4.42 Representation of cylindricity (Tol. 0,01 applied to the actual surface)
reference circles. All reference circles are used to establish the centre of the component. Roundness
is then established as the radial deviations from the component centre. There are four internationally
recognized reference cylinders. These are the Least Squares, Minimum Zone, Maximum Inscribed and
Minimum Circumscribed cylinders.
a. Least Squares The least squares cylinder is constructed from the average radial departure of
all the measured data from the least-squares axis.
b. Minimum Zone The minimum-zone cylinder can be described as the total separation of two
concentric cylinders, which totally enclose the data and are kept to a minimum separation.
c. Minimum Circumscribed The minimum circumscribed cylinder is the smallest cylinder that totally encloses the measured data.
d. Maximum Inscribed The maximum inscribed cylinder is the largest cylinder that is enclosed
by the data.
(Figure: least-squares lines through the profile data at each cross section; their centres define a common axis, with radial deviations R.)
Fig. 4.49 Examples of runout: (i)–(iii) datum diameters and datum surfaces with radial readings R about a common axis
4.8 COAXIALITY
Coaxiality is the relationship of one axis to another. There are two recognized methods of calculating
coaxiality.
i. ISO has defined coaxiality as the diameter of a cylinder that is coaxial with the datum axis and
will just enclose the axis of the cylinder referred for coaxiality evaluation.
ii. DIN Standard has defined coaxiality as the diameter of a cylinder of defined length, with its
axis co-axial to the datum axis that will totally enclose the centroids of the planes forming the
cylinder axis under evaluation.
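The ISO definition above can be sketched numerically under a simplifying assumption: take the datum axis as the z-axis, and represent the evaluated cylinder's axis by the centre point measured at each plane. The coaxiality value is then the diameter of the smallest datum-coaxial cylinder enclosing those centre points. The centre coordinates below are hypothetical.

```python
import math

# Sketch of the ISO coaxiality definition, assuming the datum axis is the
# z-axis: the value is twice the largest radial distance of any measured
# plane centre from that axis.
def coaxiality_iso(plane_centres):
    """plane_centres: (x, y) centre of the profile at each measured height."""
    return 2 * max(math.hypot(x, y) for x, y in plane_centres)

centres = [(0.002, 0.000), (0.001, 0.003), (-0.002, -0.001)]  # hypothetical, mm
print(round(coaxiality_iso(centres), 6))  # -> 0.006325
```

The DIN variant would instead fit a cylinder of defined length enclosing the centroids; only the ISO form is sketched here.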
Fig. 4.50 Coaxiality (datum axis, component axis and coaxiality axis)
Eccentricity is the term used to describe the position of the centre of a profile relative to some datum
point. It is a vector quantity in that it has magnitude and direction. The magnitude of the eccentricity is
expressed simply as the distance between the datum point and profile centre. The direction is expressed
simply as an angle from the datum point to the profile centre. Concentricity is twice the eccentricity and
is the diameter of a circle traced by the component centre orbiting about the datum axis.
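Since eccentricity is a vector (magnitude and direction) and concentricity is twice its magnitude, the relationships above reduce to a few lines of arithmetic. A minimal sketch (the function names are our own):

```python
import math

def eccentricity(datum, centre):
    """Eccentricity of a profile centre relative to a datum point:
    magnitude is the distance, direction is the angle from datum to centre."""
    dx, dy = centre[0] - datum[0], centre[1] - datum[1]
    magnitude = math.hypot(dx, dy)
    direction_deg = math.degrees(math.atan2(dy, dx))
    return magnitude, direction_deg

def concentricity(datum, centre):
    """Concentricity = twice the eccentricity: the diameter of the circle
    traced by the profile centre orbiting about the datum axis."""
    return 2.0 * eccentricity(datum, centre)[0]

# Datum at the origin, profile centre displaced 0.03 mm in x, 0.04 mm in y
e, ang = eccentricity((0.0, 0.0), (0.03, 0.04))
print(round(e, 3), round(ang, 1))             # 0.05 mm at 53.1 degrees
print(round(concentricity((0.0, 0.0), (0.03, 0.04)), 2))   # 0.1 mm
```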
1. Form-tester On-site measuring instruments for assessing form and location deviations as per
DIN ISO 1101 (e.g., roundness errors) are indispensable today for rapidly determining and eliminating
manufacturing errors, reducing rework and rejects. Mahr meets this challenge with its easy-to-operate,
flexible and high-performance MMQ10 form-measuring station shown in Fig. 4.51. Its features include
the following.
• Universal form and positional-tolerance check system for roundness measurements on the shop
floor and in the measuring room
• Evaluation either with the FORM-PC or with an integrated evaluation PC
• FORM-PC as a powerful evaluation system running under Windows 98 or Windows NT
• Convenient software for the evaluation of deviations of form and position as per DIN ISO 1101:
roundness, roundness within segments, radial and axial runout, concentricity, coaxiality, flatness,
straightness, parallelism, and perpendicularity
Description The instrument shown in Fig. 4.52 (Plate 6) consists of a long, slotted CI base
carrying a hardened vertical column. The vertical column carries a lead screw and nut, which traverse
a floating C-frame up and down as the hand wheel is rotated. The floating C-frame carries a fixed ball
point on one side and a sliding ball point with a dial gauge on the other. The diameter of the piston is
checked between these two points, and the taper over the entire length of the piston is checked as the
C-frame traverses up and down. The piston is mounted on the register diameter by pushing it onto a
hardened, ground and lapped seat, which carries a circular disc graduated in angles and rotating around
a vertical shaft. The piston is rotated by hand and the ovality is read on the dial gauge in the C-frame.
The angular difference between the major and minor axes is read on the disc against a cursor line.
Application This instrument is useful for checking the piston ovality (d−m), piston major
diameter (d), piston minor diameter (m), taper over the total length of the piston (l), and the angular
difference between the major and minor axes.
3. Roundcyl 500 It has been designed to meet the manufacturer’s requirements for speed, high
accuracy, simple operation and at a price that can be justified. It is a rugged instrument which can be
used in the laboratory as well as on the shop floor.
It has a measuring capacity that can accommodate the majority of the components needed to be
analyzed for geometric form. It meets the stringent demands of quality assurance in the global environ-
ment. Using this instrument, cylindricity can be measured by collecting data at 3 to 10 levels and
then plotting the graph. Roundcyl-500 uses an IBM-compatible computer with standard peripherals.
The user has an option to use his own hardware, provided it meets the specification criteria. Communi-
cation between the computer and the operator is via a simple drop-down menu. Figure 4.53(a) ( Plate
6) shows the view of Roundcyl-500, and 4.53(b) (Plate 6) shows a user-friendly optional menu bar.
Some of the measured profiles are also shown. Once a component is measured, the results can be further
analyzed by changing filters or magnification, or by eliminating the centering correction. Results can be stored on a
PC hard disk for reanalyzing at a later date. Different measuring sequences can be saved on a hard disk
enabling the Roundcyl-500 to be used in semi-automatic mode.
It has a measuring capacity of 500 mm maximum diameter and 25 mm maximum workpiece height. Its
vertical and horizontal travels are 320 mm and 250 mm respectively. For measurement, the instrument uses
a lever-type gauge head with minimized-friction probe movement, having a measurement range of ±300 µm
with a standard stylus. For measurement, contact with the workpiece surface is made with a 2-mm diameter
steel ball with gauging pressure of approximately 0.16 N. The rotating spindle has an air bearing and is
mounted directly on a granite plate. With the use of this instrument, the geometrical parameters measured
are cylindricity, roundness, concentricity, coaxiality, circularity, flatness, squareness, and parallelism.
It accommodates vertical and horizontal indicators with either a 5/32 stem diameter or a dovetail adap-
tation. The indicator holder can be interchanged from side to side on the posts for transfer uses, and can
be inverted on the posts for taller parts. It is manufactured from hardened tool steel for durability.
Review Questions
1. List down methods of straightness measurement. Discuss straightness measurement using spirit
level.
2. Explain the concept of a reference plane.
3. Explain the following terms:
(a) Straightness (b) Flatness (c) Squareness (d) Roundness
4. State the importance of geometric tolerance of manufacturing components.
5. Describe the following methods of checking parallelism:
a. Parallelism of two planes
b. Parallelism of two axes to a plane
c. Parallelism of two axes
Straightness, Flatness, Squareness, Parallelism, Roundness, and Cylindricity Measurements 107
‘Machine tool metrology is necessary to ensure that a machine tool is capable of producing
products with desired accuracy and precision…’
D Y Kulkarni, Inteltek Co. Ltd., Pune
Geometric accuracy largely influences the product quality and precision to be maintained during the
service life of a machine tool. The distinct field of metrology, primarily concerned with geometric
tests (alignment) of machine tools under static and dynamic conditions, is defined as machine tool
metrology. Geometric tests are carried out to check the grade of manufacturing accuracy describ-
ing the degree of accuracy with which a machine tool has been assembled. Alignment tests check the
relationship between various elements such as forms and positions of machine-tool parts and displace-
ment relative to one another, when the machine tool is unloaded. Various geometrical checks generally
carried out on machine tools are as follows.
i. Work tables and slideways for flatness
ii. Guideways for straightness
iii. Columns, uprights and base plates for deviation from the vertical and horizontal planes
iv. True running and alignment of shafts and spindles relative to other axes and surfaces
v. Spindles for correct location and their accuracy of rotation
vi. Ensuring the accuracy of rotation involves checking eccentricity, out of roundness, periodical
and axial slip, camming
vii. Parallelism, equidistance, alignment of sideways, and axis of various moving parts with respect
to the reference plane
viii. Checking of lead screws, indexing devices and other subassemblies for specific errors
1. Dial Gauges These are mostly used for alignment tests. The dial gauges used should have a
measuring accuracy of the order of 0.01 mm. The initial plunger pressure should vary between 40 and
100 grams, and for very fine measurements a pressure as low as 20 grams is desirable. Too low a spring
pressure on the plunger is a source of error in swing-over measurements, because in the upper position
the spring pressure and plunger weight act in the same direction, while in the lower position they act in
opposite directions. The dial gauge is fixed to a robust and stiff base (e.g., a magnetic base) and bars to
avoid displacements due to shock or vibration.
2. Test Mandrels These are used for checking the true running of the spindle, and deliver quality
checks such as straightness and roundness during the acceptance test. There are two types of test
mandrels, namely, a) mandrels with a cylindrical measuring surface and a taper shank that can be
inserted into the taper bore of the main spindle, and b) cylindrical mandrels that can be held between
centres. Test mandrels are hardened, stress-relieved and ground to ensure accuracy in testing. The
deflection caused by the weight of the mandrel is known as 'natural sag', which cannot be overlooked.
Sag occurs when the mandrel is fixed between centres, and is more marked when the mandrel is
supported at one end only by the taper shank while the outer end overhangs freely. To keep the sag
within permissible limits, the lengths of mandrels with a taper shank vary between 100 and 500 mm.
3. Spirit Levels Spirit levels are used in the form of bubble tubes mounted on cast-iron bases.
Horizontal and frame levels are the two types of spirit levels used for alignment tests. Spirit levels
are used for high-precision measurements having a tolerance of 0.02 mm to 0.04 mm per 1 m, and
having a sensitivity of about 0.03 mm to 0.05 mm per 1 m for each division. A bubble movement of one
division corresponds to a change in slope of 6 to 12 seconds.
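The two figures above are mutually consistent: a sensitivity of 0.03 to 0.05 mm over a 1-m base corresponds to a slope of roughly 6 to 12 arc-seconds per division. A quick check (the function name is our own):

```python
import math

def division_to_arcseconds(sensitivity_mm_per_m):
    """Slope change (in arc-seconds) corresponding to one bubble division
    of a spirit level whose sensitivity is given in mm per 1 m base length."""
    slope_rad = math.atan(sensitivity_mm_per_m / 1000.0)
    return math.degrees(slope_rad) * 3600.0

for s in (0.03, 0.05):
    print(s, "mm/m ->", round(division_to_arcseconds(s), 1), "arcsec")
# 0.03 mm/m gives about 6.2 arcsec; 0.05 mm/m about 10.3 arcsec
```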
4. Straight Edges and Squares Straight edges are made of heavy, well-ribbed cast iron or
steel and are made free of internal stresses. Their bearing surfaces are as wide as possible. The error at
the top of a standard square should be less than ±0.01 mm. A steel square is a precision tool used for
scribing lines and also for comparing the squareness of two surfaces with each other.
6. Waviness-Meter It is used for recording and examining the surface waviness with a magni-
fication of 50:1.
7. Autocollimator This can be used for checking deflections of long beds in a horizontal, vertical
or inclined plane, owing to its high measuring sensitivity.
The sequence in which the alignment/geometrical tests are given is related to the subassemblies of a
machine and does not define the practical order of testing. In order to make checking or mounting of
Metrology of Machine Tools 111
instruments easier, tests are carried out in any convenient sequence. When inspecting a machine, it is
necessary to carry out all the tests described below, except for the alignment tests, which may be omitted
by mutual agreement between the buyer and the manufacturer.
Alignment tests alone are inadequate for machine testing as they do not include variations in rigidity
of machine-tool components, quality of their manufacture and assembly, the influence of the machine-
fixture, cutting tool-workpiece, and system rigidity on the accuracy of machining. A performance
(practical) test therefore consists of checking the accuracy of a finished component under dynamic
loading, and is carried out to know whether the machine tool is capable of producing parts within the
specified limits.
These tests should be carried out after the primary idle running of the machine tool with essential parts
of the machine having a stabilized working temperature. Moreover, these performance tests are carried
out only with the finishing cuts and not with roughing cuts, which are liable to generate appreciable cut-
ting forces. The manufacturer specifies the details of test pieces, cutting and test conditions.
Now let us consider the Indian Machine Tool Manufacturers' Association's Standard—IMTMAS:
5-1988, which describes both geometrical and practical tests for CNC turning centres with a horizontal
spindle up to and including a 1250-mm turning diameter having corresponding permissible deviations with
reference to the IS: 2063-1962-Code for testing machine tools. (For conducting a performance test, the
specimens to be manufactured are also standardized, one such standard specimen is shown in Fig. 5.1.)
When establishing the tolerance for a measuring range different from that indicated in standards IS:
2063–1962, it is taken into consideration that the minimum tolerance is 0.002 mm for any proportional
value, and the calculated value is rounded off to the nearest 0.001 mm. However, the least count of
all measuring instruments need not be finer than 0.001 mm. The testing instruments are of approved
type and are to be calibrated at a recognized temperature conforming to the relevant published Indian
Standards. Wherever alternative methods of testing are suggested, the choice of the method of testing
is left to the manufacturer.
[Fig. 5.1 Standard test specimen: a stepped turned piece with diameters ∅D, ∅D1–∅D3, lengths L1–L5, tapers, radii R1 and R2, a 1 × 45° chamfer and 45°/60° faces]
Table 5.2 Specifications of alignment testing of column and knee type of milling machine
[Table content not reproduced: alignment-test sketches with datum faces A–D and test numbers]
A CNC turning centre also has a work-holding spindle, which can be oriented and driven discretely and/or
as a feed axis. Machine size ranges, by turning diameter (the maximum diameter that can be turned over
the bed): up to 160 mm, 160 mm to 315 mm, 315 mm to 630 mm, and 630 mm to 1250 mm. While preparing this standard,
IMTMAS considered assistance from UK proposal ISO TC 39/SC2 (Secr. 346) N-754, JIS B 6330 and
JIS B 6331, ISO 1708 and ISO 6155 Part-I.
Table 5.4 Specifications of a CNC turning centre
[Table content not reproduced: test set-up sketches and dimension callouts for alignment and practical tests 1–25 (carriage and wire-deviation set-ups, bed, cross slide, axial and radial tool tests, and test-piece diameters ∅d1, ∅D1–∅D6)]
Review Questions
6. Describe the set-up for testing the following in case of a horizontal milling machine.
a. Work-table surface parallel with the longitudinal movement
b. True running of the axis of rotation of the arbor
7. Explain the procedure with a neat sketch to check the alignment of both centres of a lathe machine
in a vertical plane.
8. Explain the principle of alignment, as applied to measuring instruments and machine tools.
9. State the geometrical checks made on machine tools before acceptance.
10. Distinguish between ‘alignment test’ and ‘performance test’.
11. Name the various instruments required for performing the alignment tests on machine tools.
12. Name the various alignment tests to be performed on the following machines. Describe any two of
them in detail using appropriate sketches.
a. Lathe
b. Drilling Machine
6 Limits, Fits and
Tolerances
(Limit Gauge and its Design)
6.1 INTRODUCTION
The proper functioning of a manufactured product for a designed life depends upon its correct size rela-
tionship between the various components of the assembly. This means that components must fit with
each other in the required fashion. (For example, if a shaft is to slide in a hole, there must be enough
clearance between the shaft and the hole to allow an oil film to be maintained for lubrication.) If the
clearance between two parts is too small, it may lead to seizing of the components, and if the clearance
is too large, there will be vibration and rapid wear, ultimately leading to failure. To achieve the required
conditions, the components must be produced with exact dimensions specified at the design stage in
part drawing. But, every production process involves mainly three elements, viz., man, machine and
materials (tool and job material). Each of these has some natural (inherent) variations, which are due to
chance causes and are difficult to trace and control, as well as some unnatural variations which are due
to assignable causes and can be systematically traced and controlled. Hence, it is very difficult to pro-
duce extremely similar or identical (sized) components. Thus, it can be concluded that due to inevitable
inaccuracies of manufacturing methods, it is not possible to produce parts to specified dimensions but
they can be manufactured economically to a size that lies between two limits. The terms shaft and hole
refer to external and internal dimensions respectively. By specifying a definite size for one and varying
the other, we could obtain the desired fit between the shaft and the hole; practically, however, it is
impossible to do so. Hence, generally, the degree of tightness or looseness between the two mating
parts, which is called the fit, is specified.
The concept of mass production originated with the automobile industry. MODEL-T of Ford Motors
was the first machine to be mass-produced. The concept of interchangeability was introduced first in
the United States. But in the early days, it was aimed at quick and easy replacement of damaged parts by
attaining greater precision in manufacture and not at achieving chip products in large quantities. Till the
1940’s, every component was manufactured in-house. After the 1940’s, however, the automobile com-
panies started outsourcing for carrying out roughing operations. Slowly and gradually, the outsourcing
moved on from roughing components to finished components and from finished components to fin-
ished assemblies. The automobile industry started asking suppliers to plan for the design, development
and manufacture of products to be used in producing cars and trucks.
In mass production, the repetitive production of products and their components entirely depends
upon interchangeability. When one component, chosen at random, assembles properly with any mating
component, also chosen at random, while satisfying the functionality aspect of the assembly/product, it
is known as interchangeability. In other words, it is a condition which exists when two or more items
possess such functional and physical characteristics so as to be equivalent in performance and durability;
and are capable of being exchanged one for the other without alteration of the items themselves, or
of adjoining items, except for adjustment, and without selection for fit and performance. As per ISO-
IEC, interchangeability is the ability of one product, process or service to be used in place of another
to fulfill the same requirements.
This condition that exists between devices or systems that exhibit equivalent functionality,
interface features and performance to allow one to be exchanged for another, without alteration,
and achieve the same operational service is called interchangeability; it may be regarded as an alternative
term for compatibility. Interchangeability requires uniformity in the size of the components produced.
The manufacturing time is reduced and parts, if
needed, may be replaced without any difficulty. For example, if we buy a spark plug for a scooter from
the market, we find that it fits directly into the threaded hole in the cylinder head; we need only specify
the size of the spark plug to the shopkeeper, because the threaded-hole and spark-plug dimensions are
standardized and designed to fit each other.
Standardization is necessary for interchangeable parts and is important for economic reasons. Some
examples are shown in Fig. 6.1.
In mass production, since the parts need to be produced in minimum time, certain variations are
allowed in the sizes of parts. Shafts and hole sizes are specified and acceptable variation in the size
is specified. This allows deviation from size in such a way that any shaft will mate with any hole and
functions correctly for the designed life of the assembly. But the manufacturing system must have the
ability to interchange the system components with minimum effect on the system accuracy. And inter-
changeability ensures the universal exchange of a mechanism or assembly. Another parallel terminol-
ogy, ‘exchangeability’ is the quality of being capable of exchange or interchange.
[Figure: (a) rolled-ball screw assembly and stud, (b) roller bearing assembly, (c) drill chuck assembly]
Fig. 6.1 Examples of interchangeability
Limits, Fits and Tolerances 129
Using interchangeability, the production of mating parts can be carried out at different places by
different operators, which reduces assembly time considerably along with reducing skill requirements at
work. Proper division of labour can be done. One important advantage is the replacement of worn-out
or defective parts and repair becomes very easy.
A product’s performance is often influenced by the clearance or, in some cases, by the preload of its
mating parts. Achieving consistent and correct clearances and preloads can be a challenge for assem-
blers. Tight tolerances often increase assembly costs because labour expenses and the scrap rate go up.
The tighter the tolerances, the more difficult and costly the component parts are to assemble. Keeping
costs down while maintaining tight assembly tolerances can be made easier by a process called selective
assembly, or match gauging.
The term selective assembly describes any technique used when components are assembled from sub-
components such that the final assembly satisfies higher tolerance specifications than those used to
make its subcomponents. The use of selective assembly is inconsistent with the notion of interchange-
able parts, and the technique is rarely used at this time. However, certain new technologies call for
assemblies to be produced to a level of precision that is difficult to reach using standard high-volume
machining practices.
To match gauge for selective assembly, one group of components is measured and sorted into groups
by dimension, prior to the assembly process. This is done for both mating parts. One or more components
are then measured and matched with a presorted part to obtain an optimal fit to complete the assembly.
This results in complete protection against defective assemblies and reduces the matching cost. Consider
the case of bearing assembly on a shaft (shown in Fig. 6.2), done by the selective assembly method.
Pick and measure a shaft. If it is a bit big, pick a big bearing to get the right clearance. If it is a bit
small, pick a small bearing. For this to work over a long stretch, there must be about the same number
of big shafts as big bearings, and the same for small ones.
Fig. 6.2 Bearing assembly on a shaft
By focusing on the fit between mating parts, rather than the
absolute size of each component, looser component tolerances
can be allowed. This reduces assembly costs without sacrificing product performance. In addi-
tion, parts that fall outside their print tolerance may still be useable if a mating part for it can be
found or manufactured, thus reducing scrap.
Consider the example of the assembly of a shaft with a hole. Let the hole size be 25 ± 0.02 mm and
the clearance required for assembly be 0.14 mm on the diameter. Let the tolerance on the hole and the
shaft each be 0.04 mm. Then a hole diameter of 25 ± 0.02 mm and a shaft diameter of 24.88 ± 0.02 mm
could be used. By sorting and grading, the shafts and holes can be economically and selectively
assembled to the required clearance by pairing like-graded groups.
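With the printed bands, random assembly of any hole (24.98–25.02 mm) with any shaft (24.86–24.90 mm) gives clearances anywhere from 0.08 to 0.16 mm; grading each band into groups and mating like-numbered groups narrows the spread about the required clearance. A sketch of the grading arithmetic (the group count and helper names are our own):

```python
def grade(size, low, high, n_groups):
    """Selective-assembly group index (0 .. n_groups-1) of a measured
    size within the tolerance band [low, high]."""
    width = (high - low) / n_groups
    g = int((size - low) / width)
    return min(max(g, 0), n_groups - 1)      # clamp sizes on the boundary

# Figures from the example: hole 25 +/- 0.02 mm, shaft 24.88 +/- 0.02 mm
HOLE, SHAFT, N = (24.98, 25.02), (24.86, 24.90), 2

def clearance_range_when_paired(g):
    """Clearance extremes (mm) when hole group g is mated with shaft group g."""
    hw = (HOLE[1] - HOLE[0]) / N
    sw = (SHAFT[1] - SHAFT[0]) / N
    h_lo, s_lo = HOLE[0] + g * hw, SHAFT[0] + g * sw
    return round(h_lo - (s_lo + sw), 3), round((h_lo + hw) - s_lo, 3)

for g in range(N):
    print("group", g, "clearance range", clearance_range_when_paired(g))
# Random assembly spans 0.08-0.16 mm; each graded pair spans 0.10-0.14 mm
```

Doubling the number of groups halves the clearance spread again, at the cost of more sorting and stock-balancing effort.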
The ISO system of tolerances and fits can be applied to the tolerances and deviations of smooth parts
and to fits created by their coupling. It is used particularly for cylindrical parts with round sections,
though tolerances and deviations in this standard can also be applied to smooth parts of other sections.
Similarly, the system can be used for coupling (fits) of cylindrical parts and for fits with parts having
two parallel surfaces (e.g., fits of keys in grooves).
The primary aim of any general system of standard fits and limits is to give guidance to the user for
selecting basic fundamental clearances and interferences for a given application, and, for a fit, to
determine the tolerances and deviations of the parts under consideration according to the standard ISO 286:1988.
This standard is identical with the European standard EN 20286:1993 and defines an internationally
recognized system of tolerances, deviations and fits. The standard ISO 286 is used as an international
standard for linear dimension tolerances and has been accepted in most industrially developed coun-
tries in identical or modified wording as a national standard ( JIS B 0401, DIN ISO 286, BS EN 20286,
CSN EN 20286, etc.). In India, we follow Indian Standards (i.e., IS: 919). This standard specifies the 18
grades of fundamental tolerances, which are the guidelines for accuracy of manufacturing. The Bureau
of Indian Standards (BIS) recommends a hole-basis system and the use of a shaft-basis (unilateral or
bilateral) system is also included. This standard uses terms for describing a system of limits and fits.
These terminologies can be well explained using the conventional diagram shown in Fig. 6.3.
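For sizes up to 500 mm, ISO 286 derives the grades from a standard tolerance unit i = 0.45·D^(1/3) + 0.001·D (in µm, with D the geometric mean of the diameter step in mm), and grades IT5 to IT16 are fixed multiples of i (IT6 = 10i, IT7 = 16i, IT8 = 25i, and so on). A sketch of the calculation — a simplified illustration, not a substitute for the standard's tables:

```python
import math

# Multiples of the standard tolerance unit i for grades IT5-IT16 (ISO 286)
IT_MULTIPLES = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64,
                11: 100, 12: 160, 13: 250, 14: 400, 15: 640, 16: 1000}

def it_tolerance_um(d_low, d_high, grade):
    """Fundamental tolerance in micrometres for a diameter step [d_low, d_high]
    in mm (step upper limit <= 500 mm) and IT grade 5-16."""
    D = math.sqrt(d_low * d_high)                 # geometric mean of the step
    i = 0.45 * D ** (1 / 3) + 0.001 * D           # standard tolerance unit, um
    return IT_MULTIPLES[grade] * i

# 30-50 mm diameter step, grade IT7: the tables list 25 um
print(round(it_tolerance_um(30, 50, 7)))   # -> 25
```

The rounded results match the tabulated values for the common grades; the standard itself also defines grades IT01–IT4 and rounding rules that this sketch omits.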
1. Shaft The term ‘shaft’, used in this standard has a wide meaning and serves for specification of
all outer elements of the part, including those elements, which do not have cylindrical shapes.
2. Hole The term ‘hole’ can be used for specification of all inner elements regardless of their
shape.
3. When an assembly is made of two parts, one is known as the male (outer element of the part)
surface and the other as the female (inner element of the part) surface. The male surface is referred to
as the 'shaft' and the female surface as the 'hole'.
4. Basic Size The basic size or nominal size is the standard size for the part and is the same both
for the hole and its shaft. This is the size obtained by calculation for strength.
[Fig. 6.3 Conventional diagram of limits and fits: hole and shaft tolerance zones about the zero line (the line of zero deviation at the basic size), showing upper and lower deviations, maximum and minimum diameters, and a clearance fit]
5. Actual Size Actual size is the dimension as measured on a manufactured part. As already men-
tioned, the actual size will never be equal to the basic size and it is sufficient if it is within predetermined
limits.
6. Limits of Size These are the maximum and minimum permissible sizes of the part (extreme
permissible sizes of the feature of the part).
7. Maximum Limit The maximum limit or high limit is the maximum size permitted for the part.
8. Minimum Limit The minimum limit or low limit is the minimum size permitted for the part.
9. Zero Line In a graphical representation of limits and fits, the zero line is a straight line to which
the deviations are referred. It is the line of zero deviation and represents the basic size. When the zero line
is drawn horizontally, positive deviations are shown above it and negative deviations below it.
10. Deviation It is the algebraic difference between a size (actual, limit of size, etc.) and the cor-
responding basic size.
11. Upper Deviation It is designated as ES (for hole) and es (for shaft). It is the algebraic
difference between the maximum limit of the size and the corresponding basic size. When the maximum
limit of size is greater than the basic size, it is a positive quantity, and when the maximum limit of size
is less than the basic size, it is a negative quantity.
12. Lower Deviation It is designated as EI (for a hole) and ei (for a shaft). It is the algebraic
difference between the minimum limit of size and the corresponding basic size. When the minimum
limit of size is greater than the basic size, it is a positive quantity, and when the minimum limit of size
is less than the basic size, it is a negative quantity.
13. Fundamental Deviations (FD) This is the deviation, either upper or the lower deviation,
which is the nearest one to the zero line for either a hole or a shaft. It fixes the position of the tolerance
zone in relation to the zero line (refer Fig. 6.4).
14. Actual Deviation It is the algebraic difference between an actual size and the corresponding
basic size.
15. Mean Deviation It is the arithmetical mean between the upper and lower deviation.
[Fig. 6.4 Fundamental deviation: the tolerance zone positioned relative to the zero line, showing the upper and lower deviations and the maximum and minimum limits of size about the basic size]
16. Tolerance It is the difference between the upper limit and the lower limit of a dimension. It
is also the maximum permissible variation in a dimension.
17. Tolerance Zone It is a function of basic size. It is defined by its magnitude and by its posi-
tion in relation to the zero line. It is the zone bounded by the two limits of size of a part in the graphical
presentation of tolerance.
18. Tolerance Grade It is the degree of accuracy of manufacturing. It is designated by the letters
IT (for International Tolerance) followed by a number: IT01, IT0, IT1, and so on up to IT16, giving
18 grades in all. The larger the number, the larger the tolerance.
19. Tolerance Class This term is used for a combination of fundamental deviation and toler-
ance grade.
20. Allowance It is an intentional difference between the maximum material limits of mating parts.
For a shaft, the maximum material limit will be its high limit and for a hole, it will be its low limit.
21. Fits The relationship existing between two parts, shaft and hole, which are to be assembled,
with respect to the difference in their sizes is called fit.
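The deviation, tolerance and allowance terms above can be exercised numerically. The values below (hole 25 +0.021/0 mm, shaft 25 −0.020/−0.041 mm) are illustrative of a clearance fit, and the helper names are our own:

```python
def deviations(basic, upper_limit, lower_limit):
    """Upper and lower deviations (ES/es, EI/ei): algebraic differences
    between the limits of size and the basic size."""
    return upper_limit - basic, lower_limit - basic

def tolerance(upper_limit, lower_limit):
    """Difference between the upper and lower limits of a dimension."""
    return upper_limit - lower_limit

def allowance(hole_low_limit, shaft_high_limit):
    """Intentional difference between the maximum-material limits:
    low limit of the hole minus high limit of the shaft.
    Positive -> clearance, negative -> interference."""
    return hole_low_limit - shaft_high_limit

# Hole: 25.000-25.021 mm; shaft: 24.959-24.980 mm (illustrative values)
ES, EI = deviations(25.0, 25.021, 25.000)
es, ei = deviations(25.0, 24.980, 24.959)
print(round(ES * 1000), round(EI * 1000))   # hole deviations: +21, 0 (um)
print(round(es * 1000), round(ei * 1000))   # shaft deviations: -20, -41 (um)
print(round(allowance(25.000, 24.980), 3))  # minimum clearance: 0.02 mm
```

Since the allowance here is positive, the pair always assembles with clearance; an interference fit would give a negative value.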
In the earlier part of the nineteenth century, the majority of components were individually mated together,
their dimensions being adjusted (machined) until the required assembly fit was obtained. This trial-and-error
type of assembly method demands operator skill, so the quality and quantity of the output depend upon the
operator. In today's context of a mass-production environment, interchangeability
and continuous assembly of many complex components could not exist under such a system. Modern
production engineering is based on a system of limits, fits and tolerances.
6.5.1 Limits
In a mass-production environment and in case of outsourcing, different operators on different similar
machines and at different locations produce subassemblies. So according to K J Hume “It is never possible
to make anything exactly to a given size of dimensions”. And producing a perfect size is not only difficult, but is
also a costly affair. Hence, to make the production economical some permissible variation in dimension
has to be allowed to account for variability. Thus, the dimensions of manufactured parts are made to lie
between two extreme dimensional specifications, called the maximum and minimum limits. The maximum
limit is the largest size and the minimum limit is the smallest size permitted for that dimension.
6.5.2 Tolerance
The inevitable human failings and machine limitations prevent achieving ideal production conditions.
Hence, a purposely permitted variation in size or dimension, called tolerance (refer Fig. 6.6), is to be
considered when producing a part dimension. The difference between the upper and lower margins for
variation of workmanship is called the tolerance zone. To understand the tolerance zone, one must know
the term basic size. Basic size is the dimension worked out from purely design considerations. Thus,
generally, basic dimensions are first specified and then the value (of tolerance) is indicated, stating how
much variation in the basic size can be tolerated without affecting the functioning of the assembly.
Fig. 6.6 Tolerance
Tolerance can be specified on both the mating elements, i.e., on the shaft and/or on the hole. For
example, a shaft of 30-mm basic size with a tolerance value of 0.04 may be written as 30 ± 0.04.
Therefore, the maximum permissible size (upper limit) is 30.04 mm and the minimum permissible size
(lower limit) is 29.96 mm, and the value of the tolerance zone is (upper limit − lower limit) = 0.08 mm.
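The arithmetic of this example is easily mechanized; a minimal sketch (the helper name is our own):

```python
def limits(basic, tol_plus, tol_minus):
    """Upper limit, lower limit and total tolerance zone for a bilaterally
    toleranced dimension such as 30 +/- 0.04."""
    upper = basic + tol_plus
    lower = basic - tol_minus
    return upper, lower, upper - lower

u, l, zone = limits(30.0, 0.04, 0.04)
print(round(u, 2), round(l, 2), round(zone, 2))   # 30.04 29.96 0.08
```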
The practical meaning of the word tolerance is that the worker is not expected to produce a part
with exact specified size, but that a definite small size error (variation) is permitted. Thus, tolerance is
the amount by which the job is allowed to deviate from the dimensional accuracy without affecting
its functional aspect when assembled with its mating part and put into actual service. If high performance
is the criterion for designing the assembly, then functional requirements will be the dominating factor in
deciding the tolerance value. But why, in some cases, are close tolerances specified for a specific job?
This question may be answered by reasons such as an inexperienced designer, a craze for precision,
fear of interference, a change in company or vendor standards, or may be the practice of
Limits, Fits and Tolerances 135
1. Unilateral Tolerances System In this type of system, the part dimension is allowed to
vary on one side of the basic size, i.e., either below or above it (refer Fig. 6.8). This system is preferred
in an interchangeable manufacturing environment. This is because it is easy and simple to determine
deviations. This system helps standardize the GO gauge end. This type of tolerancing method is
helpful for the operator, as he has to machine the upper limit of the shaft and the lower limit of the
hole knowing fully well that still some margin is left for machining before the part is rejected.
Examples of unilateral systems
1) 30 (+0.02 / +0.01),  2) 30 (+0.02 / +0.00),  3) 30 (+0.00 / −0.01),  4) 30 (+0.00 / −0.02)
2. Bilateral Tolerances System In this system, the dimension of the part is allowed to vary
in both the directions of the basic size. So, limits of the tolerances lie on either side of the basic size.
Using this system, as tolerances are varied, the type of fit gets varied. When a machine is set for a basic
size of the part then for mass production, the part tolerances are specified by the bilateral system.
Examples of bilateral systems
1) 30 (+0.02 / −0.01),  2) 30 ± 0.02
136 Metrology and Measurement
[Figure: tolerance zones about the basic size for a hole and a shaft; the minimum hole size and the maximum shaft size correspond to the maximum material condition]
Figure 6.10 shows logical ways to meet the assembly tolerances. This diagram is called the 'logical
tree of tolerancing'. It shows the means available under deterministic coordination, under statistical
coordination, and when no coordination exists.
6.6 FITS
The variations in the dimensions of a shaft or a hole can be tolerated within desired limits to arrange for
any desired fit. A fit is the relationship between two mating parts, viz., shaft and hole. This relationship
is nothing but the algebraic difference between their sizes.
It can be defined as 'the relationship existing between two parts, shaft and hole, which are to be
assembled, with respect to the difference in their sizes before assembly'. It is also the
degree of tightness or looseness between the two mating parts. Depending on the mutual position of the
tolerance zones of the coupled parts, three types of fits can be distinguished:
[Fig. 6.10 Logical tree of tolerancing: deterministic coordination leads to 100% inspection; statistical coordination leads to statistical tolerancing; no coordination leads to worst-case tolerancing]
A. Clearance Fit It is a fit that always ensures a clearance between the hole and shaft in the coupling.
The lower limit size of the hole is greater than or at least equal to the upper limit size of the shaft.
B. Transition Fit It is a fit where (depending on the actual sizes of the hole and shaft) both
clearance and interference may occur in the coupling. The tolerance zones of the hole and shaft partly or
completely overlap.
C. Interference Fit It is a fit that always ensures some interference between the hole and shaft
in the coupling. The upper limit size of the hole is smaller than or at least equal to the lower limit size of
the shaft.
Properties and fields of use of preferred fits are described in the following overview. When
selecting a fit, it is often necessary to take into account not only constructional and technological but
also economic aspects. Selection of a suitable fit is important, particularly in view of those measuring
instruments, gauges and tools which are implemented in production. Therefore, while selecting a fit,
proven plant practices may be followed.
In a clearance fit, the parts can be easily slid one into the other and turned. The tolerance of the coupled
parts and the fit clearance increase with increasing class of the fit.
Minimum Clearance In case of a clearance fit, it is the difference between the minimum size of
the hole and the maximum size of the shaft.
Maximum Clearance In case of a clearance fit, it is the difference between the maximum size of
the hole and minimum size of the shaft.
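The three fit definitions, together with the clearance definitions just given, translate directly into a check that classifies a fit from the four limit sizes. A short sketch (the 30-mm limit values in the usage example are illustrative, not taken from a standard table):

```python
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Classify a fit from its limit sizes (all in mm), per the definitions above."""
    if hole_min >= shaft_max:      # hole lower limit >= shaft upper limit
        return "clearance"
    if shaft_min >= hole_max:      # shaft lower limit >= hole upper limit
        return "interference"
    return "transition"            # tolerance zones overlap

def clearances(hole_min, hole_max, shaft_min, shaft_max):
    """(minimum, maximum) clearance for a clearance fit."""
    return hole_min - shaft_max, hole_max - shaft_min

# Illustrative 30-mm assembly: hole 30.000-30.021, shaft 29.980-29.993
print(classify_fit(30.000, 30.021, 29.980, 29.993))   # prints "clearance"
print(clearances(30.000, 30.021, 29.980, 29.993))     # (min, max) clearance
```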
1. Slide Clearance Fits (RC) When the mating parts are required to move slowly but in regu-
lar fashion in relation to each other, e.g., in the sliding change gears in the quick change gear box of a
machine tool, tailstock movement of a lathe, and feed movement of the spindle in case of a drilling
machine, sliding fits are employed. In this type of fit, the clearances kept are very small and may reduce
to zero. But, for slow and non-linear type of motion, e.g., motion between lathe and dividing head or
the movement between piston and slide valves, an ‘easy slide fit ’ is used. In this type of clearance fit, a
small clearance is guaranteed.
2. Running Clearance Fits (RC) When just sufficient clearance for an intended purpose
(e.g., lubrication) is to be maintained between two mating parts running at low/moderate speeds,
e.g., gear-box bearings, shafts carrying pulleys, etc., a close running fit is employed.
Medium-running fits are used to compensate for mounting errors. For this type, a considerable
clearance is maintained, e.g., in the shaft of a centrifugal pump. In case of considerable
working-temperature variations and/or high-speed rotary assemblies, loose running fits are employed.
The following are grades of clearance fits recommended for specific requirements:
RC 1 Close sliding fits with negligible clearances for precise guiding of shafts with high require-
ments for fit accuracy. No noticeable clearance after assembly. This type is not designed for free
run.
RC 2 Sliding fits with small clearances for precise guiding of shafts with high requirements for fit
precision. This type is not designed for free run; in case of greater sizes, a seizure of the parts may oc-
cur even at low temperatures.
RC 3 Precision-running fits with small clearances with increased requirements for fit precision. De-
signed for precision machines running at low speeds and low bearing pressures. Not suitable where
noticeable temperature differences occur.
RC 4 Close running fits with smaller clearances with higher requirements for fit precision. Designed
for precise machines with moderate circumferential speeds and bearing pressures.
RC 5, RC 6 Medium-running fits with greater clearances with common requirements for fit preci-
sion. Designed for machines running at higher speeds and considerable bearing pressures.
RC 7 Free running fits without any special requirements for precise guiding of shafts. Suitable for
great temperature variations.
RC 8, RC 9 Loose running fits with great clearances and parts having great tolerances. Fits exposed
to effects of corrosion, contamination by dust and thermal or mechanical deformations.
Minimum Interference In case of an interference fit, it is the arithmetical difference between the maximum
size of the hole and the minimum size of the shaft before assembly.
Maximum Interference In case of an interference fit, it is the arithmetical difference between the
minimum size of the hole and the maximum size of the shaft before assembly.
Interference fits are rigid (fixed) fits based on the principle of constant elastic pre-stressing of
connected parts using interference in their contact area. Outer loading is transferred by friction between
the shaft and hole created in the fit during assembly. The friction is caused by inner normal forces
created as a result of elastic deformations of connected parts.
Interference fits are suitable for the transfer of both large torques and axial forces in rarely
disassembled couplings of a shaft and hub. These fits enable highly reliable transfer of even
high loads, including alternating loads or loads with impacts. They are typically used for fastening
geared wheels, pulleys, bearings, flywheels, turbine rotors and electric motors onto their shafts,
with gear rings pressed onto wheel bodies, and arms and journals pressed onto crankshafts.
Press on, in general, means inserting a shaft of larger diameter into a hub opening, which is smaller.
After the parts have been connected (pressed-on), the shaft diameter decreases and the hub opening
increases, in the process of which both parts settle on the common diameter. Pressure in the contact
area is then evenly distributed, shown in Fig. 6.12. The interference d, given by the difference between
the assembly-shaft diameter and hub-opening diameter, is a characteristic feature and a basic quantity
of interference fit. The value of contact pressure, as well as loading capacity and strength of the fit,
depends on the interference size.
Since it is practically impossible to manufacture the contact-area diameters of the connected parts
with absolute accuracy, the actual manufacturing (assembly) interference is a random value. Its size is
bounded by two tabular values of marginal interference, which are given by the selected fit (by the
allowed manufacturing tolerances of the connected parts). Interference fits are then designed and
checked on the basis of these marginal assembly interferences. There are two basic ways of carrying
out the assembly process in case of interference fits:
1. Longitudinal Press (Force Fit) [FN] Longitudinal pressing is the forcible pushing of the
shaft into the hub under pressure, or using mechanical or hydraulic jigs in case of smaller parts. When
using longitudinal pressure, the surface unevenness of the connected parts is partially stripped and smoothed.
This results in a reduction of the original assembly interference and thus of the assembly loading
capacity. The amount of smoothing of the surface during mounting depends on the treatment of the
edge surfaces of the connected parts, the press speed and, mainly, the roughness of the connected
parts. The press speed should not exceed 2 mm/s. To prevent seizing, steel parts are usually greased.
It is also necessary to grease contact areas in case of large couplings with large interference, where
extremely high press forces are required. Parts made from different materials may be dry-pressed. Greasing
the contact areas eases the press process; on the other hand, it leads to a decrease in the friction
coefficient and the coupling loading capacity. From the technological point of view, a longitudinal press is
relatively simple and undemanding; but it shows lower assembly loading capacity and reliability than
a transverse press.
FN 1 Light-drive fits with small interferences designed for thin sections, long fits or fits with cast-
iron external members
FN 2 Medium-drive fits with medium interferences designed for ordinary steel parts or fits with high-
grade cast-iron external members
FN 3 Heavy-drive fits with great interferences designed for heavier steel parts
FN 4, FN 5 Force fits with maximum interferences designed for highly loaded couplings
1. Push Fit (LT) Consider examples like change gears, slipping bushings, etc., whose sub-
components are disassembled during operation of the machines. These require a small clearance,
for which a push fit can be suitably employed.
2. Wringing Fit (LT) In case of reusable/repairable parts, the sub-parts must be replaced with-
out any difficulty. In these cases, assembly is done employing a wringing fit. The following are grades
of transition fits recommended for specific requirements:
LT 1, LT2 Tight fits with small clearances or negligible interferences (easy detachable fits of hubs
of gears, pulleys and bushings, retaining rings, bearing bushings, etc.). The part can be assembled or
disassembled manually.
LT 3, LT4 Similar fits with small clearances or interferences (demountable fits of hubs of gears and
pulleys, manual wheels, clutches, brake disks, etc.). The parts can be coupled or disassembled without
any great force by using a rubber mallet.
LT 5, LT6 Fixed fits with negligible clearances or small interferences (fixed plugs, driven bushings,
armatures of electric motors on shafts, gear rims, flushed bolts, etc.). It can be used for the assembly
of parts using low-pressing forces.
Although holes and shafts can generally be coupled using any combination of tolerance zones, only two
methods of coupling are recommended due to constructional, technological and economic reasons.
[Figure: in the hole-basis system, a constant hole tolerance zone is combined with varying shaft zones; in the shaft-basis system, a constant shaft zone is combined with varying hole zones]
Fig. 6.13(a) Hole-basis system
Fig. 6.13(b) Shaft-basis system
As discussed in the earlier article, in India we have the IS: 919 recommendation for limits and fits in
engineering. This standard is largely based on the British Standard BS: 1916-1953. The IS standard was
first published in 1963 and has been modified several times, the last modification being in 1990. In the Indian
Standard, the total range of sizes up to 3150 mm is covered in two parts: sizes up to 500 mm are
covered in IS: 919, and sizes above 500 mm, up to 3150 mm, are covered in IS: 2101. However, it is yet to
adopt several recommendations of ISO: 286. All these standards make use of two entities of the stan-
dard limits, fits and tolerances terminology system—standard tolerances and fundamental deviations.
Table 6.2 Manufacturing processes and the IT grades (IT2 to IT16) they can achieve. Listed from finest to coarsest attainable accuracy: lapping; honing; superfinishing; cylindrical grinding; diamond turning; plane grinding; broaching; reaming; boring and turning; sawing; milling; planing and shaping; extruding; cold rolling and drawing; drilling; die casting; forging; sand casting; hot rolling and flame cutting. (In the original chart, each process spans a band of achievable IT grades.)
[Figure: zero line at the basic size, with the tolerance bounded by the upper and lower deviations defining the maximum and minimum limits of size]
Fig. 6.14 Basic size and its deviation
[Figure: tolerance zones shown (a) above and (b) below the zero line, the fundamental deviation locating each tolerance zone relative to the zero line]
Fig. 6.15 Disposition of fundamental deviation and tolerance zone w.r.t the zero line
[Figure: fundamental deviation (in microns, roughly −300 to +350) plotted against the zero line along the basic size; holes A to ZC are shown above, and shafts a to zc below, the zero line]
Fig. 6.16 Position of fundamental deviations
and each has 18 grades of tolerances. The arrangement for the representation of the position of
fundamental deviations is shown in Fig. 6.16.
Hole ‘A’ and shaft ‘a’ have the largest fundamental deviations, hole being positive and shaft being
negative. The fundamental deviations for both hole ‘H’ and shaft ‘h’ are zero. For the shafts ‘a’ to ‘g’,
the upper deviation is below the zero line, and for the shafts ‘j’ to ‘zc’, it is above the zero line. For the
holes ‘A’ to ‘G’, the lower deviation is above the zero line, and for the holes ‘J’ to ‘Zc’; it is below the
zero line. The shaft for which upper deviation is zero is called basic shaft (i.e., ‘h’) and the hole for which
lower deviation is zero is called basic hole (i.e., ‘H’).
7 = the tolerance grade, i.e., IT7. By knowing this value, the limits for 55-mm size can be
found out.
2) Shaft = 60 m9 means
60 = the basic size of the shaft.
m = the position of the shaft w.r.t zero line. In this case, it is above the zero line.
9 = the tolerance grade, i.e., IT9. By knowing this value, the limits for 60-mm size can be
found out.
For deciding limits we have to find out the value of tolerance grades, first for hole and then of the shaft
(as the hole basis system has been followed) to suit the requirements for the type of fit to be employed in
the application under consideration. So, the calculation for tolerance grade is done as follows:
The fundamental tolerance unit is denoted by i (in microns). It is used to express the various IT grades
from IT5 to IT16, where the value of i in terms of the diameter D (in mm) is calculated as
i = 0.45 ∛D + 0.001 D
The diameter D (in mm) is the geometric mean of the limits of the diameter step (please refer Table 6.3).
Tolerances are the same for all diameter sizes which fall in a specific diameter step. These
steps are the recommendations of IS: 919.
The values of tolerances for tolerance grades IT5 to IT16 are given in Table 6.4.
For the values of tolerance grades IT01 to IT4, the formulae are
For IT01 = 0.3 + 0.008D
For IT0 = 0.5 + 0.012D
For IT1 = 0.8 + 0.02D
Table 6.3 Diameter steps, general cases (mm): 0–3, 3–6, 6–10, 10–18, 18–30, 30–50, 50–80, 80–120, 120–180, 180–250, 250–315, 315–400, 400–500

Table 6.4 Standard tolerances
Grade      IT5  IT6  IT7  IT8  IT9  IT10  IT11  IT12  IT13  IT14  IT15  IT16
Tolerance  7i   10i  16i  25i  40i   64i  100i  160i  250i  400i  640i  1000i
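The tolerance unit and the grade values follow directly from these relations. A minimal sketch (the 50–65 mm diameter step is the intermediate step used by the worked example later in this chapter; the multipliers are those of Table 6.4):

```python
def tolerance_unit(step_low, step_high):
    """Fundamental tolerance unit i in microns; D is the geometric mean of the step (mm)."""
    D = (step_low * step_high) ** 0.5
    return 0.45 * D ** (1 / 3) + 0.001 * D

# Multipliers for IT5..IT16 (tolerance = multiplier * i)
IT_MULTIPLIER = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64,
                 11: 100, 12: 160, 13: 250, 14: 400, 15: 640, 16: 1000}

i = tolerance_unit(50, 65)       # D ~ 57.0 mm, i ~ 1.789 microns
it7 = IT_MULTIPLIER[7] * i       # ~ 28.6 microns
it8 = IT_MULTIPLIER[8] * i       # ~ 44.7 microns
```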
The values of tolerance grades IT2 to IT4 are scaled approximately geometrically between the values
of IT1 and IT5. The seven finest grades (IT01, IT0 and IT1 to IT5) are provided only for sizes up to
500 mm, while the eleven coarser grades (IT6 to IT16) are also used for sizes above 500 mm and up to
3150 mm. The manufacturing processes that can produce accuracies expressed in terms of IT grades
have already been discussed (refer Table 6.2). The formulae for the fundamental deviations of shafts
for sizes up to 500 mm are given in Table 6.5.

Table 6.5 Fundamental deviations of shafts, in microns (for D in mm)
e  = −11 D^0.41
f  = −5.5 D^0.41
g  = −2.5 D^0.34
h  = 0
n  = +5 D^0.34
r  = geometric mean of the values for p and s
s  = +(IT8 + 1 to 4) for D ≤ 50;  = +IT7 + 0.4 D for D > 50
t  = +IT7 + 0.63 D
u  = +IT7 + D
v  = +IT7 + 1.25 D
x  = +IT7 + 1.6 D
y  = +IT7 + 2 D
z  = +IT7 + 2.5 D
za = +IT8 + 3.15 D
zb = +IT9 + 4 D
zc = +IT10 + 5 D
[Table: shaft tolerance zones. For each fundamental-deviation letter a, b, c, d, e, f, g, h, j, js, k, m, n, p, r, s, t, u, v, x, y, z, za, zb, zc, the original chart marks with '+' the tolerance grades (IT01 to IT16) in which that zone is provided.]
[Table: hole tolerance zones. For each fundamental-deviation letter A, B, C, D, E, F, G, H, J, JS, K, M, N, P, R, S, T, U, V, X, Y, Z, ZA, ZB, ZC, the original chart marks with '+' the tolerance grades (IT01 to IT16) in which that zone is provided.]
The deviations for the holes are derived from the corresponding values for the shafts. The limit-system
symbol used for holes is the same as for shaft limits, i.e., a grade and a letter (in capitals). Thus, the
deviation for a hole (ES) is equal in magnitude to the deviation for the shaft (es) of the same letter
symbol but of opposite sign.
Use Pivots, latches, fits of parts exposed to corrosive effects, contamination with dust and thermal
or mechanical deformations
H9/C9, H9/d10, H9/d9, H8/d9, H8/d8, D10/h9, D9/h9, D9/h8
Running fits with greater clearances without any special requirements for accuracy of guiding
shafts.
Use Multiple fits of shafts of production and piston machines, parts rotating very rarely or only
swinging
H9/e9, H8/e8, H7/e7, E9/h9, E8/h8, E8/h7
Running fits with greater clearances without any special requirements for fit accuracy.
Use Fits of long shafts, e.g., in agricultural machines, bearings of pumps, fans and piston machines
H9/f8, H8/f8, H8/f7, H7/f7, F8/h7, F8/h6
Running fits with smaller clearances with general requirements for fit accuracy.
Use Main fits of machine tools. General fits of shafts, regulator bearings, machine tool spindles,
sliding rods
H8/g7, H7/g6, G7/h6
Running fits with very small clearances for accurate guiding of shafts. Without any noticeable
clearance after assembly.
Use Parts of machine tools, sliding gears and clutch disks, crankshaft journals, pistons of hydraulic
machines, rods sliding in bearings, grinding machine spindles
H11/h11, H11/h9
Slipping fits of parts with great tolerances. The parts can easily be slid one into the other and
turned.
Use Easily demountable parts, distance rings, parts of machines fixed to shafts using pins, bolts, rivets
or welds
H8/h9, H8/h8, H8/h7, H7/h6
Sliding fits with very small clearances for precise guiding and centering of parts. Mounting by slid-
ing on without use of any great force; after lubrication the parts can be turned and slid by hand.
Use Precise guiding of machines and preparations, exchangeable wheels, roller guides.
Use Easily dismountable fits of hubs of gears, pulleys and bushings, retaining rings, frequently re-
moved bearing bushings
H8/k7, H7/k6, K8/h7, K7/h6
Similar fits with small clearances or small interferences. The parts can be assembled or disassembled
without great force using a rubber mallet.
Use Demountable fits of hubs of gears and pulleys, manual wheels, clutches, brake disks
H8/p7, H8/m7, H8/n7, H7/m6, H7/n6, M8/h6, N8/h7, N7/h6
Fixed fits with negligible clearances or small interferences. Mounting of fits using pressing and light
force.
Use Fixed plugs, driven bushings, armatures of electric motors on shafts, gear rims, flushed bolts
Fundamental deviations for the holes and shafts for diameters above 500 mm and up to 3150 mm
are given in Table 6.9.
Table 6.9 Fundamental deviations for shafts and holes (for D > 500 mm)
Hole Tolerance Zones The tolerance zone is the zone bounded by the upper and lower limit
dimensions of the part. As per the ISO system, though the general sets of basic deviations (A ... ZC)
and tolerance grades (IT1 ... IT18) can be combined to prescribe hole tolerance zones, in practice
only a limited range of tolerance zones is used. An overview of tolerance zones for general use can be
found in Table 6.10. The tolerance zones not included in this table are considered special zones and
their use is recommended only in technically well-grounded cases.
Shaft Tolerance Zones The tolerance zone is the zone bounded by the upper and lower limit
dimensions of the part. The tolerance zone is therefore determined by the amount of the tolerance
and its position relative to the basic size. As per the ISO system, though the general sets of basic
deviations (a ... zc) and tolerance grades (IT1 ... IT18) can be combined to prescribe shaft tolerance
zones, in practice only a limited range of tolerance zones is used. An overview of tolerance zones for
general use can be found in Table 6.11. The tolerance zones not included in this table are considered
special zones and their use is recommended only in technically well-grounded cases.
Prescribed hole tolerance zones for routine use (for basic sizes up to 3150 mm)
Note: Tolerance zones with thin print are specified only for basic sizes up to 500 mm.
Hint: For hole tolerances, tolerance zones H7, H8, H9 and H11 are preferably used.
Prescribed shaft-tolerance zones for routine use (for basic sizes up to 3150 mm)
Note: Tolerance zones with thin print are specified only for basic sizes up to 500 mm.
Hint: For shaft tolerances, tolerance zones h6, h7, h9 and h11 are preferably used.
[Figure: hole tolerance zone lying above the zero line, bounded by the lower limit (lower deviation zero) and the upper limit (upper deviation equal to the tolerance)]
Fig. 6.17 For hole 'H'
[Figure: shaft tolerance zone lying below the zero line, bounded by the upper limit (upper deviation) and the lower limit (lower deviation), the tolerance being their difference]
Fig. 6.18 For shaft 'f'
Value of the tolerance IT8 (from Table 6.4) = 25 (i) = 25 (1.789 microns) = 0.0447 mm
As the f-shaft lies below the zero line (refer Fig. 6.16), its fundamental deviation is the upper devia-
tion. Hence, the formula for the fundamental deviation from Table 6.5 is −5.5 D^0.41.
∴ es = −5.5 D^0.41 = −5.5 (57.008)^0.41 = −28.86 microns = −0.0288 mm
Upper limit of shaft = 55 mm + (−0.0288) = 54.9712 mm
And, lower limit of shaft = upper limit of shaft − value of the tolerance IT8
= 54.9712 − 0.0447 = 54.9265 mm
Hence, the shaft size varies between 54.9712 mm and 54.9265 mm.
5. To check the type of fit, we calculate
Maximum clearance = 55.028 mm − 54.9265 mm = 0.1015 mm [∴ clearance exists]
Minimum clearance = 55.000 mm − 54.9712 mm = 0.0288 mm [∴ clearance exists]
6. Therefore, we can conclude that the 55 H7/f8 assembly results in a clearance fit.
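The steps above can be reproduced in a few lines; a sketch using the same diameter step and formulas (small rounding differences from the hand calculation are expected):

```python
D = (50 * 65) ** 0.5                 # geometric mean of the 50-65 mm step, ~57.008
i = 0.45 * D ** (1 / 3) + 0.001 * D  # fundamental tolerance unit, microns

it7 = 16 * i / 1000                  # hole tolerance H7 (16i), mm (~0.0286)
it8 = 25 * i / 1000                  # shaft tolerance f8 (25i), mm (~0.0447)
es = -5.5 * D ** 0.41 / 1000         # upper deviation of the f-shaft, mm (~-0.0289)

basic = 55.0
hole_lo, hole_hi = basic, basic + it7   # H-hole: lower deviation is zero
shaft_hi = basic + es
shaft_lo = shaft_hi - it8

max_clearance = hole_hi - shaft_lo   # ~0.102 mm
min_clearance = hole_lo - shaft_hi   # ~0.029 mm; both positive -> clearance fit
```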
In most cases, it is necessary to specify the geometry features of the part/component, viz., straight-
ness, flatness, roundness, cylindricity, etc., along with the linear dimensions. The term 'geometrical
features' indicates that the geometrical tolerances (in the context of dimensional accuracy) of the
entities of a part are related to one another, and hence it is accepted that these should be speci-
fied separately. The importance of every aspect of the geometry of a part/component was discussed
thoroughly in the previous chapter. Tables 6.13, 6.14 and 6.15 illustrate the geometrical tolerance
symbols. Table 6.16 explains ways of representing geometrical (feature) tolerance symbols.
To understand the importance of specifications of the geometrical tolerance symbols in engineer-
ing drawing, consider Fig. 6.19. This figure shows the assembly of a shaft and hole. To get the proper
assembly fit, specifying only diameter values will not give the correct idea about the same. A little
consideration will show that apart from diameter values, some more information is required to be
specified, i.e., information about geometrical tolerances. In absence of this information, when the
mating parts are in maximum metal condition then the worse condition of the assembly of shaft and
hole occurs.
Table 6.12 Equivalent fits for the hole-basis and shaft-basis systems

Clearance                  Transition                 Interference
Hole Basis   Shaft Basis   Hole Basis   Shaft Basis   Hole Basis   Shaft Basis
H7-c8        C8-h8         H6-j5        J6-h5         H6-n5        N6-h5
H8-c9        C9-h8         H7-j6        J7-h6
H11-c11      C11-h11       H8-j7        J8-h7         H6-p5        P6-h5
                                                      H7-p6        P7-h6
H7-d8        D8-h7         H6-k5        K6-h5
H8-d9        D9-h8         H7-k6        K7-h6         H6-r5        R6-h5
H11-d11      D11-h11       H8-k7        K8-h7         H7-r6        R7-h6
H6-e7        E7-h6         H6-m5        M6-h5         H6-s5        S6-h5
H7-e8        E8-h8         H7-m6        M7-h6         H7-s6        S7-h6
H6-f6        F6-h6         H8-m7        M8-h7         H8-s7        S8-h7
H7-f7        F7-h7         H7-n6        N7-h6         H7-t6        T7-h6
H8-f8        F8-h8         H8-n7        N8-h7         H8-t7        T8-h7
H6-g5        G6-h5         H8-p7        P8-h7
H7-g6        G7-h6         H8-r7        R8-h7         H6-u5        U6-h5
H8-g7        G8-h7                                    H7-u6        U7-h6
                                                      H8-u7        U8-h7
[Fig. 6.19 Assembly of a shaft and hole, both marked φ 20.58 mm: equal diameter values alone do not guarantee a proper fit]
Broad classification (kind of feature): individual or related feature; a single surface or element feature whose perfect geometrical profile is described, which may or may not relate to a datum.
Profile of a line: condition permitting a uniform amount of profile variation, either unilaterally or bilaterally, along a line element of a feature.
Profile of a surface: condition permitting a uniform amount of profile variation, either unilaterally or bilaterally, on a surface.
Broad classification (kind of feature): related feature; a single feature or element feature which relates to a datum, or datums, in form and attitude (orientation).
Perpendicularity (squareness or normality): condition of a surface, axis, or line which is 90° from a datum plane or datum axis.
Angularity: condition of a surface, axis, or centre plane which is at a specified angle (other than 90°) from a datum plane or axis.
Parallelism: condition of a surface, line or axis which is equidistant at all points from a datum plane or axis.
Circular runout: composite control of circular elements of a surface independently at any circular measuring position as the part is rotated through 360°.
Total runout: simultaneous composite control of all elements of a surface at all circular and profile measuring positions as the part is rotated through 360°.
The main requirement of interchangeability in manufactured components (considering the cost of
manufacturing) is close adherence to the specified size (not necessarily the exact basic size) to fulfil
functional requirements. A permitted variation in size thus yields economy but, on the other hand,
demands that a system of control and inspection be employed. The problem of inspecting a specific
dimension of a component in this type of environment can be solved using limit gauges. Limit gauges
are used to ensure that the size of the component being inspected lies within specified limits; they are
not meant for measuring the exact size.
Feature Control Symbol The feature control symbol consists of a frame containing the geometric
characteristic symbol, datum references, tolerances, etc. For example, the frame 0.24 M | A B C is read
as: geometrical characteristic, tolerance of 0.24 applied at the maximum material condition, with datum
references A, B and C.
Taylor states that the ‘GO’ gauge should check all the possible elements of dimensions at a time
(roundness, size, location, etc.) and the ‘NO GO’ gauge should check only one element of the dimen-
sion at a time. And also, according to Taylor, ‘GO’ and ‘NO GO’ gauges should be designed to check
maximum and minimum material limits.
‘GO’ Limit This designation is applied to that limit of the two size limits which corresponds to
the maximum material limit, i.e., the upper limit of a shaft and the lower limit of a hole. The
form of the ‘GO’ gauge should be such that it can check all the related features of the component in one pass.
[Figure: part drawing annotated with datums -A-, -B-, -C-, dimensions (14.6, 10.62, 20.4, 5.75) and feature control frames such as 0.24 M | A B C and 0.02 | A]
Fig. 6.20(a) Example of representation of features of geometric tolerances in engineering
drawing of parts
[Figure: stepped shaft with diameters D1 to D6, lengths L1 to L4, and runout tolerances referenced to the common datum A–B]
Fig. 6.20(b) Examples of representation of features of geometric tolerances in engineering
drawing of parts
[Figure: tolerance zone between the maximum and minimum limits, with the GO gauge at one limit and the NO GO gauge at the other]
‘NO GO’ Limit This designation is applied to that limit between the two size limits which corre-
sponds to the minimum material condition, i.e., the lower limit of a shaft and the upper limit of a hole.
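The GO/NO GO limits follow mechanically from the maximum and minimum material conditions just described. A minimal sketch (ignoring the gauge-maker's tolerance and wear allowance, which a real gauge design adds; the 25 ± 0.015 mm hole is illustrative):

```python
def plug_gauge(hole_lo, hole_hi):
    """For a hole: GO at the low (maximum material) limit, NO GO at the high limit."""
    return {"GO": hole_lo, "NO GO": hole_hi}

def ring_gauge(shaft_lo, shaft_hi):
    """For a shaft: GO at the high (maximum material) limit, NO GO at the low limit."""
    return {"GO": shaft_hi, "NO GO": shaft_lo}

# Illustrative hole 25 +/- 0.015 mm
print(plug_gauge(24.985, 25.015))   # GO at 24.985, NO GO at 25.015
```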
[Figure: disposition of plug-gauge tolerance zones relative to the work tolerance. The GO (L) gauge is set at the lower limit of the hole, with a margin for wear (provided when the hole tolerance exceeds 0.0035 in) lying in the direction of gauge wear, and the NO GO (H) gauge is set at the higher limit of the hole; the corresponding zones are shown for the shaft tolerance]
Some of the most common types of fixed gauges are detailed below:
Gauges in general are classified as non-dimensional gauges and dimensional gauges.
Pipe and tube gauges have a fixed design to quickly access pipe, tube, or hose features, such as outer
diameter, inner diameter, taper, or tube bead. Radius, fillet, or ball gauges are used for comparatively
determining the diameter or radius of a fillet, radius, or ball. Screw and thread pitch gauges are serrated to
comparatively assess thread or screw pitch and type.
Taper gauges consist of a series of strips that reduce in width along the gauge length, and are used to
gauge the size of a hole or slot in a part. Thickness gauges consist of a series of gauge stock fashioned to a
precise thickness for gauging purposes. Taper and thickness gauges are often referred to as feeler gauges.
US standard gauges have a series of open, key-shaped holes and are used to gauge sheet or plate thick-
ness. Weld gauges are used for assessing weld fillet or bead size. Fixed wire gauges have a series of open
key-shaped holes and are used to gauge wire diameter size.
In addition to specific fixed-gauge types, there are two less-focused device groupings or materials that
may be used for this type of comparative gauging. Gauge stock is a material that is fashioned to a
precise thickness for gauging purposes; it is available in rolls or individual strips. Gauge sets and tool
kits consist of several gauges and accessories packaged together, often in a case with adjusting tools.
Tool kits sometimes contain alternate extensions, contact tips, holders, bases, and standards. Some of
these types of gauges are discussed as follows:

1. Plug Gauges Plug and pin gauges are used for GO/NO-GO assessment of hole and slot
dimensions or locations compared to specified tolerances. Dimensional standards are used for
comparative gauging as well as for checking, calibrating or setting of gauges or other standards.
Plug, pin, setting-disc, annular plug, hex and spherical plug individual gauges or gauge sets fit into
this category. Plug gauges are made to a variety of tolerance grades in metric and English dimensions
for master, setting or working applications. Plugs are available in progressive or stepped, double-ended
or wire, plain (smooth, unthreaded), threaded, cylindrical and tapered forms to go, no-go or nominal
tolerances.

Fig. 6.25 (a) Diagram of a single-ended plug gauge (marked 25.00 STD) (b) Diagram of a
double-ended plug gauge (GO 24.985, NOT GO 25.015) (c) Double-ended plug gauge

Fig. 6.26 Specifications of dimensions on a double-ended limit plug gauge (marked 25 H7):
a -- GO side, b -- NO-GO side, c -- red marking, d -- basic size, e -- tolerance

2. Ring Gauges Ring gauges are used for GO/NO-GO assessment against the specified
dimensional tolerances or attributes of pins, shafts, or threaded studs. Ring gauges are used for
comparative gauging as well as for checking, calibrating or setting of gauges or other standards.
Individual ring gauges or ring-gauge sets
168 Metrology and Measurement
Fig. 6.28 (a) How to use double-ended plug gauge (b) Plate plug gauge (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India.)
A GO gauge provides a precision tool for production of comparative gauging based on a fixed limit. NO-GO gauges consist of
a fixed-limit gauge with a gauging limit based on the minimum or maximum tolerances of the inspected
part. A NO-GO ring gauge's dimensions are based on the minimum OD tolerance of the round bar or
part being gauged. The NO-GO ring (OD) gauge should be specified to a plus gaugemakers' tolerance
from the minimum part tolerance. Master and setting ring gauges include gauge blocks, master or set-
ting discs. Setting rings are types of master gauges used to calibrate or set micrometers, comparators, or
other gauging systems. Working gauges are used in the shop for dimensional inspection and are periodi-
cally checked against a master gauge.
3. Snap Gauges Snap gauges are used in production settings where specific diametrical or
thickness measurements must be repeated frequently with precision and accuracy. Snap gauges are
mechanical gauges (Fig. 6.27) that use comparison or the physical movement and displacement of
a gauging element (e.g., spindle, slide, stem) to determine the dimensions of a part or feature. In
this case, snap gauges are similar to micrometers, calipers, indicators, plug gauges, and ring gauges.
Snap gauges are available in fixed and variable forms. The variable forms often have a movable, top-
sensitive contact attached to an indicator or comparator. The non-adjustable or fixed-limit forms
typically have a set of sequential gaps for GO/NO-GO gauging of product thickness or diameter.
Fixed-limit snap gauges [Fig. 6.30 (a), (b)] are factory set or otherwise not adjustable by the user.
A common example of this type of device is the AGD fixed-limit style snap gauge. These gauges are set
to GO and NO-GO tolerances. A snap gauge's GO contact dimensions are based on the maximum tol-
erance of the round bar, thickness or part feature being gauged. NO-GO contact dimensions are based
on the minimum tolerance of the round bar, thickness, or part feature being gauged by the snap gauge.
Variable, or top sensitive contact, snap gauges [Fig. 6.30 (c), (d)] use a variable contact point that
moves up during part gauging. The contact point moves providing a GO to NO GO gauging range.
The top contact is normally connected to a dial indicator that provides visual indication of any dia-
metrical or thickness variations.
There are a number of optional snap gauge features that can aid in gauging speed or extending the
measurement range of a particular snap gauge. These features include interchangeable anvils, locking,
and back or part support. Snap gauges with replaceable anvils, contact points, styli, spindles, or
other contacting tips or faces allow for gauging of many different items easily. Back or part support
involves a protrusion or stem located behind a part to hold or stop the part from moving past a cer-
tain point during gauging. Similarly, lockable devices have a slide or spindle on the gauge that can be
locked in a fixed position. Both of these features can be used to quickly foster GO/NO-GO gauging.
Figure 6.30 (e) shows the setting of a gap of the GO side of snap gauges using slip gauges.
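The fixed-limit GO/NO-GO decision described above can be sketched as a short routine. This is only an illustration: the function name and the 25-mm limits are ours, not from the text.

```python
def classify_shaft(diameter_mm, go_gap=25.015, no_go_gap=24.985):
    """GO/NO-GO snap-gauge logic for a shaft (illustrative limits).

    The GO gap is set at the maximum material limit, so an in-tolerance
    shaft must enter it; the NO-GO gap is set at the minimum material
    limit, so an in-tolerance shaft must NOT enter it.
    """
    enters_go = diameter_mm <= go_gap        # shaft passes through GO gap
    enters_no_go = diameter_mm <= no_go_gap  # shaft passes through NO-GO gap
    if not enters_go:
        return "reject: oversize"
    if enters_no_go:
        return "reject: undersize"
    return "accept"

print(classify_shaft(25.000))  # accept
print(classify_shaft(25.020))  # reject: oversize
print(classify_shaft(24.980))  # reject: undersize
```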
4. Air Gauges Air gauges use pneumatic pressure and flow to measure and sort dimensional
attributes. They provide a high degree of speed and accuracy in high-volume production environments.
[Fig. 6.30: snap gauges — (d) snap gauge with adjustable anvils and GO/NOT GO gaps; (e) setting the GO gap using slip gauges]
Air metrology instruments shown in Fig. 6.31 can provide comparative or quantitative measurements
such as thickness, depth, internal diameter (ID), outer diameter (OD), bore, taper and roundness. Air
gauges and gauging systems may also use an indicator or amplifiers such as air columns combined with
air probes or gauges.
There are several types of air gauges. Air plugs are production-quality, functional gauges for
evaluating hole and slot dimensions or locations against specified tolerances. Air rings are also
production-quality, functional gauges, but are used for evaluating specified tolerances of the
dimensions or attributes of pins, shafts, or threaded studs. Air-gauging systems or stations are
large, complex units available in bench-top or floor-mounted configurations. These systems often
include several custom gauges for specific applications, as well as fixtures or other components
for holding or manipulating parts during inspection. Air probes, or gauge heads, are also used
in conjunction with other gauges, and connect to remote displays, readouts, or analog amplifi-
ers. Test indicators and comparators are instruments for comparative measurements where the
linear movement of a precision spindle is amplified and displayed on a dial or digital display. Dial
displays use a pointer or needle mounted in a graduated disc dial with a reference point of zero.
Digital displays present metrology data numerically or alphanumerically, and are often used with
air gauges that have data-output capabilities. Remote gauges are used on electronic or optical
gauges, probes, or gauge heads that lack an integral gauge.
Air gauges use changes in pressure or flow rates to measure dimensions and determine attributes.
Backpressure systems use master restrictor jets, as well as additional adjustable bleeds or restrictions to
measure pressure changes and adjust for changes in air tooling.
Flow systems use tubes or meters to measure flow rates through air jets, orifices, or nozzles. Back-
pressure systems have high sensitivity and versatility, but a lower range than flow systems. Flow system
gauges require larger volumes of air and nozzles, and are useful where larger measurement ranges are
required. Differential, balanced air, single master, or zero setting air-gauge systems are back pressure
systems with a third zero-setting restrictor.
Some air gauges are handheld and portable. Others are designed for use on a bench top or table, or
mount on floors or machines. Operators who use bench top, table-based, and floor-mounted air gauges
load parts and measure dimensions manually. Automatic gauges (Fig. 6.32), such as the inline gauges
on production lines, perform both functions automatically. In semi-automatic systems, operators load
parts manually and gauges measure automatically. Typically, machine-mounted gauges include test indi-
cators, dial indicators, and/or micrometer heads.
(1) Thickness or wall-thickness measurement with Millipneu jet air probe
(2) Diameter measurement of cylindrical through bores with Millipneu jet air plug gauge
(3) Diameter measurement of cylindrical blind bores with Millipneu jet air plug gauge
(4) Diameter measurement of cylindrical through bores with Millipneu ball contact air plug gauge
(5) Diameter measurement of cylindrical blind bores with Millipneu lever contact plug gauge
(6) Diameter or thickness measurement with adjustable Millipneu jet air caliper gauge
(7) Diameter measurement of cylindrical shafts with Millipneu jet air ring gauge
(8) Straightness measurement of a cylindrical bore with Millipneu special jet air plug gauge
(9) Mating measurement between bore and shaft with Millipneu jet air plug gauge and jet air ring gauge
(10) Conicity measurement of an inner cone with Millipneu taper jet air plug gauge (measurement based on the differential measurement method)
(11) Measurement of perpendicularity of a cylindrical bore to the end face with Millipneu special jet air plug gauge (measurement based on the differential measurement method)
(12) Measurement of spacing between separate cylindrical bores with Millipneu jet air plug gauges (measurement based on the differential measurement method)
(13) Measurement of spacing between incomplete cylindrical bores with Millipneu jet air plug gauges (measurement based on the differential measurement method)
(14) Conicity measurement, form measurement and diameter measurement of an inner cone with Millipneu taper jet air plug gauge
(15) Multiple internal and external measurements with measuring jets and Millipneu contact gauges in conjunction with a Millipneu seven-column gauge
(Refer Fig. 6.33.)
Jet Air Plug Gauges: Millipneu Jet Air Plug Gauge Millipneu jet air plug gauges are used for
testing cylindrical through bores or blind bores. The plug gauge bodies are equipped with two oppos-
ing measuring jets, which record the measured value without contact. This arrangement allows the
diameter, the diametric roundness and the cylindricity of bores to be calculated using a single jet air
plug gauge. The diameter is measured immediately after the jet air plug gauge is introduced, while the
diametric roundness deviation can be tested by rotation around 180° and the cylindricity by movement
in a longitudinal direction. The measuring range of the jet air plug gauges is a maximal 76 μm (.003 in).
Jet air plug gauges are supplied as standard in hardened or chrome-plated versions and, if required, with
a shut-off valve in the handle.
[Fig. 6.33: Millipneu air-gauging applications (1) to (15), as listed above]
The long service life, particularly of the jet air gauges, which are matched
to Millipneu dial gauges, is due in part to the fact that the hardened measuring
jets are recessed relative to the generated surface of the measuring body and
are thus extensively protected against damage.
[Fig. 6.41: sensing probe of a gauge, with a — GO and b — NO-GO limits indicated] (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India)
Common shapes or geometries measured include cylindrical and tapered or pipe shapes. A GO gauge
provides a precision tool for production of comparative gauging based on a fixed limit. GO gauges
consist of a fixed-limit gauge with a gauging limit based on the plus or minus tolerances of the inspected
part. NO-GO or NOT-GO gauges provide a precision tool for production of comparative gauging
based on a fixed limit. NO-GO gauges consist of a fixed-limit gauge with a gauging limit based on the
minimum or maximum tolerances of the inspected part. GO/NO-GO gauges are configured with a GO
gauge pin on one end and a NO-GO gauge pin on the opposite end of the handle. GO/NO-GO gauges
provide a precision tool for production of comparative gauging based on fixed limits. GO/NO-GO
gauges are manufactured in the form of stepped pins with the GO gauge surface and the NO-GO gauge
surface on the same side of the handle. The gauge can save time in gauging since it does not
have to be reversed for NO-GO gauging. Master gauge blocks, master or setting discs, and setting rings
are types of master gauges used to calibrate or set micrometers, comparators, or other gauging systems.
Fixed limit or step gauges are specialized thread plug gauges for gauging taper pipe threads. Notches or
external steps indicate maximum and minimum allowable tolerances. Tolerance classes for thread gauges
include Class XX, Class X, Class Y, Class Z, Class ZZ and thread Class W.
Measurement units for thread gauges can be either English or metric. Some gauges are configured
to measure both. The display on the gauge can be non-graduated meaning that the gauge has no dis-
play, dial or analog, digital display, column or bar graph display, remote display, direct reading scale, or
vernier scale.
8. Splined Gauges These are made from blanks whose design varies according to the size range
to be accommodated. The splined gauges are available as plug gauges (as shown in Fig. 6.42) or ring gauges
as per the demand. The basic forms of splines are involute, serrated or straight-sided. Form selection
depends upon dimensions, the torque to be transmitted, manufacturing considerations and type of fit.
9. Radius Gauge These gauges are used to inspect inside and outside radii on a part profile.
With the help of radius gauges we can measure an unknown radius, but only for a limited set of values.
[Figure: radius gauge blades for inspecting internal and external radii]
Air or pneumatic gauges, Bore and ID gauges, Calipers, Digital or electronic gauges, Custom or
OEM gauges, Depth gauges, Masters, setting gauges and other dimensional standards (gauge blocks,
end measuring rods, gauging balls), gauge head or probes, gauge sets or measuring tool kits, gauging
systems or stations, GO-NO GO, attribute or functional gauges (plugs, rings, snaps, flush-pins), height
gauges, indicators and comparators, Laser micrometers, Mechanical micrometers, Micrometer heads,
Thickness gauges, Thread or serration gauges, Specialty and other gauges—designed specifically for
gear, spring, runout, impeller, form or other special functions. The specific gauge best suited for an
application will depend on the part geometry, production volume, gauging conditions (inline vs offline,
and environmental factors) and the dimensional tolerance requirements particular to the component
or design. Figures 6.45 (a) to (j) show some special-purpose dedicated (fixed and adjustable)
gauges and inspection templates.
i. The form of GO gauges should exactly coincide with the form of the mating part.
ii. GO gauges should enable simultaneous checking of several dimensions. It must always be put in
maximum impassability.
iii. NO GO gauges should enable checking of only one dimension at a time. It must always be put in
maximum passability.
Fig. 6.45 Figures (a), (b), (c) and (d) are the fixed gauges; (e) and (f) are special types of adjustable gauges; (g), (h), (i) and (j) are the dedicated inspection templates
2. Material Considerations for Gauges Gauges are inspection tools requiring a high degree
of wear resistance. Apart from this, a gauge is also required to ensure stability of its size and shape,
offer corrosion resistance, and have a low temperature coefficient. Therefore, gauges are made from
special types of alloys and by special processes. A few such materials are listed along with their special
properties in Table 6.17.
Table 6.17 Materials used for gauges with their special properties
1. Chromium plating — Increased wear resistance (restoring worn gauges to original size)
2. Flame-plated tungsten carbide — Increasing the size of the coating substantially increases wear life (where frequency of usage is comparatively high)
3. Tungsten carbide — Great stability and wear resistance; a controlled-temperature environment is required (used in case of extensive usage and against highly abrasive work surfaces)
3. Gauge Tolerance The expected function of the fixed gauges and the dimensions to be
measured are the variables that necessarily require a wide variety of gauge types. Gauges are used as
tools to inspect dimensions (they are not used to measure dimensions). Like any other part or
component, gauges must themselves be manufactured by some process, which requires a
manufacturing tolerance. After the maximum and minimum metal conditions of the job
dimension under inspection are known, a tolerance is allowed on the size of the gauge. This tolerance,
which anticipates the imperfection in the workmanship of the gauge-maker, is called gaugemaker's tol-
erance. Technically, the gauge tolerance should be as small as possible, but reducing it increases the manu-
facturing cost (refer Article 6.4.2). There is no universally accepted policy for the amount of gauge
tolerance to be considered while designing the size of the gauge. In industry, limit gauges are made
10 times more accurate than the tolerances to be controlled; in other words, limit gauges are usually
provided with a gauge tolerance of 1/10th of the work tolerance. Tolerances on inspection gauges are
generally 5% of the work tolerance, and that on a reference or master gauge is generally 10% of the
gauge tolerance.
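The percentage rules quoted above are simple to apply; the sketch below uses our own function name and merely restates them.

```python
def gauge_tolerance(work_tolerance_mm, fraction=0.10):
    # Limit gauges: 1/10th of the work tolerance (the usual industry rule).
    # Inspection gauges: pass fraction=0.05 for 5% of the work tolerance.
    return fraction * work_tolerance_mm

work_tol = 0.021  # e.g., the tolerance of a 25 H7 hole, mm
print(round(gauge_tolerance(work_tol), 4))        # 0.0021 (limit gauge)
print(round(gauge_tolerance(work_tol, 0.05), 5))  # 0.00105 (inspection gauge)
```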
After determining the magnitude of the gauge tolerance, the position of the gauge tolerance with
respect to the work limits must be decided so that the gauge does not accept defective work. There are
two systems of tolerance allocation, viz., unilateral and bilateral (refer Fig. 6.46). In the bilateral
system, the GO and NOT-GO tolerance zones are divided into two parts by the upper and lower limits of
the workpiece tolerance zone. The main disadvantage of this system is that parts which are not
within the tolerance zone can pass inspection and vice versa. In the unilateral system, the work
tolerance entirely includes the gauge-tolerance zone. It reduces the work tolerance by some magnitude
of the gauge tolerance. Therefore, this system ensures that the gauge will allow those components only
which are within the work tolerance zone.
4. Wear Allowance As soon as a gauge is put into service, its measuring surface rubs con-
stantly against the surface of the workpiece. This results in wearing of the measuring surfaces of
the gauge, and hence it loses its initial dimensions. Consider a GO gauge made exactly to the
maximum material size (condition) of the dimension to be gauged. The slightest wear of the gauging
member causes the gauge to pass parts which are not within its design tolerance zone. In
other words, the size of a GO plug gauge is reduced due to wear, and that of a snap or ring gauge
is increased.
For reasons of gauge economy, it is customary to provide a certain amount of wear allowance while
dimensioning the gauge, and this leads to a change in the design size of the gauge. Wear allowance must be
applied to a GO gauge but is not needed for NOT-GO gauges, as their wear develops in the direction of safety.
Wear allowance is usually taken as 10% of the gauge tolerance. It is applied in the direction opposite to wear,
i.e., in case of a plug gauge, the wear allowance is added, and in a ring or gap/snap gauge, it is subtracted.
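These two rules (wear allowance = 10% of the gauge tolerance; added for a plug gauge, subtracted for a ring or snap gauge) can be sketched as follows; the function is hypothetical.

```python
def go_gauge_basic_size(max_material_limit_mm, gauge_tol_mm, kind):
    """Apply the wear allowance to the GO member only.

    A plug gauge (checks a hole) wears smaller, so the allowance is
    ADDED to the maximum material limit; a ring or gap/snap gauge
    (checks a shaft) wears larger, so it is SUBTRACTED.  NO-GO members
    receive no wear allowance.
    """
    wear = 0.10 * gauge_tol_mm
    if kind == "plug":
        return max_material_limit_mm + wear
    if kind == "ring":
        return max_material_limit_mm - wear
    raise ValueError("kind must be 'plug' or 'ring'")

print(round(go_gauge_basic_size(25.000, 0.0021, "plug"), 5))  # 25.00021
print(round(go_gauge_basic_size(24.990, 0.0033, "ring"), 5))  # 24.98967
```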
[Fig. 6.46: unilateral and bilateral disposition of the GO and NOT-GO gauge-tolerance zones relative to the work tolerance]
1. No work should be produced by workshops or accepted by the inspection department which lies
outside the prescribed limits of size.
2. No work should be rejected which lies within the prescribed limits of size.
These two principles pertain to two different situations, and the common solution is to
employ two sets of gauges: one set used during manufacturing (known as workshop gauges) and
the other (inspection gauges) used for final inspection of parts. Tolerances on workshop gauges
are arranged to fall inside the work tolerances, while tolerances on inspection gauges are arranged to fall
outside the work tolerances. To satisfy the first principle, general gauges are recommended. In this
type of gauge, the tolerance zone for a GO gauge is placed inside the work tolerance and the tolerance
zone for a NOT-GO gauge is placed outside the work tolerance (refer Fig. 6.48).
Fig. 6.48 Tolerance zone for gauges [diagram: design size and object size, with the gaugemaker's tolerance distributed about the master gauge]
In case of a master gauge (setting gauge for comparator instruments), the gaugemaker’s tolerances
are distributed bilaterally. It is done by using two parameters, the first is the size of the object and the
other is the median size of the permissible object size limits.
6. Gauging Force It is the amount of force applied to insert the gauge into the part geom-
etry during inspection of the part using the gauge. Many parameters are involved in this process, viz.,
the material of the part, the elasticity of the material, the gauging dimensions and conditions, etc.
Therefore, it is very difficult to standardize the gauging force. In practice, if a GO gauge fails to
assemble with the part, the part is quite definitely outside the maximum metal limit. Similarly, if a
NO-GO gauge assembles freely under its own weight, the part under inspection is obviously rejected.
Chamfering is provided on GO gauges to avoid jamming.
[Figure: disposition of gauge tolerance for plug gauges (hole gauging) and ring/gap gauges (shaft gauging) — GO gauges at the maximum metal limits of hole and shaft, NOT-GO gauges at the minimum metal limits, with the direction of wear of the GO gauges indicated; HL = Higher limit, LL = Lower limit]
task at hand but not all of which will be efficient, practical or cost-effective. The first step in finding the
best tool for the job is to take a hard look at the application. Answers to the following questions will
help the user zero in on the gauging requirements.
• What is the nature of the feature to be inspected? Are you measuring a dimension or a location?
Is the measurement a length, a height, a depth or an inside or outside diameter?
• How much accuracy is required? There should be a reasonable relationship between the specified
tolerance and the gauge’s ability to resolve and repeat. Naturally, the gauge must be more precise
than the manufacturing tolerance, but a gauge can be too accurate for an application.
• What’s in the budget for gauge acquisition? Inspection costs increase sharply as gauge accuracy
improves. Don’t buy more than you need.
• What’s in the budget for maintenance? Is the gauge designed to be repairable or will you toss it
aside when it loses accuracy? How often is maintenance required? Will maintenance be performed
in-house or by an outside vendor? Remember to figure in the costs of mastering and calibrating.
• How much time is available, per part, for inspection? Fixed, purpose-built gauging may seem less
economical than a more flexible, multipurpose instrument, but if it saves a thousand hours of
labour over the course of a production run, it may pay for itself many times over.
• How foolproof must the gauge be, and how much training is required? Fixed gauging is less prone
to error than adjustable gauging. Digital display is not necessarily easier to read than analog. Can
you depend on your inspectors to read the gauge results accurately at the desired rate of through-
put? If not then some level of automation may be useful.
• Is the work piece dirty or clean? Some gauges can generate accurate results even on dirty parts,
others can’t.
• Is the inspection environment dirty or clean, stable or unstable? Will the gauge be subject to
vibration, dust, changes in temperature, etc.? Some gauges handle these annoyances better
than others.
• How is the part produced? Every machine tool imposes certain geometric and surface-finish irreg-
ularities on workpieces. Do you need to measure them, or at least take them into consideration
when performing a measurement?
• Are you going to bring the gauge to the part or vice-versa? This is partly a function of part size
and partly of processing requirements. Do you need to measure the part while it is still chucked in
a machine tool, or will you measure it only after it is finished?
• What is the part made of? Is it compressible? Easily scratched? Many standard gauges can be
modified to avoid such influences.
• What happens to the part after it is inspected? Are bad parts discarded or reworked? Is there a
sorting requirement by size? This may affect the design of the inspection station as well as many
related logistics.
Illustrative Examples
Example 1 Design a plug gauge for checking the hole of 70H8. Use i = 0.45 ∛D + 0.001D, IT8 = 25i,
Diameter step = 50 to 80 mm.
Example 2 Design and make a drawing of a general-purpose 'GO' and 'NO-GO' plug gauge for inspecting
a hole of 22 D8. Data with usual notations:
i. i (in microns) = 0.45 ∛D + 0.001D
ii. Fundamental deviation for hole D = 16 D^0.44 (microns)
iii. Value for IT8 = 25i
Solution:
(a) Firstly, find out the dimensions of the hole specified, i.e., 22 D8.
For a diameter of 22 mm, the step size (refer Table 6.3) = (18 − 30) mm
∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm
∴ i = 0.45 ∛23.2379 + 0.001 (23.2379) = 1.3074 microns
Tolerance value for IT8 = 25i …(refer Table 6.4)
= 25 (1.3074) = 32.685 microns = 0.03268 mm
(b) Now, Fundamental Deviation (FD) for hole D = 16 D^0.44
= 16 (23.2379)^0.44
= 63.86 microns = 0.06386 mm
Lower limit of the hole = basic size + FD
= (22.00 + 0.06386) mm
= 22.06386 mm
And upper limit of the hole = Lower limit + Tolerance
= (22.06386 + 0.03268) mm
= 22.0965 mm
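The arithmetic of steps (a) and (b) can be verified with a few lines; the variable names are ours.

```python
# Recompute the 22 D8 hole limits of Example 2.
# i and the fundamental deviation are in microns; limits are in mm.
D = (18 * 30) ** 0.5                 # geometric mean of the 18-30 mm step
i = 0.45 * D ** (1 / 3) + 0.001 * D  # standard tolerance unit
IT8 = 25 * i                         # tolerance for grade 8
FD = 16 * D ** 0.44                  # fundamental deviation for a D hole

lower = 22.000 + FD / 1000
upper = lower + IT8 / 1000
print(round(D, 4), round(i, 4))          # 23.2379 1.3074
print(round(lower, 5), round(upper, 4))  # 22.06386 22.0965
```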
Fig. 6.50 [diagram: basic size, fundamental deviation and tolerance defining the lower and upper limits of the hole]
(c) Now consider gaugemaker’s tolerance (refer Article 6.9.4 (c)) = 10% of work tolerance.
= 0.03268(0.1) mm
= 0.00327 mm
(d ) wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker’s tolerance
= 0.00327 (0.1) mm = 0.000327 mm
(e ) For designing general-purpose gauge
∴ Size of GO plug gauge after considering wear allowance = (22.06386 + 0.000327) mm
= 22.0641 mm
∴ GO size is 22.0641 (+0.00327 / −0.00) mm and NO-GO size is 22.0965 (+0.00327 / −0.00) mm.
Refer Fig. 6.49.
[Figure: disposition of gauge limits for Example 2 — NO-GO gauge: 22.0965 to 22.0997 mm; GO gauge: 22.0641 to 22.06737 mm; work tolerance = 0.0326 mm; wear allowance shown on the GO side]
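Steps (c) to (e) can be combined in the same way; this is a sketch with our own variable names, and the small differences from the figure values are rounding effects.

```python
# General-purpose GO/NO-GO plug gauge for the 22 D8 hole of Example 2.
hole_lo = 22.06386            # lower limit of the hole, mm
work_tol = 0.03268            # hole tolerance, mm
hole_hi = hole_lo + work_tol

gauge_tol = 0.10 * work_tol   # gaugemaker's tolerance
wear = 0.10 * gauge_tol       # wear allowance (GO side only)

go_lo = hole_lo + wear        # wear allowance is ADDED for a plug gauge
go_hi = go_lo + gauge_tol
no_go_lo = hole_hi            # the NO-GO side carries no wear allowance
no_go_hi = no_go_lo + gauge_tol

print(f"GO:    {go_lo:.4f} to {go_hi:.4f} mm")
print(f"NO-GO: {no_go_lo:.4f} to {no_go_hi:.4f} mm")
```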
Example 3 Design a 'workshop' type GO and NO-GO gauge suitable for 25 H7. Data with usual
notations:
1. i (in microns) = 0.45 ∛D + 0.001D
2. The value for IT7 = 16i.
Solution:
(a ) Firstly, find out the dimension of hole specified, i.e., 25 H7.
For a diameter of 25 mm, the step size (refer Table 6.3) = (18 − 30) mm
∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm
∴ i = 0.45 ∛23.2379 + 0.001 (23.2379) = 1.3074 microns
Tolerance value for IT7 = 16i …(refer Table 6.4)
= 16 (1.3074) = 20.85 microns ≅ 21 microns
= 0.021 mm
(b) Limits for 25 H7 = 25.00 (+0.021 / −0.00) mm
∴ Tolerance on hole = 0.021 mm
(c) Now consider gaugemaker’s tolerance [refer Article 6.9.4 (c)] = 10% of work tolerance
∴ tolerance on GO Gauge = 0.0021 mm, similarly, NO-GO is also = 0.0021 mm.
(d) As tolerance on the hole is less than 0.1 mm, therefore no wear allowance will be provided.
(e) For designing workshop-type gauge
Refer Fig. 6.52.
[Fig. 6.52: workshop-type gauge limits for 25 H7 — NO-GO at 25.021 mm, GO at 25.000 mm; work tolerance (+ve) = 0.021 mm]
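The workshop-gauge limits of this example can be checked numerically (variable names are ours):

```python
# Workshop-type GO/NO-GO plug gauge for the 25 H7 hole of Example 3.
# Workshop gauge-tolerance zones lie INSIDE the work tolerance zone,
# and no wear allowance is applied here (hole tolerance < 0.1 mm).
basic, work_tol = 25.000, 0.021   # mm
gauge_tol = 0.10 * work_tol       # 0.0021 mm

go_limits = (basic, basic + gauge_tol)
no_go_limits = (basic + work_tol - gauge_tol, basic + work_tol)
print("GO:   ", [round(x, 4) for x in go_limits])     # [25.0, 25.0021]
print("NO-GO:", [round(x, 4) for x in no_go_limits])  # [25.0189, 25.021]
```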
Example 4 Design ‘workshop’, ‘inspection’, and ‘general type’ GO and NO-GO gauges for checking the
assembly φ25H7/f8 and comment on the type of fit. Data with usual notations:
∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm
Limits for shaft 25 f8 = 25.00 (−0.010 / −0.043) mm.
(d) Now consider gaugemaker’s tolerance for hole gauging [refer Article 6.9.4 (c)] = 10% of work
tolerance.
∴ tolerance on GO Gauge = 0.0021 mm.
(e) Wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker’s tolerance
∴ wear allowance = 0.1(0.0021) = 0.00021 mm
(f ) Now consider gaugemaker’s tolerance for shaft gauging [refer Article 6.9.4 (c)] = 10% of work
tolerance.
∴ tolerance on GO Gauge = 0.0033 mm
(g) Wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker’s tolerance
∴ wear allowance = 0.1 (0.0033) = 0.00033 mm
(h) Now the gauge limits can be calculated by referring Fig. 6.49 and the values are tabulated as
follows:
Table 6.18
Types of Gauges: Plug Gauge (for hole gauging) — GO gauge, NO-GO gauge; Ring Gauge (for shaft gauging) — GO gauge, NO-GO gauge
Workshop:
GO plug gauge = 25.00 (+0.00231 / +0.00021) mm
NO-GO plug gauge = 25.00 (+0.0210 / +0.0189) mm
GO ring gauge = 25.00 (−0.01033 / −0.01363) mm
NO-GO ring gauge = 25.00 (−0.0397 / −0.0430) mm
[Figure: disposition of workshop, inspection and general gauge-tolerance zones for the φ25 H7/f8 assembly — plug gauges: GO at the maximum metal limit of the hole (LL of hole = 25.000), NOT-GO at the minimum metal limit (HL of hole = 25.021); ring/gap gauges: GO at the maximum metal limit of the shaft (HL of shaft = 24.990), NOT-GO at the minimum metal limit (LL of shaft = 24.957); direction of wear of the GO gauges indicated. HL = Higher limit, LL = Lower limit]
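The workshop GO-gauge entries of Table 6.18 can be cross-checked numerically; this sketch uses our own variable names and the hole and shaft limits derived above.

```python
# Cross-check the workshop GO gauges of Table 6.18 (all values in mm).
hole_lo = 25.000                # LL of hole (25 H7), maximum metal limit
shaft_hi = 24.990               # HL of shaft (25 f8), maximum metal limit
tol_hole, tol_shaft = 0.0021, 0.0033      # 10% of each work tolerance
wear_hole, wear_shaft = 0.00021, 0.00033  # 10% of each gauge tolerance

go_plug = (hole_lo + wear_hole, hole_lo + wear_hole + tol_hole)
go_ring = (shaft_hi - wear_shaft - tol_shaft, shaft_hi - wear_shaft)
print([round(x, 5) for x in go_plug])  # [25.00021, 25.00231]
print([round(x, 5) for x in go_ring])  # [24.98637, 24.98967]
```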
Review Questions
a. i = 0.45 ∛D + 0.001D
b. The Fundamental Deviation for shaft 'f ' = −5.5 D^0.41
c. The value for standard tolerance grade IT8 = 25i
d. The diameter steps available are 18–30, 30–50, 50–80
21. Design GO and NO-GO limit plug gauges for checking a hole having a size of 40 (+0.04 / 0.00) mm. Assume
the gaugemaker's tolerance to be equal to 10% of work tolerance and wear allowance equal to 10%
of gaugemaker's tolerance.
22. A shaft of 35±0.004 mm is to be checked by means of GO and NOGO gauges. Design the dimen-
sions of the gauge required.
23. A 25-mm H8F7 fit is to be checked. The limits of size for the H8 hole are high limit = 25.03 mm
and low limit equal to basic size. The limits of the size for an F7 shaft are high limit = 24.97 mm
and low limit = 24.95 mm. Taking gaugemaker’s tolerance equal to 10% of the work tolerance,
design a plug gauge and gap gauge to check the fit.
24. Design a plug and ring gauge to control the production of a 90-mm shaft and hole part of H8e9.
Data given:
a. i = 0.45 ∛D + 0.001D
b. The upper deviation for 'e' shaft = −11 D^0.41
c. The value for standard tolerance grade IT8 = 25i and IT9 = 40i
d. 90 mm lies in the diameter step of 80 mm and 100 mm
25. Explain in brief what is meant by the term tolerance zone as used in positional or geometrical tol-
erancing. How are they specified on a drawing?
26. Describe some precautions to be taken in prescribing the accuracy of a limit gauge.
27. What is a gauge? Provide suitable definition and explain how a workshop gauge differs from an
inspection gauge.
28. Design a suitable limit gauge conforming to Taylor's principle for checking a 60H7 square hole that
is 25 mm wide. How many gauges are required to check this work? Sketch these gauges and justify
your comments.
29. A 70-mm m6 shaft is to be checked by GO/NO-GO snap gauges. Assume 5% wear allowance and
10% gaugemaker's tolerance (% of the tolerance of the shaft). The fundamental deviation for an m fit
is (IT7 − IT6), where the multiplier for grade IT7 is 16 and for IT6 is 10. Sketch the workshop, inspection
and general gauges.
7 Angular Metrology
7.1 INTRODUCTION
The concept of an angle is one of the most important concepts in geometry. The concepts of equality,
and sums and differences of angles are important and are used throughout geometry; but the subject
of trigonometry is based on the measurement of angles.
Angular Metrology 197
There are two commonly used units of measurement for angles. The more familiar unit of measure-
ment is the degree. A circle is divided into 360 equal degrees, and a right angle has 90 degrees in it. For
the time being, we’ll only consider angles between 0° and 360°.
Degrees may be further divided into minutes and seconds, but that division is not as universal as
it used to be. Parts of a degree are now frequently written decimally. For instance, seven and a half
degrees is now usually written as 7.5°. Each degree is divided into 60 equal parts called minutes. So seven
and a half degrees can be called 7 degrees and 30 minutes, written as 7° 30'. Each minute is further
divided into 60 equal parts called seconds, and, for example, 2 degrees 5 minutes 30 seconds is written as
2° 5' 30''. The division of degrees into minutes and seconds of an angle is analogous to the division of
hours into minutes and seconds of time.
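The degree–minute–second notation described above converts readily to and from decimal degrees. A minimal Python sketch (the function names are my own):

```python
def dms_to_degrees(d, m=0, s=0):
    """Convert degrees, minutes, seconds to decimal degrees."""
    return d + m / 60 + s / 3600

def degrees_to_dms(deg):
    """Convert decimal degrees to a (degrees, minutes, seconds) tuple."""
    d = int(deg)
    rem = (deg - d) * 60        # fractional degrees expressed in minutes
    m = int(rem)
    s = (rem - m) * 60          # remaining fraction expressed in seconds
    return d, m, round(s, 6)

print(dms_to_degrees(7, 30))    # 7.5, i.e., 7 degrees 30 minutes
print(degrees_to_dms(7.5))      # (7, 30, 0.0)
```

This mirrors the examples in the text: 7° 30' is 7.5°, and 2° 5' 30'' works out to about 2.0917°.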
Usually, when a single angle is drawn on an xy-plane for analysis, we draw it with the vertex at the
origin (0, 0), one side of the angle along the x-axis, and the other side above the x-axis.
The other common measurement unit for angles is the radian. For this measurement, consider the unit circle
(a circle of radius 1 unit) whose centre is the vertex of the angle in question. Then the angle cuts off
an arc of the circle, and the length of that arc is the radian measure of the angle. It is easy to convert
a degree measurement to radian measurement and vice versa. The circumference of the entire circle is
2π (π is about 3.14159), so it follows that 360° equals 2π radians. Hence, 1° equals π/180 radians and
1 radian equals 180/π degrees.
An alternate definition of radian is sometimes given as a ratio. Instead of taking the unit circle
with centre at the vertex of the angle, take any circle with its centre at the vertex of the angle. Then
the radian measure of the angle is the ratio of the length of the subtended arc to the radius of the
circle. For instance, if the length of the arc is 3 and the radius of the circle is 2 then the radian mea-
sure is 1.5.
The reason that this definition works is that the length of the subtended arc is proportional to the
radius of the circle. In particular, the definition in terms of a ratio gives the same figure as that given
above using the unit circle. This alternate definition is more useful, however, since you can use it to
relate lengths of arcs to angles. The formula for this relation is
Radian measure times radius = arc length
For instance, an arc of 0.3 radians in a circle of radius 4 has length 0.3 times 4, that is, 1.2.
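The degree–radian conversion and the arc-length relation above can be sketched as follows (a minimal Python illustration; the function names are my own):

```python
import math

def deg_to_rad(deg):
    """1 degree equals pi/180 radians."""
    return deg * math.pi / 180

def arc_length(radius, angle_rad):
    """Radian measure times radius = arc length."""
    return angle_rad * radius

# 360 degrees should come out as 2*pi radians
print(deg_to_rad(360) / (2 * math.pi))   # close to 1.0
# The example from the text: 0.3 radians on a circle of radius 4
print(arc_length(4, 0.3))                # close to 1.2
```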
Table 7.1 shows common angles in both degree measurement and radian measurement. Note that
the radian measurement is given in terms of π. It could, of course, be given decimally, but radian mea-
surement often appears with a factor of π.
Table 7.1 Common angles in degree and radian measure
Degrees    Radians
90°        π/2
60°        π/3
45°        π/4
30°        π/6
Protractors and angle gauges measure the angle between two surfaces of a part or assembly. Fixed angle
gauges, universal protractors, combination sets, protractor heads, sine bars, and T bevels are used for
angular measurement. Protractors and angle gauges fall under the category of measuring tools. Mea-
suring tools are instruments and fixed gauges that provide comparative and quantitative measurements
of a product or component’s dimensional, form and orientation attributes such as length, thickness,
level, plumbness and squareness. Measuring or shop tools include rules, linear scales, protractors and
angle gauges, level sensors and inclinometers, and squares and fixed gauges. Measuring tools are used
in construction and building (contractors), drafting and drawing (designers), machine shops and tool
rooms (machinists), field work (surveyors) and offices.
The types of protractors and angle gauges available include angle square, rule depth or angle gauge,
combination set or square, fixed angle gauge, protractor head, rectangular or semicircular head protrac-
tor, sine bar or block or plate, universal or sliding bevel, and universal or bevel protractor. An angle
square consists of a square with angular graduations along the longest face or hypotenuse. A rule depth
or angle gauge is a combination rule with an attachment for indicating the depth orientation of the hole
with respect to the top surface. Combination squares measure length; determine centre, angle or squareness;
and have transfer or marking capability. These multiple tasks are possible because these
sets have a series of optional heads (square, centre or protractor). Fixed angle gauges have a series of
fixed angles for comparative assessment of the angle between two surfaces. Protractor heads are an
attachment or an optional part of a combination square set. The protractor head slides onto the steel
rule and provides a tool for angular measurement or transfer. Rectangular or semicircular head protrac-
tors have long, thin, unruled blades and heads with direct reading angular graduations. Sine bars, blocks,
tables or plates are used for precision angular measurement and are used in machine shops, tool rooms
or inspection labs. Trigonometric calculations are used to determine the angles. Universal bevels, slid-
ing bevels, combination bevels, or T-bevels are used to transfer or duplicate angle measurements. Usu-
ally, bevels do not have any graduations. Universal or bevel protractors have a base arm and a blade
with a wide angular range. Bevel protractors have a graduated, angular direct reading or vernier scale
located on a large disc. Protractors and angle gauges can also be level-sensing devices or inclinometers:
mechanical or electronic tools that indicate or measure the inclination of a surface relative to the earth’s
surface, usually in reference to the horizontal (level), vertical (plumb) or both axes.
These include graduated or non-graduated audible indicators or buzzers, columns or bar graphs,
dials, digital displays, direct reading scales, remote displays, and vernier scales. Features of protrac-
tors and angle gauges include machine or instrument mounting, a certificate of
calibration, a locking feature, marking capability, and a linear rule. Common materials of construction
for protractors and angle gauges include aluminium,
brass or bronze, cast metal or iron, plastic, fiberglass,
glass, granite, stainless steel, steel and wood.
A very wide variety of devices and sizes have been
developed to handle almost any situation, including
optical and the newer laser types. Some may have
measuring graduations, movable blades, and
accessories such as scribers, bevel and centre finders.
Sometimes selecting the right one is a puzzlement.
Some of them are discussed as follows:
1. Arm Protractor Almost any type of angle can be handled.
Fig. 7.1 Arm protractor
2. Squares Since the most common angles are right or perpendicular angles, squares are the most
common devices for drawing them. These range from the small machinist to large framing or rafter
types. Among the most useful for model making is the machinist square with blades starting at 2" and
up. These are precision ground on all surfaces, any of which can be used. The inside handle corner is
relieved and the outside blade corner is notched for clearance. Although they are designed for align-
ment of machine tools and work, they fit nicely inside rolling stock and structures for squaring corners.
Do not overlook the use of bar stock, of shape similar to the handle, for tighter fits.
(Figure: bevel protractor, showing blade, acute-angle attachment, turret, scale, body, vernier scale, stock and eyepiece)
An alternative to this is the optical bevel protractor (as shown in Fig. 7.6), which can read angles to an
accuracy of 2 minutes of an arc. It consists of a glass circle graduated in divisions of 10 minutes of an arc. The
blade clamps the inner rotating member, which carries a small microscope (eyepiece) through which
circular graduations can be viewed against the main scale. Figure 7.7 shows an advanced form of the
bevel protractor, which gives a digital display of the angle.
7. Combination Set Small movable combination ‘squares’ are useful for less critical applica-
tions, where others will not fit. This combination set has a graduated blade, square, 45 degree, centre
finder, scriber, and bubble level.
Fig. 7.10 (a) and (b) Pictorial views of combination set (square head and steel rule labeled)
8. Angle Gauges A series of fixed angles are used for comparative assessment of the angle
between two surfaces. Important specifications to consider when searching for protractors and angle
gauges include angular range and angular resolution. There are many choices for scales or displays on
protractors and angle gauges.
Fig. 7.11 (a) and (b) Use of centre head and square head of combination set respectively
Dr Tomlinson developed angle gauges in 1941. By making different permutations and combinations
of the gauge settings, an angle can be set to the nearest 3". The dimensions of angle gauges are 75 mm
length and 16 mm width. Common materials of construction for angle gauges include aluminium, brass
or bronze, cast metal or iron, plastic, fiberglass, glass, granite, stainless steel, steel, and wood. These are
hardened and stabilized. The measuring faces are lapped and polished to a high degree of accuracy and
flatness. Angle gauges are available in two sets (one set is shown in Fig. 7.12). One set consists of 12
pieces along with a square block. Their values are
1°, 3°, 9°, 27° and 41°,
1', 3', 9' and 27', and
6", 18" and 30".
The other set contains 13 pieces with values of
1°, 3°, 9°, 27° and 41°,
1', 3', 9' and 27', and
3", 6", 18" and 30".
The angle can be built up by a proper combination of gauges, i.e., by addition or subtraction, as shown
in Figs 7.13 and 7.14. Figure 7.15 shows a square plate used in conjunction with angle gauges. All its
faces are at right angles to each other. With the help of a square plate, the working range of an angle-
gauge set can be extended in degrees, minutes or seconds.
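The build-up by addition and subtraction lends itself to a small search: treat each gauge of the 13-piece set as added, subtracted, or left out, and look for a combination equal to the target angle. A brute-force Python sketch (the function name and the 37° 9' 18" target are my own illustrative choices):

```python
from itertools import product

# The 13-piece set from the text, expressed in seconds of arc
DEGREES = [1, 3, 9, 27, 41]     # degree gauges
MINUTES = [1, 3, 9, 27]         # minute gauges
SECONDS = [3, 6, 18, 30]        # second gauges
GAUGES = [d * 3600 for d in DEGREES] + [m * 60 for m in MINUTES] + SECONDS

def find_combination(target_seconds):
    """Return (gauge_seconds, sign) pairs that build the target angle,
    where sign +1 means added and -1 means subtracted."""
    for signs in product((1, 0, -1), repeat=len(GAUGES)):
        if sum(s * g for s, g in zip(signs, GAUGES)) == target_seconds:
            return [(g, s) for g, s in zip(GAUGES, signs) if s != 0]
    return None

# Example: build 37 deg 9 min 18 sec
target = 37 * 3600 + 9 * 60 + 18
combo = find_combination(target)
print(combo)
```

Here 37° is obtained as 27° + 9° + 1°, so the search finds a purely additive combination; subtractive signs come into play for values such as 50', which needs 1° − 9' − 1'.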
Fig. 7.13 Addition of angle gauges
Fig. 7.14 Subtraction of angle gauges (α − β)
9. Sine Bar A sine bar has a roller at each end, with centre-distance options of 100 mm, 200 mm and 300 mm.
During its manufacture, the various parts are hardened and stabilized
before grinding and lapping. The rollers are brought in contact with the
bar in such a way that the top surface of the bar is absolutely parallel to
the centreline of the (setting) rollers. The holes drilled in the body of the
sine bar to make it lighter and to facilitate handling, are known as relief
holes. This instrument is always worked with true surfaces like surface
plates. Sine bars are available in several designs for different applications.
Figure 7.16 shows the nomenclature for a sine bar as recommended by
IS: 5359–1969. Figure 7.17 shows the pictorial view of a sine bar of
centre distance equal to 300 mm.
Fig. 7.15 Square plate
Fig. 7.16 Nomenclature of a sine bar (end face, setting rollers, lower face, upper/working surface, relief holes; centre distance 100, 200 or 300 mm)
Fig. 7.17 Pictorial view of a sine bar showing 300-mm centre distance between the two setting rollers
Principle of using Sine Bar The law of trigonometry is the base for using a sine bar for angle mea-
surement. A sine bar is designed to set the angle precisely, generally in conjunction with slip gauges. The
angle is determined by an indirect method as a function of sine—for this reason, the instrument is called
‘sine bar’. Also, to set a given angle, one of the rollers of the bar is kept on the datum surfaces (generally,
the surface plate), and the combination of the slip gauge set is inserted under the second roller. If L is the
fixed distance between the two roller centres and H is the height of the combination slip gauge set then
sin θ = H/L …(i)  or  θ = sin−1 (H/L) …(ii)
Thus, using the above principle, it is possible to set out any precise angle by building the height H
using formula (i); Fig. 7.18 explains the principle of using a sine bar for setting an angle. Alternatively,
an unknown angle can be measured by measuring the height difference between the centres of the two
rollers and using formula (ii).
Fig. 7.18 Principle of using a sine bar (L = length between roller centres)
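As a numerical illustration of formulae (i) and (ii) (a Python sketch; the bar size and the height reading are illustrative assumptions):

```python
import math

L = 200.0  # centre distance between the two rollers, mm (a common sine-bar size)

# Setting an angle, formula (i): slip-gauge height H for a desired angle
theta = 30.0  # degrees
H = L * math.sin(math.radians(theta))
print(f"H = {H:.4f} mm")        # 100.0000 mm for 30 deg on a 200-mm bar

# Measuring an angle, formula (ii): angle from a measured height difference
H_measured = 51.764  # mm (illustrative reading; 200*sin(15 deg) = 51.764 mm)
angle = math.degrees(math.asin(H_measured / L))
print(f"angle = {angle:.4f} deg")
```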
Figure 7.19 shows the accessories used for setting and measuring the angles, viz., slip gauges, and
dial indicator.
When the component is small and can be mounted on the sine bar then setting of instruments for
measuring unknown angles of the component surface is as shown in Fig. 7.20.
Refer Fig. 7.20. The height of slip gauges is adjusted until the dial gauge reads zero at both ends of
the component, and the actual angle is then calculated using formula (ii) above.
When the component is large in size or heavy, the component is placed on the datum surface and the
sine bar is placed over the component, as shown in Fig. 7.21. The height over the rollers is measured
using a height gauge. A dial test gauge is mounted (on the slider instead of a blade) on the height gauge
as a fiducial indicator to ensure constant measuring pressure. This could be achieved by adjusting the
height gauge until the dial gauge shows the same zero reading each time.
Fig. 7.19 Set of sine bar, slip gauge and dial indicator
(Courtesy, Metrology Lab Sinhgad C.O.E., Pune.)
Fig. 7.21 Angle measurement using sine bar and vernier height gauge (angle plate, dial gauge, component and sine bar labeled)
Note down the two readings for the two positions shown in Fig. 7.21 of either of the rollers. If H is
the difference in the heights and L is the distance between the two roller centres of the sine bar, then
angle of the component surface = θ = sin−1 (H/L)
Other Aspects of Use of Sine Bar To measure and/or set an angle accurately using a sine bar,
the bar itself must be accurate. For this, it must possess some important geometrical and
constructional features:
i. The axis of the rollers must be parallel to each other and the centre distance L must be precisely
known; this distance specifies the size of the sine bar.
ii. The rollers must be of identical diameters and round within a close tolerance.
iii. The top surface of the sine bar must have a high degree of flatness and it should be parallel to
the plane connecting the axis of the rollers.
The accuracy requirement and tolerance specified by IS: 5359–1969 for a 100-mm sine bar are as
follows:
Any deviation of the dimensional size of the sine bar from the specifications mentioned above
may lead to an error in angular measurement. Hence, some of the sources of error in a sine bar are
i. error in the distance between the two rollers,
ii. error in parallelism of the upper and lower surfaces w.r.t. the datum surface on which it rests and w.r.t. the plane of the roller axes,
iii. error in equality of the roller diameters and in their cylindricity,
iv. error in parallelism between the two roller axes,
v. error in flatness of the upper surface, and
vi. error in the slip-gauge combination or its wrong setting w.r.t. the sine bar.
Error also arises from the measurement of high angles itself: as the angle increases, the error due to the
combined effect of the centre-distance and gauge-block accumulated tolerances increases. Below 45°,
this type of error is small. Hence, sine bars are not recommended for measuring or setting
angles larger than 45°.
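The rise of error with angle can be checked numerically. Differentiating sin θ = H/L gives δθ ≈ δH/(L cos θ), so for a fixed height error δH the angle error grows with θ (a Python sketch; L and δH are illustrative values):

```python
import math

L = 200.0   # centre distance, mm
dH = 0.001  # combined height error in the gauge stack, mm (illustrative)

errors = {}
for theta_deg in (10, 30, 45, 60, 75):
    theta = math.radians(theta_deg)
    # delta(theta) ~= dH / (L cos theta), converted to seconds of arc
    errors[theta_deg] = (dH / (L * math.cos(theta))) * (180 / math.pi) * 3600
    print(f"{theta_deg:2d} deg -> angle error ~ {errors[theta_deg]:.2f} arcsec")
```

For these values the error roughly doubles between 10° and 60° and nearly quadruples by 75°, which is why sine bars are avoided above 45°.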
10. Sine Centre These are used in situations where it is difficult to mount the component on the
sine bar. Figure 7.22 shows the construction of a sine centre. This equipment itself consists of a sine
bar, which is hinged at one roller end and mounted on the datum end.
Two blocks are mounted on the top surface of the sine bar, which carry two centres and can be
clamped at any position on the sine bar. These two centres can be adjusted depending upon the
length of the conical component. The procedure to measure is the same as it is in case of the use of
a sine bar. Figure 7.23 shows the use of a sine centre for measuring the included angle of the taper
plug gauge.
Apart from the sine bar and sine centres, sine tables are also used to measure angles. Specifically it
can be used for measuring compound angles. These are used for radial as well as linear measurement.
11. Vernier Clinometer Figure 7.24 explains the constructional details of a vernier clinometer.
It consists mainly of a spirit level mounted on a rotating member, which is hinged at one end in hous-
ing. One of the faces of the right-angle housing forms the base for the instrument. This base of the
Fig. 7.22 Sine centre (blocks, centres, support, rollers; size 200/300 mm)
Fig. 7.23 Measurement of included angle of taper plug gauge using sine centre (conical work, slip gauges, roller pivot, datum surface)
instrument is placed on the surface whose angle is to be measured. Then the rotary member is rotated
and adjusted till the zero reading of the bubble in the spirit level is obtained. A circular scale fixed on
the housing can measure the angle of inclination of the rotary member, relative to a base against an
index.
Fig. 7.24 Vernier clinometer (circular scale, spirit level, housing, hinge, base, rotating member)
A further modification of the vernier clinometer is the micrometer clinometer (refer Fig. 7.25). It
consists of a spirit level, one end of which is attached to the barrel of a micrometer while the other end is
hinged on the base. The base is placed on the surface whose angle is to be measured. The micrometer
is adjusted till the level is horizontal. It is generally used for measuring small angles.
Fig. 7.25 Micrometer clinometer (spirit level, base hinge, micrometer gauge)
Other types of clinometers are dial clinometer and optical clinometer, which use the same working
principle used in the case of a bevel protractor (and optical bevel protractor). The whole angle can be
observed through an opening in the dial on the circular scale and the fraction of an angle can be read
on the dial. In case of an optical clinometer, the reading can be taken by a measuring microscope on
a graduated scale provided on a fixed circular glass disc. With this instrument, angles even up to 1' can
be measured.
12. Autocollimator An autocollimator is used to detect and measure small angular tilts of a
reflecting surface placed in front of the objective lens of the autocollimator. Ideally, the area of
the reflecting surface should be at least equal to the area of the objective lens. However, this is not
generally the case when the autocollimator is used in conjunction with angle gauges or a polygon.
Therefore, since the objective lenses fitted to most commercial instruments have small but significant
wavefront errors, it is important to position the autocollimator so that its optical axis passes through
the centre of the reflecting face of the angle gauge or polygon, reducing the effect of wavefront errors
to a minimum. Figure 7.26 explains the working principle of an autocollimator.
Fig. 7.26 Principle of autocollimator (source, reflecting mirror R, image displaced by X = 2fθ through tilt θ)
If a parallel beam of light is projected from the collimating lens and if a plane reflector R is set up
normal to the direction of the beam, the light will be deflected back along its own path and will be
brought to a focus exactly at the position of the light source. If the reflector is tilted through a small
angle θ, the parallel beam will be deflected through twice the angle, and will be brought to a focus in
the same plane as the light source but to one side of it. The image and the source will not coincide; there
will be a distance X = 2fθ between them, where f is the focal length of the lens.
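This relation converts a measured image displacement directly into a reflector tilt (a Python sketch; the focal length and displacement are illustrative values):

```python
import math

f = 500.0  # focal length of the objective lens, mm (illustrative)
X = 0.01   # measured displacement of the reflected image, mm (illustrative)

theta_rad = X / (2 * f)                         # tilt of the reflector, from X = 2*f*theta
theta_arcsec = theta_rad * (180 / math.pi) * 3600
print(f"tilt ~ {theta_arcsec:.2f} seconds of arc")
```

Note the factor of 2: the reflected beam is deflected through twice the tilt angle, which is what gives the autocollimator its sensitivity.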
The autocollimator should be rotated about its optical axis, if such a provision exists, until a move-
ment of the reflected image perpendicular to the direction of measurement produces no change of
reading. For photoelectric autocollimators, this condition should be achieved using the photoelectric
detector.
The confusing method of making use of the appearance of the object wires seen directly through
the microscope is removed in modern autocollimator designs. Figure 7.27 shows the graticule
situated to one side of the instrument (along the axis perpendicular to the main axis).
Fig. 7.27 Modern autocollimator (lamp, target graticule, objective lens, micrometer)
A transparent beam splitter reflects the light from the graticule towards the objective, and thus the microscope forms
no direct image. The image formed after reflection, whose angular variations are being measured, is
formed by the light from the objective, which passes through the 45° beam splitter and this image is
picked up by the microscope. In this type of autocollimator, the microscope is fitted to a graticule opti-
cally at right angles to the eyepiece graticule. One of the important advantages of using an autocollima-
tor is that the instrument can be used at a considerable distance away from the reflector. In Fig. 7.28,
the set-up to measure the angular tilt in a horizontal plane (observe the direction of the curved arrow)
is shown. This set-up can also be used for measuring the flatness and straightness of the surface on
which the reflecting mirror is kept as a reflecting plane.
An autocollimator should ideally be used in an environment where air currents in the optical path
between the autocollimator and the reflecting surface are minimal. Such air currents, by introducing
changes in density and, therefore, of refractive index, produce random movements of the observed
image, impairing the accuracy of the autocollimator setting. For this reason, the distance between the
objective lens and the reflecting surface should be kept to a minimum and, where practicable, the light
path should be shielded from the surrounding air.
Calibration of an autocollimator can be made using the NPL-designed small angle generator.
In the case of visual and photoelectric-setting type autocollimators, small angles are generated to
check the periodic and progressive errors of the micrometer screw which enables the displace-
ment of the reflected image of the target cross-lines to be measured. In the case of automatic
position-sensing electronic autocollimators, the calibration points will have to be agreed upon with
the customer.
Fig. 7.28 Set of autocollimator along with square prism and mirror to measure small
angular tilts in the horizontal plane
(Courtesy, Metrology Lab Sinhgad COE, Pune)
Measurement Uncertainties
• Visual setting autocollimator: ±0.3 second of an arc over any interval up to 10 minutes of an arc
• Photoelectric setting autocollimator: typically ±0.10 second of an arc over any interval up to 10 minutes of an arc
• Automatic position-sensing electronic autocollimator: typically ±0.10 second of an arc over any interval up to 10 minutes of an arc
The service has the following advantages over the previous service: (1) calibrations can be made at
any number of user-defined calibration points; and (2) improved measurement uncertainty.
The system has a total operating range of ±10 degrees but with an increased and yet to be quantified
measurement uncertainty.
Case Study Generation of angles by indexing tables is achieved by the meshing of two similar
sets of serrations. Calibration of such tables submitted for test is effected by mounting the table
under test on top of one of the NPL indexing tables and using a mirror-autocollimator system to
compare angles generated by the table under test with similar angles generated by the NPL table. For
the purpose of assessing the accuracy of performance of the serrated type of table, it is considered
sufficient to intercompare each successive 30-degree interval with the NPL table, thus providing 144
comparative measurements. The small angular differences between the two tables are measured by a
photoelectric autocollimator capable of a discrimination of 0.02 second of an arc. A shortened test
may be applied to indexing tables, which have a reproducibility of setting significantly poorer than
the NPL tables, that is, greater than 0.05 second of an arc. For such tables, three sets of measure-
ments of twelve consecutive 30-degree rotations of the table under test are compared with the NPL
table. Between each set of measurements, the test table is moved through 120 degrees relative to the
NPL table.
The uncertainty of measurement is largely dependent on the quality of the two sets of serra-
tions. The criterion for assessing this quality is to check the reproducibility of angular positions of
the upper table relative to the base. Indexing tables similar to the NPL tables will normally repeat
angular positions in between 0.02 to 0.05 second of an arc. The uncertainty of measurement for the
calibration of these tables, based on 144 comparative measurements, is ±0.07 second of an arc.
Indexing tables having a slightly lower precision of angular setting, say between 0.05 and 0.2 second
of an arc, are calibrated by making 36 comparative measurements and the uncertainty of measurement
of the calibrated values will be between ±0.25 and ±0.5 second of an arc.
Fig. 7.30 (a) and (b) Indexing table
The basic standards for angle measurement used by NPL depend either on the accurate divi-
sion of a circle or on the generation of a known angle by means of a precision sine-bar. Several
methods are available for dividing a circle, but the one employed by NPL for undertaking measure-
ments for the precision engineering industry is based on the accurate meshing of two similar sets
of uniformly spaced vee-serrations formed in the top (rotatable) and base (fixed) members of an
indexing table.
NPL possesses two such indexing tables—one having 1440 serrations and the other 2160 ser-
rations—thus providing minimum incremental angles of 15 and 10 minutes of an arc respectively
throughout 360 degrees. The 1440 table is fitted with a sine-bar device, which enables the
15-minute of an arc increment to be subdivided to give a minimum increment of 0.1 second of an
arc. The accuracies of the NPL master indexing tables are checked by comparison with a similar
table. An essential accessory for the application of these indexing tables to the measurement of
the angle is an autocollimator or some other device for sensing the positions of the features that
define the angle to be calibrated. The autocollimator is used to measure the small angular differ-
ence between the item under test and the approximately equal angle generated by the serrations
of the indexing table.
The autocollimator is set to view either a mirror fixed to the upper (rotatable) member of the
indexing table or the reflecting surfaces of the item under test, e.g., the faces of a precision polygon
or an angle gauge. The settings of the table are made approximately and the small angular deviations
between the angle generated by the table and the angle of the test piece are measured by the autocol-
limator.
Angles generated using sine functions are realized by means of an NPL Small Angle Generator
designed to operate over a range of ±1 degree and intended primarily for the calibration of auto-
collimators and other instruments which measure small angular changes in the vertical plane. This
instrument is essentially a sine-bar, which can be tilted about the axis of a cylindrical shaft fitted
at one end of the bar. Predetermined angular tilts of the sine-bar are effected by inserting gauge
blocks between a ball-ended contact fitted to the sine-bar and a fixed three-point support platform.
The perpendicular separation of the axis of the cylindrical shaft and the centre of the ball-ended
contact of the sine-bar is 523.912 6 mm. Thus a vertical displacement of the ball-ended contact of
0.002 54 mm produces an angular change of the sine-bar of 1 second of an arc throughout a range
of ±10 minutes of an arc. (The normal measuring range of an autocollimator is ±5 minutes of an
arc.) The uncertainty of the NPL small-angle generator is estimated to be ±0.03 second of an arc for
angles in the range of ±10 minutes of an arc. A fused silica reflector, of 75-mm diameter, is mounted
on the sine-bar over its tilt axis and is viewed by the autocollimator under test. This reflector is flat
within 0.01 μm and is used for checking the flatness of the wavefront of the autocollimator.
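The quoted figures can be cross-checked: a vertical displacement of 0.00254 mm over an arm of 523.9126 mm does subtend very nearly 1 second of arc (a Python check using only the numbers given above):

```python
import math

arm = 523.9126  # mm, perpendicular separation quoted for the NPL small angle generator
dh = 0.00254    # mm, vertical displacement of the ball-ended contact

angle_rad = math.asin(dh / arm)
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"{angle_arcsec:.4f} seconds of arc")   # very close to 1.0000
```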
When new steel polygons and angle gauges are submitted for certification, written evidence is
required from the manufacturer to show that they have received a recognized form of heat treatment
to ensure dimensional stability.
Although there are many different types of industrial requirements involving accurate angular
measurement, only the types of work listed below are normally dealt with by NPL. However, other
13. Angle Dekkor
(Figure: angle dekkor optical system — microscope eyepiece, prism, lamp, glass scale, illuminated scale, datum scale, workpiece, collimating lens, fixed scale, reflected image of illuminated scale)
The reflected image is refocused by the lens in such a way that it comes into the view of the eyepiece.
This image can be seen on a glass scale which is placed in the focal plane of the objective lens. It falls
not across a simple datum line, but across a similar fixed scale at right angles to the illuminated image. The
reading on the illuminated scale measures angular deviations about one axis at about 90° to the optical
axis, and the reading on the fixed scale gives the deviation about an axis mutually at right angles to the
other two.
This enables a reading to be obtained on a setting ‘master’. The master may be a sine bar or a combina-
tion of angle gauges set up on a base plate, and the instrument is adjusted until a reading is obtained on
both scales. Then, the ‘master’ is replaced by the work, and a slip gauge is placed on the
surface of the workpiece to provide a good reflective surface. Now the work is rotated until the fixed-scale
reading is the same as that with the setting gauge. The difference between the two readings on the illuminated
scale is the error in the work-surface angle.
Review Questions
1. List the various instruments used for angle measurement and explain angle gauges.
2. Explain the construction and use of a vernier and an optical bevel protractor.
3. What is a sine bar? Explain the procedure to use it using a sketch.
4. Discuss the limitations of the use of a sine bar.
5. Explain different types of sine bars with sketches.
6. The angle of a taper plug gauge is to be checked using angle gauges and the angle dekkor. Sketch
the set-up and describe the procedure.
7. Write short notes on (a) Vernier bevel protractor (b) Autocollimator (c) Sine bar (d) Angle
dekkor.
8. Describe and sketch the principle of working of an autocollimator and state its applications.
9. Discuss the construction and use of a vernier and micrometer clinometer.
10. What are angle gauges? Explain with suitable examples how they are used for measuring angles.
11. Explain the construction, working and uses of the universal vernier bevel protractor.
12. Sketch two forms of a sine bar in general use. Explain the precautions to be taken while using it to
measure angles.
13. Write a technical note on angle gauge blocks, specifying their limitations. Also explain to what
accuracy angles can be generated with angle blocks.
14. Describe the principle of an angle-dekkor and mention its various uses.
8 Interferometry
“The sizes of end standards can also be determined by interferometry principles very accurately…”
Prof. M G Bhat, Professor Emeritus and Technical Director, Sinhgad College of Engineering,
Pune University, Pune, India
8.1 INTRODUCTION
Huygens’ theory proposes that light can be considered as a wave motion propagated in ether as an electromag-
netic wave of sinusoidal form. The maximum disturbance of the wave is called the amplitude, and the number
of waves passing a given point per second is called the frequency. The higher points of a wave are called crests
and the lower points are called troughs. The distance between two successive crests (or troughs) is called the
wavelength (refer Fig. 8.1). The time taken by light in covering one wavelength is called the time period.
The establishment of size accurately in relation
to national and international standards of length is of fundamental importance that is used for achieving
Fig. 8.1 Light wave of sinusoidal form (crest, trough, amplitude and wavelength λ)
dimensional accuracy of a product. This wave nature of light is not apparent under ordinary conditions
but when two waves interact with each other, the wave effect is visible and it can be made useful for
measuring applications. For example, when light is made to interfere, it produces a pattern of dark bands
which corresponds to a very accurate scale of divisions. The particular characteristic of this entity is the
unit value of the scale, which is exactly one-half wavelength of light used. As this length is constant, it
can be used as a standard of measurement. The use of interferometry technique enables the determina-
tion of size of end standards (slip gauges and end bars) directly in terms of wavelength of light source
whose relationship to the international wavelength standard is known to a high degree of accuracy. The
subsidiary length standards, which include workshop and inspection slip gauges, setting meters, etc., are
calibrated with the help of interferometrically calibrated reference-grade slip gauges for retaining accuracy.
The French physicist Babinet suggested in 1829 that light waves could be used as a natural standard of
length. Later, a great deal of research was carried out along similar lines on the use of interferometry
techniques, culminating in the establishment of the end standards, the yard and the metre, in terms of
the wavelength standard in 1960. The wavelength of the orange light from the krypton-86 spectrum was used.
White light is the combination of all the colours of the visible spectrum: red, orange, yellow, green,
blue, indigo and violet, and each of these colour bands consists of a group of similar wavelengths.
This combination of all the wavelengths of the visible spectrum is therefore not suitable for
interferometry. To overcome this difficulty, monochromatic light sources like mercury, mercury 198,
cadmium, krypton, krypton-86, thallium, sodium and laser beams are used. Such a source produces
rays having a single frequency and wavelength, which provides advantages like reproducibility, higher
accuracy (about one part in one hundred million), a specific precise wavelength value, and virtual
independence of ambient conditions.
Figure 8.2 explains the effect of combining two light rays, A and B, which are of the
same wavelength. When they happen to be in phase, the result is an increased resultant
Interferometry 223
Fig. 8.2 Effect of combination of two monochromatic rays of equal amplitude, which are in phase
aA = amplitude of wave A, aB = amplitude of wave B, aR = aA + aB = resultant amplitude (R)
amplitude. It is the addition of the amplitudes of the combined rays. Hence, if two rays of equal
intensity are in phase, they augment each other and produce increased brightness. If rays A and B
differ in phase by 180°, the combined result R will be very small, and will be zero if the amplitudes
aA and aB are equal. Therefore, if two rays of equal intensity differ in path by λ/2 (a phase difference
of 180°), they nullify each other and result in darkness.
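The amplitude addition just described can be sketched numerically. A minimal Python illustration of phasor addition of two equal-wavelength rays (the function name and sample amplitudes are illustrative assumptions, not from the text):

```python
import math

def resultant_amplitude(a_a, a_b, phase_deg):
    """Amplitude of the superposition of two sinusoids of equal
    wavelength whose phase difference is phase_deg degrees:
    R^2 = aA^2 + aB^2 + 2*aA*aB*cos(phase)."""
    phi = math.radians(phase_deg)
    return math.sqrt(a_a ** 2 + a_b ** 2 + 2 * a_a * a_b * math.cos(phi))

# Rays in phase: amplitudes add, giving increased brightness.
print(resultant_amplitude(1.0, 1.0, 0))    # 2.0
# Rays differing in phase by 180 deg (path difference lambda/2): darkness.
print(resultant_amplitude(1.0, 1.0, 180))  # 0.0
```

For equal amplitudes the two extreme cases reproduce the text: in phase the resultant is the sum, and at 180° out of phase it vanishes.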
The above discussion shows that interference can occur only when two rays are coherent, that is,
when their phase difference is maintained for an appreciable length of time. This is possible only when
the two rays originate from the same point of a light source at the same time.
Fig. 8.3 Effect of combination of two monochromatic rays (amplitudes aA and aB) which are out of phase, and their resultant
i. Monochromatic light is allowed to pass through a very narrow slit (S ), and then allowed to pass
through the other two narrow slits (S1 ) and (S2 ), which are very close to each other.
ii. Two separate sets of rays are formed which pass through one another in the same medium.
iii. If the paths S1L2 and S2L2 are exactly equal, the rays on these paths will be in phase, which results
in constructive interference, producing maximum intensity or a bright band. The phenomenon
remains the same for L1 and L3.
iv. If at the same point D1, the ray path difference is equal to half the wavelength
(S2D1 − S1D1 = λ/2), it results in an out-of-phase condition producing zero intensity or
a dark band due to destructive interference. The phenomenon remains the same for D2.
v. Thus, a series of bright and dark bands is produced. The dark bands are called interference
fringes. The central bright band is flanked on both sides by bands which are alternately of
minimum and maximum intensities and are known as interference bands.
Fig. 8.4 Way of producing an interference pattern (light source, slit S, slits S1 and S2, and
screen with light bands L1, L2, L3 and dark bands D1, D2)
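The bright/dark conditions in steps (iii) and (iv) can be expressed as a small check on the path difference. A Python sketch (the function name and the 0.5-micron wavelength are assumed for illustration):

```python
def band_type(path_difference, wavelength):
    """Classify a screen point in the double-slit arrangement of Fig. 8.4:
    a whole number of wavelengths gives a bright band, an odd number of
    half-wavelengths gives a dark band."""
    half_waves = path_difference / (wavelength / 2)
    n = round(half_waves)
    if abs(half_waves - n) > 1e-6:
        return "intermediate"
    return "bright" if n % 2 == 0 else "dark"

lam = 0.5  # micron, assumed monochromatic wavelength
print(band_type(0.0, lam))       # bright: paths S1L2 and S2L2 equal
print(band_type(lam / 2, lam))   # dark:   S2D1 - S1D1 = lambda/2
print(band_type(lam, lam))       # bright: full-wavelength difference
```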
Another simple method of producing interference fringes is by illuminating an optical flat placed over a
plane reflecting surface. An optical flat is a disc of glass or quartz whose faces are highly polished and flat
within a fraction of a micron. These are cylindrical pieces whose diameters range from 25 mm to 300 mm,
with a thickness of 1/6th of the diameter. When an optical flat is kept on a nearly flat reflecting surface,
dark bands can be seen. For measuring flatness, in addition to an optical flat, a monochromatic light
source is also required; the yellow–orange light radiated by helium gas can be used satisfactorily. Such an
arrangement is shown in Fig. 8.5. Optical flats are of two types, namely, Type A and Type B. A Type-A
optical flat has a single flat surface and is used for testing precision measuring surfaces, e.g., surfaces of
slip gauges, measuring tables, etc.
A Type-B optical flat has both the working surfaces flat and parallel to each other. These are used
for testing the measuring surfaces of instruments like micrometers, measuring anvils and similar other
devices for their flatness and parallelism.
As per IS 5440–1963, optical flats are also specified by grade: Grade 1 is a reference grade with a
flatness tolerance of 0.05 micron, and Grade 2 is a working grade with a flatness tolerance of
0.10 micron.
Fig. 8.5 Monochromatic light source set-up along with optical flat and surface under test
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India).
If, because of some imperfection (like the surface being not truly flat, convex/concave or cylindrical,
or because of any foreign material present between the surface under test and the bottom surface of
the optical flat), the optical flat cannot make an intimate contact, it rests at some angle ‘θ’. In this
situation, if the optical flat is illuminated by a monochromatic light, interference fringes will be
observed (refer Fig. 8.5 for the set-up). These are produced by the interference of light rays reflected
from the bottom surface of the optical flat and the top surface of the workpiece under test, through
the medium of air, as shown in Figs 8.6 and 8.7.
An optical flat is shown at a much exaggerated inclination with the test surface, where the air-space
distances differ by one-half-wavelength intervals. Dark bands indicate the curvature of the workpiece
surface.
Fig. 8.6 Application of the monochromatic interference method (monochromatic light incident on an
optical flat resting on the workpiece; air gap H over diameter D)
Referring to Fig. 8.6, the bands are represented by B and the mean spacing by A. The
amount X by which the surface is convex or concave (as in the present instance) is given by the
following relation:

X = (B/A) × (λ/2) over the diameter D of the optical flat

The wavelength of the helium light is λ = 0.000022 inch, so that X = 0.000011 × (B/A) inch.
Thus, if B is one-quarter of A, it indicates that the surface is concave by 0.0000028 inch over the
diameter D. If mercury green light is used for the monochromatic bands, the corresponding
wavelength value will be 21.50 micro-inches. This phenomenon is explained in detail as follows.
A wave from a monochromatic light source L is made incident on the optical flat (refer Fig. 8.7)
placed on the surface under test. The wave is partially reflected from a point a on the bottom
surface of the optical flat, and partially transmitted and then reflected from the point b on the top
surface of the surface under test, through the entrapped air. These two components of reflected light
are recombined at the eye. The rays differ in path by the length abc. The rays emerging at points x
and y, which have slightly different directions, can be brought together by an optical instrument or the eye.
If the length abc is equal to λ/2 (where λ is the wavelength of the light source) then the dark band is
seen. A similar situation can occur at all points like b which are in a straight line across the surface being
checked, and due to this a straight dark band could be seen.
Fig. 8.8 Alternate dark and bright bands (air gap increasing by λ/2 between successive dark bands at
points b, e, g across the surface)
Similarly, at another point along the surface the ray L again splits up into two components whose
path difference length def is an odd number of half-wavelengths and the rays from d and f interfere to
cause darkness. The second dark band is shown by the point e (refer Fig. 8.8).
The amount of inaccuracy of a surface tested by the optical-flat method can readily be estimated
by measuring the distance between the bands; there will be a surface inaccuracy of 0.00001 inch
over the distance of each consecutive band. For accurate measurements, the distance between the
colour fringes should be taken from the dark centre, or from the edge of the red colour nearest the
centre, of the colour fringe.
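The estimate can be reproduced with the relation X = (B/A) × (λ/2) given earlier. A minimal Python sketch using the helium-light figure of 0.000022 inch quoted in the text (the function name and sample ratio are illustrative):

```python
def flatness_error(band_displacement, band_spacing, wavelength):
    """Out-of-flatness X over the diameter of the optical flat:
    X = (B / A) * (lambda / 2), one fringe step being lambda / 2."""
    return (band_displacement / band_spacing) * (wavelength / 2)

lam = 0.000022  # inch, helium-light value quoted in the text
# B equal to one-quarter of A, as in the worked case:
x = flatness_error(0.25, 1.0, lam)
print(x)  # about 0.00000275 inch, i.e., concave by roughly 0.0000028 in
```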
The development of a particular type of interference pattern depends mainly upon the relationship
between the geometry of the surface and the position of the optical flat. The following are some of
the interference patterns in different situations. (See Fig. 8.10(a), Plate 7.)
Fig. 8.10(a) Interference patterns observed through an optical flat in different situations (points A, B, C
marked on each pattern): (1) Perfectly flat surface, but the contact is not good.
∗In order to determine whether the surface is convex or concave, it must be pressed with the finger tip at the
centre of the rings. If the colour fringes move away from the centre, it indicates convexity; and if they move in
towards the centre, the surface is concave. Some such examples are shown as follows:
Fig. 8.10(b) Different interference patterns (v)–(viii)
As the inclination between the optical flat and the test surface increases, the fringes are brought closer
together; as the inclination reduces, the fringe spacing increases and the fringes become nearly parallel.
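This inverse relation between tilt and fringe spacing follows from the fact that successive fringes occur where the wedge-shaped air gap grows by λ/2, giving spacing ≈ λ/(2θ) for a small wedge angle θ. A Python sketch of this standard relation (the wavelength and angles are assumed values):

```python
def fringe_spacing(wavelength, wedge_angle_rad):
    """Air-wedge fringe spacing: successive fringes occur where the air
    gap grows by lambda/2, so spacing = lambda / (2 * theta)."""
    return wavelength / (2 * wedge_angle_rad)

lam = 0.5  # micron, assumed monochromatic source
for theta in (1e-4, 2e-4, 4e-4):   # radians: doubling the tilt...
    print(fringe_spacing(lam, theta))  # ...halves the fringe spacing
```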
Figure 8.11 shows the optical arrangement of an NPL flatness interferometer. This instrument was
first constructed by the NPL and manufactured commercially by Hilger and Watts Ltd. The flatness
of the surface under test is measured by comparing it with an optically flat surface, which is generally
the base plate of the interferometer; hence, it works on the principle of comparison. Either cadmium
or mercury 198 is used as the monochromatic source of light. Each of these gives four wavelengths:
with cadmium, red, green, blue and violet; with mercury 198, green, violet and two yellows.
The whole instrument is built on a single rigid casting, and the specimen (e.g., a gauge) under
test is completely enclosed to minimize the effects of temperature variations. In this instrument
(the simplest form of NPL interferometer), a mercury lamp is used as the light source, and its
radiation is passed through a green filter to obtain green monochromatic light. This light is focused
on to a pinhole, giving an intense point source of monochromatic light. The pinhole is in the focal
plane of the collimating lens and is thus projected as a parallel beam of light. The wavelength of the
resulting monochromatic radiation is of the order of 0.5 micron.
Fig. 8.11 Optical arrangement of the NPL flatness interferometer (condensing lens, green filter,
pinhole, collimating lens, and semi-reflector at 45 degrees)
Now, the beam is directed on to the gauge under test, which is wrung on the base plate, via an optical
flat in such a way that interference fringes are formed across the face of the gauge. The fringes can be
viewed from directly above by means of a thick glass-plate semi-reflector set at 45° to the optical axis.
The various results can be studied for comparison.
In the case of large-length slip gauges, the parallelism of surfaces can also be measured by placing the
gauge on a rotary table in a specific position and taking reading number 1: the number of fringes
observed, which is the result of the angle that the gauge surface makes with the optical flat, is noted.
The table is then turned through 180° and reading number 2 is taken, the number of fringes again
being noted. The error in parallelism can then be obtained by the following calculation.
The change in distance between the gauge and the optical flat per fringe = λ/2.

Error in parallelism = ((n2 − n1) × λ)/4

where n1 = number of fringes in the first position,
and n2 = number of fringes in the second position.
This is also known as the Pitter–NPL gauge interferometer. It is used to determine the actual
dimensions or absolute length of a gauge.
(Fringe patterns may show that a gauge is flat but not parallel from one side to the other, or flat but
not parallel from one end to the other.)
Monochromatic light from the source falls on a slit through a condensing lens; after passing through
the collimating lens, it goes through the constant-deviation prism, whose rotation selects the
wavelength of the light rays passing through the optical flat to the upper surface of the gauge block
under test and the base plate on which it is wrung. The light is reflected by the mirror and the fringe
patterns can be observed through a telescopic eyepiece. The construction is shown in Fig. 8.12.
Fig. 8.12 Gauge-length interferometer (monochromatic light source, illuminating aperture, condensing
lens, plate, mirror, optical flat over the gauge to be measured, which is wrung on the base plate;
fringe displacement a and fringe spacing b)
The actual curvature can be determined by comparing the fringe displacement a with the fringe
spacing b. Since one fringe spacing corresponds to a change in height of λ/2,

a/b = h/(λ/2)  ∴ h = (a/b) × (λ/2)
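A quick numeric check of this relation, with an assumed displacement of 0.3 fringe spacing and mercury green light (both values illustrative, not from the text):

```python
def fringe_height(a, b, wavelength):
    """Height change corresponding to a fringe displacement a over a
    fringe spacing b: a/b = h/(lambda/2), so h = (a/b) * lambda/2."""
    return (a / b) * wavelength / 2

# Assumed: curvature displaces the fringe by 0.3 of its spacing,
# mercury green light (0.5461 micron):
print(fringe_height(0.3, 1.0, 0.5461))  # about 0.082 micron
```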
Illustrative Examples
Example 1 A 1.5-mm gauge is being measured on an interferometer. A lamp is used which emits wavelengths
as follows:
Red: 0.643850537 μm and Green: 0.50858483 μm. Calculate the nominal fractions expected for the gauge
for the two wavelengths.
Solution First, calculate the half-wavelength value n = λ/2 for each colour, λ being the wavelength
of the source light.
For red light,
n = λ/2 = 0.643850537/2 = 0.3219252685 μm = 0.3219252685 × 10−3 mm
For green light,
n = λ/2 = 0.50858483/2 = 0.254292415 μm = 0.254292415 × 10−3 mm
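The solution above stops at the half-wavelength values; the nominal fraction itself is the fractional part of the gauge length divided by the half-wavelength. A Python sketch completing the arithmetic (the function name is an illustrative assumption):

```python
def nominal_fraction(gauge_length_mm, wavelength_um):
    """Fractional part of the gauge length expressed in
    half-wavelengths of the given source."""
    half_wave_mm = (wavelength_um / 2) * 1e-3
    return (gauge_length_mm / half_wave_mm) % 1.0

# Cadmium red and green lines for the 1.5-mm gauge of Example 1:
print(round(nominal_fraction(1.5, 0.643850537), 3))  # about 0.466
print(round(nominal_fraction(1.5, 0.50858483), 3))   # about 0.720
```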
Review Questions
6. What do you mean by the term ‘interferometer’? What are its advantages over optical flats?
7. Sketch the optical arrangement of an NPL gauge-length interferometer and explain how it is used
to compute the thickness of a slip gauge.
8. Write short notes on
(a) Optical flat (b) Gauge-length interferometer (c) NPL flatness interferometer
9. Explain the formation of interference fringes when light falls on an optical flat resting on a lapped
surface. What is the effect of using a monochromatic beam, instead of white light?
10. Sketch the typical fringe pattern observed through an optical flat which illustrates surfaces: (a) flat
(b) concave (c) convex (d) ridged. Explain the test on an optical flat which reveals whether a surface
is convex or concave.
11. Explain the basic difference between a flatness interferometer and length interferometer.
12. A 1-mm slip gauge is being measured on a gauge-length interferometer using a cadmium lamp. The
wavelengths emitted by this lamp are
Red: 0.643850537 μm
Green: 0.50858483 μm
Blue: 0.47999360 μm
Violet: 0.46781743 μm
Calculate the nominal fractions expected for the gauge for the four wavelengths.
9 Comparator
It doesn’t measure actual dimension, but it indicates how much it varies from the basic dimension…
S M Barve, Sr Manager, Gauge Laboratory, Cummins India Ltd.
9.1 INTRODUCTION
Virtually every manufactured product must be measured in some way. Whether a company makes
automobiles or apple sauce, laptops or lingerie, it is inevitable that some characteristic of size, volume,
density, pressure, heat, impedance, brightness, etc., must be evaluated numerically at some point during
the manufacturing process, as well as on the finished product. For a measurement to have meaning, an
accepted standard unit must exist. The inspector measuring parts on the shop floor must know that his
or her millimetre (or ounce, ohm, newton or whatever) is the same as that being used on a mating part
Comparator 237
across the plant, or across the ocean. A chain of accountability, or traceability, connects the individual
gauge back to a national or international standards body to ensure this, and the comparator works on
the same principle.
Using a comparator involves:
i. Locating the object under test on a reference plane with one end of the distance to be measured.
ii. Holding the comparator in a positive position from the reference plane, with the effective
movement of its spindle in alignment with the distance to be measured.
The use of a comparator is not limited to length measurement only but many other conditions of
an object under test can be inspected and variations can be measured. The scope of a comparator is
very wide. It can be used as a laboratory standard in conjunction with inspection gauges. A precision
comparator itself can be used as a working gauge. It can be used as an incoming and final inspection
gauge; moreover, it can also be used for newly purchased gauges.
A good comparator should be able to record variations in microns, and among other desirable features
(characteristics) it should possess the following:
1. The scale used in the instrument should be linear and have a wide range of acceptability for
measurement.
2. There should not be backlash and lag between the movement of the plunger and recording
mechanism.
3. The instrument must be precise and accurate.
4. The indication method should be clear. The indicator must return to zero and the pointer should
be free from oscillations.
5. The design and construction of the comparator (supporting table, stand, etc.) should be robust.
6. The measuring pressure should be suitable and must remain uniform for all similar measuring
cycles.
7. The comparator must possess maximum compensation for temperature effects.
A wide variety of comparators is available commercially, and they can be categorized on the basis of
the way of sensing, the method used for amplification, and the way of recording the variations of the
measurand. They are classified as mechanical comparators, optical comparators, pneumatic
comparators, electrical and electronic comparators, and fluid displacement comparators. Also,
a combination of these magnifying principles has led to the development of special categories of
comparators as mechanical-optical comparators, electro-mechanical comparators, electro-pneumatic
comparators, multi-check comparators, etc. Comparators are also classified as operating either on a
horizontal or on a vertical principle. The vertical is fairly well standardized and is the most commonly
used.
Mechanical comparators are instruments for comparative measurements where the linear movement
of a precision spindle is amplified and displayed on a dial or digital display. Indicators utilize electronic,
mechanical or pneumatic technology in the amplification process; e.g., dial indicators, digital indicators
and electronic amplifiers or columns. These gauging amplifiers or instruments are available in three
main types:
1. Comparators or high-precision amplifiers (including columns or electronic amplifiers).
2. Indicators (higher precision compared to test indicators, used for inspection).
3. Test indicators (lowest precision, widely applied in production checking).
Mechanical comparators, electronic comparators or amplifiers, and pneumatic or air comparators
are gauging devices for comparative measurements where the linear movement of a precision
spindle is amplified and displayed on a dial/analog amplifier, column, or digital display. Mechanical
comparators have sophisticated, low-friction mechanisms, better discrimination (∼0.00001″), and lower
range (∼±0.0005″) compared to indicators. Comparators have a higher level of precision and less
span error compared to conventional dial or digital indicators. The level of precision is sufficient for
measurement of high-precision ground parts and for the calibration of other gauges.
Indicators are gauging devices for comparative measurements where the linear movement of a
spindle or plunger is amplified and displayed on a dial, column or digital display. Typically, indicators
have a lower discrimination (∼0.001″ to 0.0001″) and greater range (∼±1.000″ to ±0.050″ total)
compared to comparators. The level of precision is sufficient for final-part inspection.
Test indicators have the lowest discrimination when compared with indicators and comparators.
Test indicators are used mainly for set-up and comparative production part checking. They often
use a cantilevered, lever-style stylus that facilitates inspection of hard-to-reach part features, but
results in high cosine errors; a cosine error of 0.0006″ may result over a travel range of 0.010″.
Test indicators are not considered absolute measuring instruments, but comparative tools for
checking components against a standard or for zeroing-out set-ups. Other devices that fall within
the category of indicators and comparators include gauge sets, gauging stations and gauging
systems.
Dial Indicator Dial indicators are mechanical instruments for sensing measuring-distance varia-
tions. The mechanism of the dial indicator converts the axial displacement of a measuring spindle
into rotational movement. This movement is amplified by either mechanical or inductive means and
displayed by either a pointer rotating over the face of a graduated scale or digital display.
1. Mechanical Dial Indicator It is a displacement-indicating mechanism. Its design (as shown
in Fig. 9.1) is basically in compliance with American Gauge Design (AGD) specifications. In
operation, a very slight upward movement of the measuring spindle (due to a slight upward pressure
on it) is amplified through a mechanism in which the measuring spindle usually carries, as an integral
part, a rack whose teeth mesh with a pinion, the pinion being part of a gear train. This mechanism
(shown in Fig. 9.2) thus serves two functions: one is converting linear displacement of the plunger
(in turn, the rack) into rotary motion, and the other is amplifying this rotary motion by means of
driving gears (G1, G2, G3) meshing with substantially smaller pinions (P1, P2, P3); this gearing
provides the magnification.
Fig. 9.1 Mechanical dial indicator: main-scale locking screw, graduated main scale, and plunger
(one division of the small scale equals one complete revolution of the pointer over the main scale,
i.e., 1 mm of plunger movement)
Refer Fig. 9.3 (a, b, c). These are examples of typical features of commercially available dial
indicators: Type A is with a reverse measuring force • Shockproof movement via a sleeve which
floats over the spindle • Constant measuring force • Protective housing (back-wall inte-…)
Fig. 9.4 (a) Precision dial indicator (b) Rear view of the precision dial indicator (c) Dial indicator
on a magnetic base stand, and its application of checking eccentricity of a job in a chuck is shown
in Fig. (d), and Figs (e) and (f) show flexible magnetic dial stands
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India.)
Fig. 9.5 Mechanical dial indicator (comparator) with limit contacts: adjustable tolerance markers,
pointer, fine-adjustment screw, measuring spindle and contact point; relays (1) undersize, (2) good,
(3) oversize; (A, B) adjusting screws for electric contacts; (C) lifting screw
(Courtesy, Mahr GMBH Esslingen)
Fig. 9.6 Exploded view of mechanical dial comparator with limit contacts
as sensing heads without indicator scales. This is because the two limit positions of the gauge must be
set with the aid of a single master or gauge block, which represents the limit sizes. For this initial
setting, the tolerance markers of the indicator unit are brought to the desired limit positions, guided
by the indicator’s scale graduations.
Fig. 9.10 (a) Lever-type dial indicator (b) Three sides where small dovetails are
used for mounting (c, d, e) ways of mounting to check runout, circularity and ovality
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India.)
The indicator shown in Fig. 9.10 (a) has a measuring range of 0.030 in, much less than a dial
indicator, and reads plus or minus from the zero point. When the tip is at rest at its neutral point,
it can be moved 0.015 in in either direction. The tip can be set at different angles for convenience
in set-up. As on the dial indicator, the bezel and numeric scale can be rotated to zero the reading.
Each division is 0.0005 in (5 ten-thousandths, or half a thousandth, per division).
Figure 9.11 shows an exploded view of a lever-type dial indicator showing its design feature, and its
applications are explained in Fig. 9.12. The test indicator serves as an instrument for comparative mea-
surements. It can be used in any type of measuring stand. Due to the swiveling feature of the probe and
the reversal of its sensing direction, the test indicator is suitable for many measuring and checking tasks.
Its areas of application are (1) run-out and concentricity checks of shafts and bores, and (2) checks of
parallelism and alignment of flat faces in engineering and tool-making. For accurate measurements, the
axis of the contact point must be perpendicular to the measuring direction. If this is not possible, it is
necessary to multiply the reading on the dial with a correction factor, which depends on the angle α.
The correction factor is negligible for angles below 15°.
Example
Angle α: 30° (estimated)
Reading on dial: 0.38 mm
Measured value: 0.38 × 0.87 = 0.33 mm
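The correction in the example is simply multiplication by cos α (0.87 is cos 30° rounded to two figures). A minimal Python sketch of this cosine-error correction:

```python
import math

def corrected_reading(dial_reading, angle_deg):
    """Correct a lever-type test-indicator reading for cosine error
    when the contact-point axis is inclined at angle_deg to the
    measuring direction: true value = reading * cos(angle)."""
    return dial_reading * math.cos(math.radians(angle_deg))

# The worked example: alpha = 30 degrees, dial reading 0.38 mm
print(round(corrected_reading(0.38, 30), 2))  # 0.33 mm
```

Below about 15° the factor stays above 0.97, which is why the text treats the correction as negligible there.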
Fig. 9.13 Working mechanism of the Johansson Mikrokator: twisted strip, bell-crank lever and plunger
(Courtesy, C. E. Johansson Company)
Fig. 9.14 Johansson Mikrokator
Sigma Mechanical Comparator This simply designed comparator gives 300 to 5000 times
mechanical amplification. Figure 9.15 illustrates the operating principle. It consists of a plunger
attached to a rectangular bar, which is supported at its upper and lower ends by flat steel springs
(slit diaphragms) to provide frictionless movement. The plunger carries a knife-edge, which bears on
the face of the moving member of a cross-strip hinge. The cross-strip hinge consists of a moving
component connected to a fixed component by flexible strips at right angles to each other. Therefore,
when the plunger moves, the knife-edge moves and applies a force on the moving member, which
carries a light metal Y-forked arm. A thin phosphor-bronze flexible band is fastened to the ends of
the forked arms and wrapped about a driving drum to turn a long pointer needle.
Therefore, any vertical movement of the plunger makes the knife-edge move the moving block of
the cross-strip hinge about its pivot. This causes the rotation of the Y-arm, and the metallic band
attached to the arms makes the driving drum, and hence the pointer, rotate. So amplification is done
in two stages:
Total magnification = {(Effective length of arm)/(Distance from the hinge pivot to knife)}
× {(Length of pointer)/(Pointer drum radius)}
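The two-stage formula can be evaluated for a feel of the numbers. A Python sketch with purely illustrative dimensions (none of these values are from the text):

```python
def sigma_magnification(arm_length, hinge_to_knife, pointer_length, drum_radius):
    """Total magnification of a Sigma comparator, as the product of its
    two stages: (arm / hinge-to-knife) * (pointer / drum radius)."""
    return (arm_length / hinge_to_knife) * (pointer_length / drum_radius)

# Assumed dimensions in mm (illustrative only):
m = sigma_magnification(arm_length=60.0, hinge_to_knife=2.5,
                        pointer_length=100.0, drum_radius=2.0)
print(m)  # 1200.0, within the 300x to 5000x range quoted in the text
```

Note how both stages are pure length ratios, so shortening the hinge-to-knife distance or the drum radius raises the magnification.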
Fig. 9.15 Operating principle of the Sigma comparator: slit diaphragms, plunger, cross-strip hinge
(fixed member, moving member, axis of rotation) and flexible driving band
The amplification mechanism of a sigma comparator is adaptable for gauging multiple dimensions
by mounting several basic mechanisms into a common assembly arranged to have contacts with the
critical dimensions of the objects.
Dial Thickness Gauges This type of comparator also uses a dial indicator as the comparator unit.
It consists of a sturdy, rigid frame made of hard aluminium, open at one end, with a convenient
heat-insulated handle and a lifting lever for the movable upper measuring spindle. It has an accuracy
of 0.01 mm. Figure 9.16 (a) shows a model with flat measuring faces used for measuring soft
materials such as plastic films, felt, rubber, paper and cardboard, and Fig. 9.16 (b) shows a model
with spherical measuring faces for measurement of hard materials such as sheet metal, hardboard,
wooden panels and panes of glass.
Fig. 9.16 Dial thickness gauges
measuring bores and internal-groove dimensions, and absolute measurements [shown in Fig. 9.17 (i), (j),
(k)]. In these instruments, repeatability can be ensured by the rack-and-gear drive, with indicating-scale
intervals from 0.005 mm up. Contact points are made of hard metal. Tolerance marks on the dial make
for easy reading and give fast and accurate measuring results. They are very handy.
Limitations Because of the larger number of moving parts, friction and inertia are higher, and any
slackness in the moving parts reduces accuracy. Any backlash that exists gets magnified. These
instruments are also sensitive to vibrations.
Fig. 9.17 External and internal groove comparator gauges (a)–(k)
(Courtesy, Mahr GMBH Esslingen)
Fig. 9.18 ID/OD plate-gauge type comparator with electronic indicator: (a) inside measurement
without stop (gauge plate), (b) inside measurement with stop (locating stops), (c) outside
measurement; figures (a, b, c) show the principle of part location
(Courtesy, Mahr GMBH Esslingen)
instrument with which to measure the part. In the 1970s, digital readouts were introduced, as was
programmable motorized stage control. As machines became more automated, developers started to
incorporate programmable functions into the optical comparator. This paved the way for complete
automation of the optical comparator machine, and in the 1990s incorporated software became
standard optical-comparator equipment. Computers can be interfaced with optical comparators to
run image analysis. Points from manual or automatic edge detection are transferred to an external
program where they can be directly compared to a CAD data file.
Optical comparators are instruments that project a magnified image or profile of a part onto a
screen for comparison to a standard overlay profile or scale. They are non-contact devices that func-
tion by producing magnified images of parts or components, and displaying these on a glass screen
using illumination sources, lenses and mirrors for the primary purpose of making 2-D measurements.
Optical comparators are used to measure, gauge, test, inspect or examine parts for compliance with
specifications.
Optical comparators are available in two configurations, inverted and erect, defined by the type
of image that they project. Inverted image optical comparators are the general standard, and are
of the less-advanced type. They have a relatively simple optical system which produces an image
that is inverted vertically (upside-down) and horizontally (left-to-right). Adjustment and inspection
require a trained or experienced user (about two hours of practice time and manipulation). Erect
models have a more advanced optical system that renders the image in its natural or ‘correct’ ori-
entation. The image appears in the same orientation as the part being measured or evaluated. Opti-
cal comparators are similar to micrometers, except that they are not limited to simple dimensional
readings. Optical comparators can be used to detect burrs, indentations, scratches and incomplete
processing, as well as length and width measurements. In addition, a comparator’s screen can be
simultaneously viewed by more than one person and provide a medium for discussion, whereas
micrometers provide no external viewpoints. The screens of optical comparators typically range
from 10˝–12˝ diameters for small units to 36˝–40˝ for larger units. Even larger screen sizes are avail-
able on specialized units. Handheld devices are also available, which have smaller screens as would
be expected.
Profile (Optical) Projector Using this instrument, enlarged (magnified) images of small
shapes under test can be obtained that can be used for comparing shapes or profiles of relatively
small engineering components with an accurate standard or enlarged drawing. Figure 9.19 (a) ( Plate 8)
shows the optical arrangement in the profile projector. The light rays from the light source are col-
lected by the condenser lens from which they are transmitted as straight beams and are then inter-
rupted by the test object held between the condenser and projector lens. Then the magnified image
appears on the screen, which allows a comparison of the resultant image with the accurately pro-
duced master drawing as shown in Fig. 9.19 (a), (b), (c). Figure 9.19 (d) shows a view of the profile
projector's screen. It is provided with a protractor scale. The whole circle is divided into 360°, which acts as a main scale having 1° as the smallest division for measuring angles between two faces of the enlarged image. To increase the accuracy of the angular measurement, a vernier scale is provided.
254 Metrology and Measurement
Sharpness of the magnified image can be obtained by focusing and adjusting the distance between
the component and the projection lens. This instrument offers 10 to 100 times magnification. Spe-
cifically, it is used to examine the forms of tools, gauges (e.g., screw-thread gauges) and profiles of
very small-sized and critical components whose direct measurement is not possible (e.g., profiles of
gears in wrist watches, etc.). Apart from the profile projector, a toolmaker's microscope is also used as an optical comparator.
[Figure: optical comparator arrangement: light source, condenser lens, index, projection lens, scale, mirror, pivot, lever, plunger, lenses L1 to L4]
Zeiss Ultra Optimeter This type of optical comparator gives very high magnification, as it works on a double-magnification principle. As shown in Fig. 9.21, it consists of a light source whose rays fall on a green filter, which allows only green light to pass through; the light then passes through a condenser lens. The condensed rays are made incident on a movable mirror M1, reflected to mirror M2, and then reflected back to the movable mirror M1, giving a double reflection. The twice-reflected rays are focused at the graticule by passing through the objective lens.
In this arrangement, the magnification is calculated as follows. Let the distance from the plunger centre to the movable mirror M1 be x and the plunger movement be h; the angular movement of the mirror is then δθ = h/x. If f is the focal length of the objective lens, the movement of the scale is 2f δθ, i.e., 2f (h/x).
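As a quick numerical check of this relation (the x, h and f values below are illustrative, not taken from the text):

```python
def optimeter_scale_movement(h, x, f):
    """Scale movement of the ultra optimeter for a plunger movement h:
    the mirror tilts through h/x, and the double reflection plus the
    objective of focal length f give a scale movement of 2*f*(h/x).
    All lengths in the same unit (e.g., mm); small-angle approximation."""
    return 2.0 * f * (h / x)

def optimeter_magnification(x, f):
    # Overall magnification: scale movement per unit of plunger movement.
    return 2.0 * f / x

# e.g. x = 5 mm and f = 200 mm (assumed values) give a magnification of 80,
# so a 0.01 mm plunger movement moves the scale image by 0.8 mm.
```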
[Fig. 9.21: Zeiss ultra optimeter: light source, green filter, condenser lens, movable mirror M1, mirror M2, objective lens, graticule, eyepiece, plunger at distance x, workpiece]
Advantages It is more suitable for precision measurement as it gives higher magnification. It contains fewer moving parts and hence offers good accuracy. The scales are illuminated, so readings can be taken under room lighting conditions with no parallax error.
It is self-cleaning, making it appropriate for use in dirty environments and on dirty parts. It is a
non-contact method, so it doesn’t mar delicate part surfaces; and it can be used to gauge compressible
materials (such as textiles, film and non-wovens) without distortion.
[Figure: pneumatic gauging circuit: precision pressure reducer, non-adjustable jet valve, zero-setting, differential pressure sensor (piezoelectric), workpiece]
During operation, air gauges detect changes in pressure as the measuring jet approaches the workpiece. If the distance S to the measuring jet decreases, the pressure within the system increases, while the flow speed, and thus the volume flow, is reduced. If the dimension of the part under consideration is as per the required specifications, the air pressure acting on the opposite side of the pressure sensor (a piezoelectric sensor, or even a diaphragm or bellows) is balanced, no deflection results, and the metering device linked to it indicates zero. The pneumatic measuring method has a rather small linear measuring range: the procedure reaches its limits when the generated escape area A, which is defined by the recess distance S, becomes larger than the cross-sectional area of the measuring jet of diameter d. Figure 9.23 (b) shows the linear range in which the instrument should be used to get accurate readings.
Solex Comparator Its working is based on the principle that if air under constant pressure escapes through two orifices, and one of them is kept uniform, then the pressure change at the other, due to variation in the size of the workpiece under test, gives the reading. It is therefore also known as 'the Solex back-pressure system', which uses an orifice with the venturi effect to measure airflow. Figure 9.24 shows the essential elements of the pneumatic circuit:
Fig. 9.23 (a) Direction of air passing through the measuring head (jet of diameter d, gap S over the workpiece) (b) Performance characteristics of this instrument (pressure p versus gap s, showing the linear range)
[Figure labels: circuit points A, B, C, D; manometer tube in a water tank; height difference proportional to pressure]
Fig. 9.24 Pneumatic circuit diagram for solex pneumatic comparator
compressed air flows in from end A, passing through the restrictor (not shown in the figure), which maintains a constant pressure in the circuit equal to the height difference maintained in the manometer tube; the air then progresses to the dip tube. At the same time, part of the air (at the same pressure) passes through orifice B to the pneumatic measuring head at C. The pressure difference between B and C depends upon the orifice gap S [a similar condition; refer Fig. 9.23 (a)]. This method is used for gauging parts such as bores.
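The back-pressure behaviour can be sketched with a common textbook approximation for two orifices in series: with a control orifice of area A1 and an escape curtain area A2 = π·d·S, the back-pressure ratio is roughly p_b/p_s = 1/(1 + (A2/A1)²). This is an idealized model with assumed dimensions, not a description of the Solex circuit's actual calibration:

```python
import math

def back_pressure_ratio(gap_mm, d_control_mm, d_jet_mm):
    """Back-pressure ratio p_b/p_s for an idealized two-orifice pneumatic
    circuit, using the approximation 1 / (1 + (A2/A1)^2), where A1 is the
    control-orifice area and A2 = pi*d*S is the escape curtain area.
    Real gauges are calibrated against master parts, not this formula."""
    a1 = math.pi * (d_control_mm / 2.0) ** 2
    a2 = math.pi * d_jet_mm * gap_mm
    return 1.0 / (1.0 + (a2 / a1) ** 2)

# The usable, nearly linear part of the characteristic is conventionally
# taken where p_b/p_s lies between roughly 0.4 and 0.8.
```

As the gap S grows, more air escapes and the back pressure falls, which is the behaviour Fig. 9.23 (b) plots.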
Velocity Differential-Type Air Gauge with Bar Graph and Digital Display This type of pneumatic comparator operates on the principle of measuring the changes in the velocity of air caused by varying the obstruction to the air escape. It allows measuring results to be assessed and judged at a glance, which makes for easy readout. The column amplifier offers a broad range of functions for combining the signals from both static and dynamic measurements. It makes use of a venturi tube having different diameters at its two ends to convert the air-velocity changes within the system into minute pressure differentials. Measuring results (also clearly legible at a distance) are indicated by way of three-colour LEDs, as shown in Fig. 9.25 (Plate 9). When the programmable warning and tolerance limits are exceeded, the LEDs change their colour from green to yellow or red, accordingly. It includes an air/electronic converter unit permitting direct connection of pneumatic pick-ups to the column amplifier. When the volume of escaping air is reduced by a change in the gap between the surface of the part under test and the nozzle orifice of the pneumatic measuring head, the velocity of air downstream of the venturi decreases, and the resulting pressure variations produce a corresponding height change on the column. Display ranges of ±10, 30, 100, 300, 1000 and 3000 μm are available commercially.
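The warning/tolerance LED logic described above can be sketched as a small classifier. The limit values used in the comment are illustrative; in the real instrument they are user-programmed:

```python
def led_colour(deviation_um, warning_um, tolerance_um):
    """Classify a measured deviation the way the column display does:
    green inside the warning limits, yellow between the warning and
    tolerance limits, red outside tolerance. Assumes symmetric limits
    with warning_um < tolerance_um."""
    d = abs(deviation_um)
    if d <= warning_um:
        return "green"
    if d <= tolerance_um:
        return "yellow"
    return "red"

# e.g. with a warning limit of 5 um and a tolerance limit of 10 um,
# a deviation of -7 um lights the yellow LED.
```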
Advantages As the pneumatic measuring head does not come in direct contact with the workpiece
object, no wear takes place on the head. It works on the pneumatic principle (with compressed air), and
gives a very high magnification. It is particularly preferred for repetitive measurement requirement situa-
tions (e.g., in process-gauging application). Its self-aligning and centering tendency makes the pneumatic
comparator the best device for measuring diameters, ovality and taper of parts, either separately or simultaneously. In this type of comparator, the amplification process requires fewer moving parts, which increases the accuracy. Another advantage is that the jet of air helps in cleaning the part.
Limitations The non-uniform characteristic of the scale is one of its working limitations. Different measuring heads are needed for different jobs. Compared to electronic comparators, the speed of response is low. Portable apparatus is not easily available. Where a glass tube is used as the indicating device (column amplifier), high magnification is required to overcome meniscus error.
2. Using the capacitive principle: the displacement of a core attached to a measuring plunger made of ferrous material changes the air gap between the plates, modulating the frequency of the electrical oscillations in the circuit.
3. Using the resistive principle: the displacement of the measuring plunger stretches a grid of fine wire, increasing its length and thereby altering its electrical resistance.
The metrological term electronic comparator includes a wide variety of measuring instruments which
are capable of detecting and displaying dimensional variations through combined use of mechani-
cal and electronic components. Such variations are typically used to cause the displacement of a
mechanical contacting (sensing) member with respect to a preset position, thereby originating pro-
portional electrical signals, which can be further amplified and indicated. Comparator gauges are the
basic instruments for comparison by electronically amplified displacement measurement. Very light
force can be used in electronic comparators, where almost no mechanical friction is required to be
overcome. This characteristic is of great value when measuring workpieces with very fine finish that
easily could be marred by heavier gauge contact. Consider the example of the test-indicator-type
electronic comparator as an electronic height gauge (shown in Fig. 9.26) ( Plate 9). These gauges carry
a gauging head attached to a pivoting, extendable, and tiltable cross bar of a gauging stand (refer Fig.
3.11). For the vertical adjustment of the measuring head (probe/scriber), the columns of height-
gauge stand are often equipped with a rack-and-pinion arrangement or with a friction roller guided in
a groove. Instead of a cross bar, some models are equipped with only a short horizontal arm; fine adjustment is achieved by means of a fixture spring in the base of the stand which, when actuated by a thumb screw, imparts a tilt motion to the gauge column. Electronic height gauges are generally used for comparative measurement of the linear distance (height) of an object: the surface being measured must lie in a horizontal plane, and the distance to be determined is referenced from a surface plate representing a plane parallel to the part surface on which the measurement is carried out. The size of the dimension being measured is determined by comparing it with the height of a gauge-block stack. Modern digital electronic technology permits an absolute height-measuring instrument to work as a perfect comparator because, with the push of a button, the digital display can be zeroed at any position of the measuring probe. Applications of electronic test-indicator-type comparators are essentially similar to those of mechanical test indicators, measuring geometric interrelationships such as run-out, parallelism, flatness, wall thickness and
various others. Electronic internal comparators are used for external length or diameter measurement
with similar degree of accuracy. A particular type of mechanical transducer has found application
in majority of the currently available electronic gauges. This type of transducer is the linear variable
differential transformer (LVDT), and its application instrument is discussed in the next sub-article.
Inductive (Electronic) Probes This instrument works on the first principle, i.e., inductive
principle. The effect of measurements with inductive probes is based on the changing position of
a magnetically permeable core inside a coil pack. Using this principle, we can distinguish between half-bridges (differential inductors) and LVDTs (linear variable differential transformers). New models apply high-linearity, patented transducers (VLDT, Very Linear Differential Transducer),
Construction of Inductive Probe
[Figure: probe cross-section showing primary and secondary windings]
1. Stylus Various styli with M2.5 thread are used.
[Figure labels: (1) stylus, (2) sealing bellow, (3) twist lock, (4) clearance stroke adjustment, (5) rotary stroke bearing, (6) measuring force spring, (7) coil system, (8) probe sleeve, (10) connecting cable, (11) 5-channel DIN-plug]
7. Coil system The patented VLDT (Very Linear Differential Transducer) coil system allows for
extremely high linearity values.
8. Probe sleeve To shield the probe against EMC influences, the high-quality nickel–iron alloy Mumetall
is used.
9. Bending cap The normal axial cable outlet of the standard probes can be easily changed to a radial
cable outlet by mounting a slip-on cap.
10. Connecting cable Only resistant PU cables are used for the 2.5-m (8.20 ft) long standard probe cable.
11. 5-channel DIN plug Worldwide, this plug is the most frequently used for connection of inductive probes
to amplifiers. Depending on the compatibility, however, different pin assignments have to be observed.
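The half-bridge/LVDT behaviour these probes rely on can be sketched with a simple idealized model: within its linear range, the demodulated output is proportional to the core displacement from the null position, with the sign indicating direction. The sensitivity and range values below are assumptions for illustration, not a Mahr specification:

```python
def lvdt_output_mv(displacement_mm, sensitivity_mv_per_mm=100.0, linear_range_mm=2.0):
    """Idealized LVDT: demodulated output voltage proportional to core
    displacement from null, sign giving direction. Outside the linear
    range this model simply saturates; real probes are specified only
    within their measuring range."""
    x = max(-linear_range_mm, min(linear_range_mm, displacement_mm))
    return sensitivity_mv_per_mm * x
```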
Figures 9.29 (a) and (b) show thickness measurement: a single inductive probe is used for all kinds of direct measurements on cylindrical and flat workpieces, applied in the same way as dial indicators, mechanical dial comparators or lever gauges; (c) thickness measurement independent of workpiece form and mounting; (d) height difference between two steps; (e) axial run-out as a single measurement; (f) radial run-out as a single measurement; (g) coaxiality measurement on two shaft ends; (h) roundness measurement independent of the eccentricity, as a sum measurement; (i) taper measurement independent of the total workpiece size; (j) perpendicularity measurement independent of workpiece position; (k) measurement of eccentricity independent of diameter, as a differential measurement; and (l) measurement of wall thickness with a lever-type probe. The probe lever is protected by friction clutches against excessive strain and is particularly suitable for inside measurements.
Inductive Dial Comparator Another example of the inductive principle used for compara-
tive measurement is an inductive dial comparator. Now, however, there are digital electronic indicators
that are about the same size and price as dial indicators. Gauges equipped with digital indicators may
possess all of the benefits of an amplifier comparator, including automatic zeroing and offset func-
tions, and data export, at a fraction of the cost. Digital readouts are not necessarily superior to analog
dials, however. Analog displays are ergonomically superior in many applications. For example, users
can observe size variation trends more readily. They can also quickly sense whether a part is good or
bad without having to go through the intellectual process of comparing the number on a digital display
to the allowable tolerance specification. Some electronic indicators now incorporate analog displays to
replicate this benefit.
The electronic snap gauge as shown in Fig. 9.30 (c) is used for rapid measurements of cylindrical
components like shafts, pins and shanks, and for thickness and length measurement. The patented ‘Channel Lock’ design assures parallelism over the entire adjustment range. It has an adjustable centering stop. Large, square tungsten-carbide anvils, 15 × 15 mm, with chamfers assist the locating
of the component to be checked. The lift-off lever for retraction of measuring anvil (Model 301-P),
permits contact-free introduction of the workpieces. All adjustments are accomplished by using the
enclosed socket head spanner.
[Fig. 9.29 (g)-(l): measurement set-ups as listed above; Fig. 9.30: Mahr digital indicator display]
Advantages These comparators have high sensitivity, expressed as the smallest input (contact-member deflection) that produces a proportional signal. They contain very few moving parts, hence there is less friction and wear. Repeatability is ensured, as measurement is done in linear units computed on a 3σ basis. They have a wide range of magnification. They are small, compact and convenient to use, set up and operate. Readings can be displayed by various means (analog or digital), used alternately or several simultaneously. A digital display minimizes reading and interpretation errors.
Limitations External power source is required. The cost of this type of comparator is more than
the mechanical type. Fluctuations in voltage or frequency of electric supply may affect the results. Heat-
ing of coils in the measuring instrument may cause drift.
Review Questions
speed or induce lower cutting forces; but it may not produce a good surface finish. Where the finish produced on the part is a cause of rejection, this consideration has an effect on the cost also. If a higher surface finish is obtained on the material under consideration, under a given set of machining conditions, then we could judge that its machinability is good.
It is a well-known fact that no surface in reality follows a true geometrical shape. The most common method to check the surface finish is to compare the test surface visually and by touching it against a standard surface. But nowadays many optical instruments, viz., the interferometer, light-slit microscope, etc., and mechanical instruments, viz., the Talysurf and Tomlinson surface recorders, are used to determine numerical values of the surface finish of any surface.
10.1 INTRODUCTION
On the earth’s surface, it is observed that discontinuities or joints do not have smooth surface structures
and they are covered with randomly distributed roughness. The effective role of surface roughness on
the behavior of discontinuities and on shear strength makes the surface roughness an important factor
that has to be taken into account right from the design stage to the final assembled product. New
metrological studies, supported by new methods and technological advances, take into account surface
roughness and its effect on the behavior of discontinuities. In this chapter, techniques that are used in
measurement of surface roughness are discussed.
Surface metrology is of great importance in specifying the function of a surface. A significant pro-
portion of component failure starts at the surface due to either an isolated manufacturing discontinu-
ity or gradual deterioration of the surface quality. The most important parameter describing surface
integrity is surface roughness. In the manufacturing industry, a surface must be within certain limits of
roughness. Therefore, measuring surface roughness is vital to quality control of machining a workpiece.
In short, we measure surface texture for two main reasons:
The quality of a machined surface is characterized by the accuracy of its manufacture with respect
to the dimensions specified by the designer. Every machining operation leaves a characteristic
Fig. 10.2 Surface characteristics [figure labels: waviness height, waviness width, roughness height, roughness width, roughness-width cutoff]
Roughness Roughness consists of surface irregularities which result from the various machining
processes. These irregularities combine to form surface texture. It is defined as a quantitative measure
of the process marks produced during the creation of the surface and other factors such as the struc-
ture of the material.
Roughness Height It is the height of the irregularities with respect to a reference line. It is mea-
sured in millimetres or microns or micro-inches. It is also known as the height of unevenness.
Roughness Width The roughness width is the distance parallel to the nominal surface between successive peaks or ridges which constitute the predominant pattern of the roughness. It is measured in millimetres.
Waviness This refers to the irregularities which are outside the roughness width cut-off values.
Waviness is the widely spaced component of the surface texture. This may be the result of workpiece or
tool deflection during machining, vibrations or tool runout. In short, it is a longer wavelength variation
in the surface away from its basic form (e.g., straight line or arc).
Waviness Height Waviness height is the peak-to-valley distance of the surface profile, measured
in millimetres.
Difference between Roughness, Waviness and Form We analyze below the three main
elements of surface texture—roughness, waviness and form.
Roughness This is usually the process marks or witness marks produced by the action of the cut-
ting tool or machining process, but may include other factors such as the structure of the material.
Waviness This is usually produced by instabilities in the machining process, such as an imbalance
in a grinding wheel, or by deliberate actions in the machining process. Waviness has a longer wavelength
than roughness, which is superimposed on the waviness.
Form This is the general shape of the surface, ignoring variations due to roughness and waviness.
Deviations from the desired form can be caused by many factors. For example, the part being held too
firmly or not firmly enough, inaccuracies of slides or guide ways of machines, or due to stress patterns
in the component.
Roughness, waviness and form (refer Fig. 10.3) are rarely found in isolation. Most surfaces are a com-
bination of all three and it is usual to assess them separately. One should note that there is no set point at
which roughness becomes waviness or vice versa, as this depends on the size and nature of the applica-
tion. For example, the waviness element on an optical lens may be considered as roughness on an auto-
motive part. Surface texture refers to the locally limited deviations of a surface from its ideal shape. The
deviations can be categorized on the basis of their general patterns. Consider a theoretically smooth, flat
surface. If this has a deviation in the form of a small hollow in the middle, it is still smooth but curved.
Two or more equidistant hollows produce a wavy surface. As the spacing between each wave decreases,
the resulting surface would be considered flat but rough. In fact, surfaces having the same height of
irregularities are regarded as curved, wavy, or rough, according to the spacing of these irregularities.
In order to separate the three elements, we use filters. On most surface-texture measuring instru-
ments, we can select either roughness or waviness filters. Selecting a roughness filter will remove
waviness elements, leaving the roughness profile for evaluation. Selecting a waviness filter will remove
roughness elements, leaving the waviness profile for evaluation. Separating the roughness and waviness
is achieved by using filter cut-offs.
Roughness Width Cut-Off Roughness width cut-off is the greatest spacing of respective sur-
face irregularities to be included in the measurement of the average roughness height. It should always
be greater than the roughness width in order to obtain the total roughness height rating.
In basic terms, a cut-off is a filter and is used as a means of separating or filtering the wavelengths
of a component. Cut-offs have a numerical value which when selected reduce or remove the unwanted
wavelengths on the surface. For example, a roughness filter cut-off with a numeric value of 0.8 mm
will allow wavelengths below 0.8 mm to be assessed with wavelengths above 0.8 mm being reduced in
amplitude; the greater the wavelength, the more severe the reduction. For a waviness filter cut-off with
a numeric value of 0.8 mm, wavelengths above 0.8 mm will be assessed with wavelengths below 0.8 mm
being reduced in amplitude.
There is a wavelength at which a filter is seen to have some pre-determined attenuation (e.g., 50%
for a Gaussian filter). In roughness measurement there are two filters: a long-wavelength filter Lc, and a short-wavelength filter Ls which suppresses wavelengths shorter than those of interest. There are internationally recognized cut-offs of varying lengths: 0.08 mm, 0.25 mm, 0.8 mm, 2.5 mm and 8 mm.
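The roughness/waviness separation described above can be sketched as a Gaussian profile filter in the spirit of ISO 16610-21, where the constant α = √(ln 2/π) gives 50% amplitude transmission at the cut-off wavelength. This is a simplified illustration, not a standards-compliant implementation: the kernel is truncated at one cut-off length and simply re-normalized at the profile ends.

```python
import math

def gaussian_weights(cutoff, dx):
    # Gaussian weighting function, truncated at +/- one cut-off length;
    # alpha is chosen so transmission is 50% at the cut-off wavelength.
    alpha = math.sqrt(math.log(2.0) / math.pi)
    half = int(round(cutoff / dx))
    w = [math.exp(-math.pi * ((k * dx) / (alpha * cutoff)) ** 2)
         for k in range(-half, half + 1)]
    s = sum(w)
    return [v / s for v in w]

def separate(profile, cutoff, dx):
    """Split equally spaced profile heights into waviness (the Gaussian
    mean line) and roughness (profile minus mean line). Near the ends
    the kernel is shortened and re-normalized."""
    w = gaussian_weights(cutoff, dx)
    h, n = len(w) // 2, len(profile)
    waviness = []
    for i in range(n):
        acc = wsum = 0.0
        for k, wk in enumerate(w):
            j = i + k - h
            if 0 <= j < n:
                acc += wk * profile[j]
                wsum += wk
        waviness.append(acc / wsum)
    roughness = [p - m for p, m in zip(profile, waviness)]
    return waviness, roughness
```

With a 0.8 mm cut-off, an 8 mm wave passes almost entirely into the waviness profile, while a 0.1 mm wave passes almost entirely into the roughness profile, which is exactly the filtering behaviour the text describes.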
In general, you select a roughness cut-off in order to assess the characteristics of the surface you
require. These are usually the process marks or witness marks produced by the machining process. To
produce a good statistical analysis of these process marks, you would normally select a cut-off in the
order of 10 times the wavelengths under consideration. These wavelengths may be the turning marks
on the component.
Note: Cut-offs should be determined by the nature of the component and not by the length of the component. Choosing the wrong cut-off will, in some cases, severely affect the outcome of the result.
Sample Length After the data has been filtered with a cut-off, we then sample it. Sampling is done by breaking the data into equal sample lengths. The sample lengths (as shown in Fig. 10.4) have the same numeric value as the cut-off; in other words, if you use a 0.8 mm cut-off, the filtered data will be broken down into 0.8 mm sample lengths. The sample lengths are chosen in such a way that a good statistical analysis can be made of the surface. In most cases, five sample lengths are used for analysis.
[Fig. 10.4: traverse length divided into sampling lengths (cut-offs)]
Assessment Length An assessment length is the amount of data left after filtering that is then used for analysis. The measurement length is dictated by the numerical value of the cut-off, which itself is dictated by the type of surface under inspection. Typically, a measurement may consist of a traverse of 6-7 times the cut-off selected; for example, 7 cut-offs at 0.8 mm = 5.6 mm. One or two cut-offs will then be removed according to the filter type, and the remaining cut-offs used for assessment. This only applies when measuring roughness. For measuring waviness or primary profiles, the data length is chosen according to the application and the nature of the surface; in general, it needs to be sufficient to give a true representation of the texture of the surface.
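The arithmetic above (five sampling lengths for assessment, plus the cut-offs consumed by the filter, following the "7 cut-offs at 0.8 mm = 5.6 mm" example) can be sketched as:

```python
def plan_traverse(cutoff_mm, n_samples=5, extra_cutoffs=2):
    """Plan a roughness measurement: the assessment length is n_samples
    sampling lengths (each equal to the cut-off), and the traverse adds
    the extra cut-offs removed by the filter at the ends. The split of
    the two extra cut-offs is an assumption for illustration."""
    assessment_mm = n_samples * cutoff_mm
    traverse_mm = (n_samples + extra_cutoffs) * cutoff_mm
    return assessment_mm, traverse_mm

# For a 0.8 mm cut-off: assessment = 4.0 mm, traverse = 5.6 mm.
```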
The table below shows sixteen equally spaced height readings X (in micro-inches) and their squares X², used to compute the average (AA) and root-mean-square (RMS) roughness values:

X      X²
3      9
15     225
20     400
33     1089
25     625
18     324
5      25
10     100
15     225
15     225
5      25
11     121
14     196
13     169
27     729
8      64
Total  237    4551

AA = 237/16 = 14.8 micro-in
RMS = (4551/16)^(1/2) = 16.9 micro-in
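Recomputing both values from the sixteen listed readings:

```python
import math

# The sixteen profile-height readings (micro-inches) from the table above
readings = [3, 15, 20, 33, 25, 18, 5, 10, 15, 15, 5, 11, 14, 13, 27, 8]

def aa_value(xs):
    """Arithmetic-average (AA) roughness: the mean of the readings."""
    return sum(xs) / len(xs)

def rms_value(xs):
    """Root-mean-square (RMS) roughness: sqrt of the mean of the squares."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))
```

The squares sum to 4551, giving RMS ≈ 16.9 micro-in, and the readings sum to 237, giving AA ≈ 14.8 micro-in.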
Whenever two machined surfaces come in contact with one another, the quality of the mating parts
plays an important role in their performance and wear. The height, shape, arrangement and direction
of these surface irregularities on the workpiece depend upon a number of factors:
The final surface roughness might be considered as the sum of two independent effects:
1. The ideal surface roughness is a result of the geometry of tool and feed rate, and
2. The natural surface roughness is a result of the irregularities in the cutting operation.
[Boothroyd and Knight, 1989].
Factors such as spindle speed, feed rate and depth of cut, which control the cutting operation, can be set up in advance. However, factors such as tool geometry, tool wear, chip loads and chip formation, or the material properties of both tool and workpiece, are uncontrolled (Huynh and Fan, 1992). Moreover, the occurrence of chatter or vibrations of the machine tool, defects in the structure of the work material, wear of the tool, and irregularities of chip formation all contribute to surface damage in practice during machining (Boothroyd and Knight, 1989).
Fig. 10.5 Idealized model of surface roughness [figure labels: feed f (two half-feeds f/2), φ: major cutting-edge angle, β: working minor cutting-edge angle]
Practical cutting tools are usually provided with a rounded corner, and Fig. 10.6 shows the surface produced by such a tool under ideal conditions. It can be shown that the roughness value is closely related to the feed and corner radius by the following expression:
Ra = 0.0321 f²/r
where r is the corner radius.
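A quick numerical check of this expression (the feed and nose-radius values are illustrative, not from the text):

```python
def ideal_ra(feed, nose_radius):
    """Ideal (geometric) centre-line-average roughness for a round-nosed
    tool, Ra = 0.0321 * f^2 / r. Ra is returned in the same length unit
    as the feed f and the corner radius r."""
    return 0.0321 * feed ** 2 / nose_radius

# e.g. a 0.25 mm/rev feed with a 0.8 mm corner radius (assumed values):
ra_um = ideal_ra(0.25, 0.8) * 1000.0   # convert mm to micrometres, ~2.51 um
```

Note how strongly the feed dominates: halving the feed cuts the ideal Ra by a factor of four.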
1. Statistical Descriptors These give the average behavior of the surface height. For example,
average roughness Ra; the root mean square roughness Rq; the skewness Sk and the kurtosis K.
2. Extreme Value Descriptors These depend on isolated events. Examples are the maximum peak height Rp, the maximum valley depth Rv, and the maximum peak-to-valley height Rmax.
3. Texture Descriptors These describe variations of the surface based on multiple events. An
example for this descriptor is the correlation length.
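The statistical and extreme-value descriptors listed above can be computed from a sampled profile as follows (heights are taken about the mean line; this is a plain discrete sketch, not a standards-grade implementation):

```python
import math

def surface_descriptors(profile):
    """Texture descriptors from equally spaced profile heights:
    statistical (Ra, Rq, skewness Sk, kurtosis K) and extreme-value
    (Rp, Rv, Rmax) measures, about the mean line."""
    n = len(profile)
    mean = sum(profile) / n
    z = [p - mean for p in profile]              # deviations from mean line
    ra = sum(abs(v) for v in z) / n              # average roughness
    rq = math.sqrt(sum(v * v for v in z) / n)    # RMS roughness
    sk = sum(v ** 3 for v in z) / (n * rq ** 3)  # skewness
    ku = sum(v ** 4 for v in z) / (n * rq ** 4)  # kurtosis
    rp, rv = max(z), -min(z)                     # peak height, valley depth
    return {"Ra": ra, "Rq": rq, "Sk": sk, "K": ku,
            "Rp": rp, "Rv": rv, "Rmax": rp + rv}
```

For a pure sine-wave profile of amplitude a these give the textbook values Ra = 2a/π, Rq = a/√2, Sk = 0 and K = 1.5, which makes a convenient sanity check.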
Among these descriptors, the Ra measure is one of the most effective surface-roughness mea-
sures commonly adopted in general engineering practice. It gives a good general description of the
height variations in the surface. Figure 10.6 shows a cross section through the surface. A mean line
is first found that is parallel to the general surface direction and divides the surface in such a way
that the sum of the areas formed above the line is equal to the sum of the areas formed below the
line. The surface roughness Ra is now given by the sum of the absolute values of all the areas above
and below the mean line divided by the sampling length. Therefore, the surface roughness value is
given by
Ra = [area (abc) + area (cde)] / f
where f is the feed.
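The mean-line construction can be checked numerically. The saw-tooth profile below is an assumed idealization of the feed marks (not data from the text); for such a profile the mean line halves the peak-to-valley height and the construction gives the classical result Ra = Rmax/4:

```python
def triangle_profile(rmax, n_per_feed=200, n_feeds=4):
    """Idealized saw-tooth surface left by the tool: height rises linearly
    from 0 to rmax over each feed mark. Illustrative model only."""
    return [rmax * (i % n_per_feed) / n_per_feed
            for i in range(n_per_feed * n_feeds)]

def ra_mean_line(profile):
    """Ra via the mean-line construction described above: the mean line
    equalizes the areas above and below, and Ra is the mean absolute
    deviation (total area above plus below, divided by sampled length)."""
    mean = sum(profile) / len(profile)
    return sum(abs(p - mean) for p in profile) / len(profile)
```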
Fig. 10.6 A cross-section through the surface [figure labels: (a) tool, feed f, work surface, machined surface, working major cutting-edge angle kre, working minor cutting-edge angle kre'; (b) profile showing Rmax and points a, b, c, d, e over intervals of f/2]
With an increase in globalization, it has become even more important to control the comparability of
results from different sources. Stylus instruments have been used in the assessment of surface texture for
some sixty years. Initially, simple analog instruments were used, employing an amplifier, chart recorder
and meter to give graphical and numerical output. Analog filters (simple electronic R-C circuits) were
used to separate the waviness and roughness components of the texture. In order to address the comparability issue, ISO introduced the concept of 'bandwidth' in the late 1990s. Under this concept, the shorter wavelengths used in surface-roughness analysis are constrained by a short-wave filter (known as the s-filter; refer ISO 3274:1996). The bandwidth is then limited in a controlled way that relates directly to surface features, rather than being limited by the (electrical) bandwidth of the measuring system.
Inspection and assessment of surface roughness of machined workpieces can be carried out by
means of different measurement techniques. These methods can be ranked into the following classes.
2. Tomlinson Surface Meter The instrument is named after its designer, Dr Tomlinson. It is comparatively economical and reliable, and uses a mechano-optical magnification method. The body of the instrument carries the skid unit, whose height is adjusted to position the diamond-tipped stylus conveniently. All motions of the stylus except the vertical one are restricted by a leaf spring and a coil spring, as shown in Fig. 10.8. The tension in the coil spring causes a similar tension in the leaf spring; together they maintain the balance that holds a lapped cross-roller in position between the stylus and a pair of parallel fixed rollers, as shown in the plan view. A light spring-steel arm attached to the cross-roller carries a diamond at its tip, which bears against a smoked-glass screen. During the actual measurement of surface finish, the instrument body is drawn across the surface by a screw rotated at 1 r.p.m. by a synchronous motor, while the glass remains stationary. The surface irregularities cause the diamond probe, and in turn the stylus, to move in the vertical direction; this makes the cross-roller pivot about a fixed point, magnifying the movement of the arm carrying the scriber and producing a trace on the smoked-glass screen. The trace can be further magnified 50X or 100X by an optical projector for examination.
[Fig. 10.10 Block diagram: oscillator, amplifier, demodulator, filter, meter and recorder]
flowing through the coil is modulated. The output (modulated) of the bridge is further demodulated so that
the current flow is directly proportional to the vertical displacement of the stylus (refer Fig. 10.10). This
output causes a pen recorder to produce a permanent record. Nowadays, microprocessor-based surface-
roughness measuring instruments are used. One such instrument, 'MarSurf', is shown in Fig. 10.11 along
with its specifications, which illustrate the capabilities of such an instrument, viz., digital output and
print-outs of the form of the surface under consideration.
Fig. 10.12 (a) The measuring principle of the non-contact technique (b) Lasercheck non-contact
surface-roughness measurement gauge [packaged in a compact 76 mm × 35 mm × 44 mm
portable head that has a mass of only 0.45 kg. The device will perform for years with
no maintenance and no fragile, expensive stylus tip to protect or replace. The system
performs measurements in a fraction of a second, over a range of 0.006 µm to greater
than 2.54 µm Ra roughness]
Metrology of Surface Finish 281
and shadowing effects are neglected. The photosensor of a CCD camera placed in the focal plane of
a Fourier lens is used for recording speckle patterns. Assuming Cartesian coordinates x, y, z , a rough
surface can be represented by its ordinates Z (x, y) with respect to an arbitrary datum plane having
transverse coordinates (x, y). Then the r. m. s. surface roughness can be defined and calculated.
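The definition alluded to here can be written out explicitly. A sketch, assuming the datum plane is chosen so that Z has zero mean over the measured area A (σ here denotes the r.m.s. roughness, the areal analogue of the profile parameter Rq):

```latex
\sigma = \sqrt{\frac{1}{A}\iint_A Z^{2}(x,y)\,dx\,dy}
```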
1. Machine Vision In this technique, a light source is used to illuminate the surface with a digital
system to view the surface, and the data is sent to a computer to be analyzed. The digitized data is then
used with a correlation chart to get actual roughness values.
2. Inductance Method An inductance pickup is used to measure the distance between the sur-
face and the pickup. This measurement gives a parametric value that may be used to give a comparative
roughness. However, this method is limited to measuring magnetic materials.
1. Securing the workpiece depends on the component size and weight. In most cases, very light stylus
forces are used to measure surface finish, and if possible clamping is avoided. If clamping is necessary
then the lightest restraint should be used.
2. It is best to level the surface to minimize any error. However, on most computer-based measuring sys-
tems, it is possible to level the surface after measuring by using software algorithms. Some instruments
have wide gauge ranges, and in these circumstances leveling may not be so critical because the compo-
nent stays within the gauge range. For instruments with small gauge ranges, leveling may be more critical.
However, in all circumstances, leveling the part prior to measurement is usually the best policy.
There are two ways of measuring soft or easily marked surfaces. One is to use non-contact measuring
instruments, such as those with laser or optical transducers. However, some of these instruments can be
limited in certain applications. If a stylus-type instrument must be used, a replica of the surface can be
produced, allowing contact to be made with the replica instead.
The stylus tip can have an effect on the measurement results. It can act as a mechanical filter. In
other words, a large stylus tip will not fall down a narrow imperfection (high frequency roughness). The
larger the stylus, the more these shorter wavelengths will be reduced. A good example of a typical stylus
would be a 90° conisphere-shaped stylus with a tip radius of 2 µm (0.00008 in). This will be suitable for
most applications. Other stylus tip sizes are available and are component dependent in their use. For
example, for very small imperfections, a small stylus radius may be used.
282 Metrology and Measurement
Effects of the Stylus Tip The stylus tip radius is a key feature that is often overlooked. As-
suming that a conisphere stylus is being used, the profile recorded by the instrument will in effect be the
locus of the centre of a ball, whose radius is equal to that of the stylus tip, as it is rolled over the surface.
This action broadens the peaks of the profile and narrows the valleys. For simplicity, if we consider the
surface to be a sine wave then this distortion is dependent both on the wavelength and the amplitude.
For a given wavelength (of similar order of size to the stylus tip), the stylus tip will be unable to
reach the troughs of the sine wave if the amplitude is greater than a maximum limiting value. For
amplitudes above this limiting value, the measured peak-to-peak amplitude values will be attenu-
ated. It is worth mentioning in passing that the stylus tip also introduces distortion into other
parameters, because the sinusoidal shape of the surface is not preserved in the measured profile
(refer Fig. 10.13). This can lead to discrepancies between measurements taken with different stylus
radii, and so it is important to state the stylus tip size whenever this differs from the ISO recom-
mendations. Of course, the situation will be even more complicated for more typical engineering
surfaces.
The purpose of a parameter is to generate a number that can characterize a certain aspect of the sur-
face with respect to a datum, removing the need for subjective assessment. However, it is impossible
to completely characterize a surface with a single parameter. Therefore, a combination of parameters is
normally used. Parameters can be separated into three basic types:
a. Amplitude Parameters These are measures of the vertical characteristics of the surface
deviations.
b. Spacing Parameters These are measures of the horizontal characteristics of the surface
deviations.
c. Hybrid Parameters These are a combination of both the vertical and horizontal characteristics
of the surface deviations.
[Figure: profiles with kurtosis Rku < 3, Rku = 3 and Rku > 3]
Rz (JIS) This is also known as the ISO 10-point height parameter in ISO 4287/1-1984. It is numerically
the average height difference between the five highest peaks and the five lowest valleys within the
sampling length.
Rz and Rtm Rz = Rp (peak roughness) + Rv (depth of the deepest valley in the roughness profile), and
is the maximum peak-to-valley height of the profile in a single sampling length.
Rtm is the equivalent of Rz when more than one sample length is assessed: it is the sum of the (Rp + Rv)
values in each sample length divided by the number of sample lengths.
Rz1max is the largest of the individual peak-to-valley values from each sample length.
R3y, R3z R3z is the vertical mean from the third highest peak to the third lowest valley in a sample
length over the assessment length. DB N311007 (1983)
Ra—Average Roughness This is also known as Arithmetic Average (AA), Centre Line Average
(CLA), and Arithmetical Mean Deviation of the profile. The average roughness is the area between the
roughness profile and its mean line, or the integral of the absolute value of the roughness profile height
over the evaluation length:
L
1
L ∫0
Ra = r (x ) d x
When evaluated from digital data, the integral is normally approximated by a trapezoidal rule:
$$R_a = \frac{1}{N}\sum_{n=1}^{N} \lvert r_n\rvert$$
Graphically, the average roughness is the area (shown below) between the roughness profile and its
centreline divided by the evaluation length (normally, five sample lengths with each sample length equal
to one cut-off ):
Fig. 10.15 Average roughness (Ra is an integral of the absolute value of the roughness
profile. It is the shaded area divided by the evaluation length L. Ra is the most commonly used
roughness parameter.)
The average roughness is by far the most commonly used parameter in surface-finish measurement.
The earliest analog roughness-measuring instruments measured only Ra by drawing a stylus continu-
ously back and forth over a surface and integrating (finding the average) electronically. It is fairly easy
to take the absolute value of a signal and to integrate a signal using only analog electronics. That is the
main reason Ra has such a long history.
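The digital formula for Ra is easy to implement directly. A minimal sketch in plain Python (the mean-line subtraction here is an assumption standing in for the filtering an instrument would normally perform):

```python
def average_roughness(r):
    """Ra: mean absolute height of the profile samples r_n,
    measured from the mean line."""
    n = len(r)
    # Subtract the mean so heights are referenced to the mean line.
    mean = sum(r) / n
    return sum(abs(v - mean) for v in r) / n

# Example: a square-wave profile alternating +1/-1 has Ra = 1.
profile = [1.0, -1.0] * 8
print(average_roughness(profile))  # 1.0
```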
It is a common joke in surface-finish circles that ‘RA’ stands for regular army, and ‘Ra’ is also the
chemical symbol for radium; only Ra is the average roughness of a surface. This emphasizes that the
‘a’ is a subscript. Older names for Ra are CLA and AA meaning centreline average and area aver-
age.
An older means of specifying a range for Ra is RHR. This is a symbol on a drawing specifying a
minimum and maximum value for Ra.
RHR min–max, e.g., RHR 10–20
(Older drawings may have used this notation to express an allowable range for Ra. This
notation is now obsolete.)
For example, the second symbol above means that Ra may fall between 10 μ and 20 μ. Ra does not
give all the information about a surface. For example, Fig. 10.16 shows three surfaces that all have the
same Ra, but you need no more than your eyes to know that they are quite different surfaces. In some
applications they will perform very differently as well.
Fig. 10.16 Three surfaces all have the same Ra, even though the eye immediately
distinguishes their different general shapes
These three surfaces differ in the shape of the profile—the first has sharp peaks, the second deep
valleys, and the third has neither. Even if two profiles have similar shapes, they may have a different
spacing between features. In Fig. 10.17 too, the three surfaces all have the same Ra.
If we want to distinguish between surfaces that differ in shape or spacing, we need to calculate other
parameters for a surface that measure peaks and valleys and profile shape and spacing. The more com-
plicated the shape of the surface we want and the more critical the function of the surface, the more
sophisticated we need to be in measuring parameters beyond Ra.
Rq—Root-Mean-Square Roughness The rms roughness is defined over the evaluation length as

$$R_q = \sqrt{\frac{1}{L}\int_0^L r^{2}(x)\,dx}$$
For a pure sine wave of any wavelength and amplitude, Rq is proportional to Ra; it’s about 1.11 times
larger. Older instruments made use of this approximation by calculating Rq with analog electronics
(which is easier than calculating with digital electronics) and then multiplying by 1.11 to report Rq.
However, real profiles are not simple sine waves, and the approximation often fails miserably. Modern
instruments either digitize the profile or do not report Rq. There is never any reason to make the
approximation that Rq is proportional to Ra.
Rq has now been almost completely superseded by Ra in metal machining specifications. Rq still has
value in optical applications where it is more directly related to the optical quality of a surface.
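Both the digital Rq and the sine-wave ratio quoted above can be checked directly. A sketch in plain Python (the synthetic sine profile is purely illustrative):

```python
import math

def rms_roughness(r):
    """Rq: root-mean-square of profile heights about the mean line."""
    n = len(r)
    mean = sum(r) / n
    return math.sqrt(sum((v - mean) ** 2 for v in r) / n)

def average_roughness(r):
    """Ra: mean absolute height about the mean line."""
    n = len(r)
    mean = sum(r) / n
    return sum(abs(v - mean) for v in r) / n

# For a pure sine wave, Rq/Ra = pi / (2*sqrt(2)), about 1.11.
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
print(rms_roughness(sine) / average_roughness(sine))  # ~1.11
```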
Rt , Rp , and Rv The peak roughness Rp is the height of the highest peak in the roughness profile
over the evaluation length (p1 below). Similarly, Rv is the depth of the deepest valley in the roughness
profile over the evaluation length (v1). The total roughness, Rt, is the sum of these two, or the vertical
distance from the deepest valley to the highest peak.
$$R_p = \max_{0\le x\le L} r(x), \qquad R_v = \Bigl|\min_{0\le x\le L} r(x)\Bigr|$$
Fig. 10.18 Rt , Rp , and Rv
Rt = Rp + Rv
These three extreme parameters will succeed in finding unusual conditions: a sharp spike or burr on
the surface that would be detrimental to a seal for example, or a crack or scratch that might be indicative
of poor material or poor processing.
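These definitions translate directly into code. A sketch, with profile heights given as samples (the mean-line subtraction stands in for the instrument's referencing):

```python
def extreme_params(r):
    """Return (Rp, Rv, Rt) of a profile, heights about the mean line."""
    n = len(r)
    mean = sum(r) / n
    heights = [v - mean for v in r]
    rp = max(heights)       # highest peak
    rv = -min(heights)      # deepest valley, as a positive depth
    return rp, rv, rp + rv  # Rt = Rp + Rv

rp, rv, rt = extreme_params([3.0, -1.0, 2.0, -4.0, 0.0])
print(rp, rv, rt)  # 3.0 4.0 7.0
```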
Rtm, Rpm and Rvm These three parameters are mean parameters, meaning they are averages of the
sample lengths. For example, define the maximum height for the i-th sample length as Rpi. Then Rpm is
$$R_{pm} = \frac{1}{M}\sum_{i=1}^{M} R_{pi}$$
Similarly,
$$R_{vm} = \frac{1}{M}\sum_{i=1}^{M} R_{vi}$$
and
$$R_{tm} = \frac{1}{M}\sum_{i=1}^{M} R_{ti} = R_{pm} + R_{vm}$$
where Rvi is the depth of the deepest valley in the i-th sample length and Rti is the sum of Rvi and Rpi.
These three parameters have some of the same advantages as Rt, Rp, and Rv for finding extremes in
the roughness, but they are not so sensitive to single unusual features.
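Given a profile already split into M sample lengths (heights about the mean line), the mean parameters can be sketched as:

```python
def mean_params(samples):
    """Rpm, Rvm, Rtm over a list of sample-length profiles.
    Each sample is a list of heights about the mean line."""
    m = len(samples)
    rp = [max(s) for s in samples]   # Rpi for each sample length
    rv = [-min(s) for s in samples]  # Rvi for each sample length
    rpm = sum(rp) / m
    rvm = sum(rv) / m
    return rpm, rvm, rpm + rvm       # Rtm = Rpm + Rvm

print(mean_params([[2.0, -1.0], [4.0, -3.0]]))  # (3.0, 2.0, 5.0)
```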
It serves a purpose similar to Rt, but it finds extremes from peak to valley that are nearer to each
other horizontally.
Rz (DIN), i.e. Rz according to the German DIN standard, is just another name for Rtm in the Ameri-
can nomenclature (over five cutoffs).
Rz [ DIN ] = Rtm
Rz(ISO) It is the sum of the height of the highest peak plus the lowest valley depth within a sam-
pling length.
Fig. 10.19 Rz (ISO) (the sum of the height of the highest peak plus the lowest valley
depth within a sampling length)
R3zi Third Highest Peak to Third Lowest Valley Height The parameter R 3zi is the
height from the third highest peak to the third lowest valley within one sample length.
R 3z Average third highest peak to third lowest valley height
R 3z is the average of the R 3zi values:
$$R_{3z} = \frac{1}{M}\sum_{i=1}^{M} R_{3zi}$$
Fig. 10.20 R3zi (third highest peak to third lowest valley height)
R3z has much the same purpose as Rz, except that less extreme peaks and valleys are measured.
R3zmax Maximum third highest peak to third lowest valley height
R3zmax is the maximum of the individual R3zi values:
$$R_{3z\max} = \max_{1\le i\le M} R_{3zi}$$
R3z and R3zmax are not defined in national standards, but they have found their way into many high-
end instruments. They originated in Germany as a Daimler–Benz standard.
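One way to sketch R3zi digitally is to pick out local maxima and minima and take the third highest and third lowest; the simple three-point peak test below is illustrative, not a standard-mandated peak definition:

```python
def r3zi(sample):
    """Height from 3rd highest local peak to 3rd lowest local valley
    in one sample length (heights about the mean line)."""
    peaks, valleys = [], []
    for i in range(1, len(sample) - 1):
        if sample[i] > sample[i - 1] and sample[i] > sample[i + 1]:
            peaks.append(sample[i])
        if sample[i] < sample[i - 1] and sample[i] < sample[i + 1]:
            valleys.append(sample[i])
    # Third highest peak minus third lowest valley.
    return sorted(peaks, reverse=True)[2] - sorted(valleys)[2]

def r3z(samples):
    """R3z: average of the R3zi values over M sample lengths."""
    return sum(r3zi(s) for s in samples) / len(samples)

print(r3zi([0, 5, -1, 3, -2, 4, -3, 2, -4, 0]))  # 5
```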
1. Pc—Peak Count Peak count is the number of peaks per unit length over the evaluation length.
For Pc, a peak is defined relative to two threshold lines placed symmetrically about the mean line: the
profile must rise above the upper threshold and then fall below the lower one to be counted (refer
Fig. 10.21).
Fig. 10.21 Pc—Peak count
[Plot: peak count (1/in) of a turned surface (Ra = 20.7) falling as the Pc threshold (µin) is
increased from 0 to 140]
Fig. 10.22 Changes in Pc values
2. HSC—High Spot Count High spot count, HSC, is similar to peak count except that a peak
is defined relative to only one threshold. High spot count is the number of peaks per cm (or inch) that
cross above a certain threshold. A peak must cross above the threshold and then back below it.
High spot count is commonly specified for surfaces that must be painted. A surface which has pro-
trusions above the paint will obviously give an undesirable finish.
3. Sm—Mean Spacing Sm is the mean spacing between peaks, now with a peak defined relative to
the mean line. A peak must cross above the mean line and then back below it.
If the width of each peak is denoted as Si (above) then the mean spacing is the average width of a
peak over the evaluation length:
Fig. 10.23 HSC—High spot count
Fig. 10.24 Sm—Mean spacing
$$S_m = \frac{1}{N}\sum_{n=1}^{N} S_n$$
This parameter is analogous to Sm in that it measures the mean distance between features, but it is
a mean that is weighted by the amplitude of the individual wavelengths, whereas Sm will find the pre-
dominant wavelength.
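Sm can be sketched by locating upward crossings of the mean line: each peak occupies the interval between two successive upward crossings. A minimal sketch:

```python
def mean_spacing(r, dx):
    """Sm: mean spacing between profile peaks, where a peak is one full
    excursion above the mean line and back below it.
    r: sampled heights; dx: spacing between samples."""
    mean = sum(r) / len(r)
    # Indices where the profile crosses the mean line upward.
    ups = [i for i in range(1, len(r)) if r[i - 1] < mean <= r[i]]
    if len(ups) < 2:
        return None  # not enough peaks to define a spacing
    widths = [(b - a) * dx for a, b in zip(ups, ups[1:])]
    return sum(widths) / len(widths)

# Alternating profile: one peak every two samples.
print(mean_spacing([1, -1] * 6, 0.5))  # 1.0
```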
The above formula leaves the result in reciprocal units. Therefore, the value must ordinarily be
converted from [in] to [µin] or from [cm] to [µm].
K-Randomness Factor
1. Δa—Average Slope The average absolute slope of the profile over the evaluation length is

$$\Delta_a = \frac{1}{L}\int_0^L \left|\frac{dr(x)}{dx}\right| dx$$
It is not so straightforward to evaluate this parameter for digital data. Numerical differentiation is a
difficult problem in any application. Some instrument manufacturers have applied advanced formulas
to approximate (dz/dx) digitally, but the simplest approach is to apply a simple difference formula to
points with a specified spacing L/n:
$$\Delta_a = \frac{1}{L}\sum_{n=1}^{N} \lvert r_{n+1} - r_n\rvert$$
If this approach is used, the value of L/n must be specified since it greatly influences the result of the
approximation. Ordinarily, L/n will be quite a bit larger than the raw data spacing from the instrument.
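The simple-difference approach reads directly as code (with the caveat above that the answer depends on the chosen point spacing):

```python
def average_slope(r, dx):
    """Delta_a via simple differences: the mean absolute slope of the
    profile, for samples r with spacing dx (evaluation length = N*dx)."""
    diffs = [abs(b - a) for a, b in zip(r, r[1:])]
    return sum(diffs) / (len(diffs) * dx)

# A ramp rising 1 unit per 0.5-unit step has slope 2 everywhere.
print(average_slope([0.0, 1.0, 2.0, 3.0], 0.5))  # 2.0
```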
2. Δq—RMS Slope The corresponding root-mean-square slope is

$$\Delta_q = \sqrt{\frac{1}{L}\int_0^L \left(\frac{dr(x)}{dx}\right)^{2} dx}$$

and its digital approximation, for points with spacing L/n, is

$$\Delta_q = \sqrt{\frac{1}{L}\sum_{n=1}^{N} \left(r_{n+1} - r_n\right)^{2}}$$
3. Lo—Actual Profile Length One way to describe how a real profile differs from a flat line
is to determine how long the real profile is compared to the horizontal evaluation length. Imagine the
profile as a loose string that can be stretched out to its full length.
$$L_o = \int_0^L \sqrt{1 + \left(\frac{dr(x)}{dx}\right)^{2}}\, dx$$
The answer in a digital evaluation depends on the spacing of the points we choose to approximate dr/dx:
$$L_o = \sum_{n=1}^{N} \sqrt{\left(\frac{L}{N}\right)^{2} + \left(r_{n+1} - r_n\right)^{2}}$$
4. Lr—Profile Length Ratio The profile length ratio, Lr, is the profile length normalized by the
evaluation length:
$$L_r = \frac{L_o}{L}$$
The profile length ratio is a more useful measure of surface shape than Lo since it does not depend
on the measurement length.
The larger the value of Lr, the sharper or crisper the surface profile appears, and the larger is the
true surface area of the surface. In some applications, particularly in coating, where good adhesion is
needed, it may be desirable to have a large value of Lr, i.e., a large contact surface area. For most sur-
faces, Lr is only slightly larger than one and is difficult to determine accurately.
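Both Lo and Lr follow directly from the sampled profile. A sketch:

```python
import math

def profile_length(r, dx):
    """Lo: length of the profile treated as a string stretched out."""
    return sum(math.hypot(dx, b - a) for a, b in zip(r, r[1:]))

def profile_length_ratio(r, dx):
    """Lr = Lo / L, where L is the horizontal evaluation length."""
    return profile_length(r, dx) / (dx * (len(r) - 1))

# A flat profile has Lr = 1; any roughness makes Lr > 1.
print(profile_length_ratio([0.0, 0.0, 0.0], 1.0))  # 1.0
print(profile_length_ratio([0.0, 1.0, 0.0], 1.0))  # ~1.414
```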
Fig. 10.25(b) Bearing ratio curve (comments about shape, plateau, peaks, valleys)
1. Amplitude Distribution Function (ADF) The ADF gives the probability that a point on the
profile at a randomly selected x value lies at a height within a small neighborhood of a particular value z:
$$\Pr\{z < r(x) < z + dz\} = \mathrm{ADF}(z)\,dz$$
2. Bearing Ratio Curve The bearing ratio curve is related to the ADF. It is the corresponding
cumulative probability distribution and has much greater use in evaluation of surface finish. The bear-
ing ratio curve is the integral (from the top down) of the ADF (refer Fig. 10.25 (b)).
Other names for the bearing ratio curve are the bearing area curve (this is becoming obsolete with
the increase in topographical methods), the material ratio curve, or the Abbott–Firestone curve.
(See Figs. 10.27 and 10.28, Plate 10.)
(a) (b)
Fig. 10.26 (a) Pocket Surf (b) Drive unit for shop-floor applications
(Courtesy, Mahr Gmbh Esslingen)
(a)
(b) (c)
Fig. 10.29 (a) Dual-skid pick-up is suited for roughness measurements on plane surfaces and
a cylindrical surface in longitudinal direction, as well as inside bores with a diameter larger than
12 mm (b) Single-skid pick-up with lateral, spherical skid, 0.3-mm radius in tracing direction,
90° 5-µm stylus radius (200 µin), suitable to measure inner radii in circumferential direction
with a diameter larger than 12 mm (c) Drive units for mobile roughness measuring instruments
(Courtesy, Mahr Gmbh Esslingen)
The Pocket Surf (as shown in Fig. 10.26) is a pocket-sized, economically priced, completely portable
instrument which performs traceable surface-roughness measurements on a wide variety of surfaces.
It can be used confidently in production, on the shop floor and in the laboratory (US patent no.
4,776,212).
Features
• Solidly built, with a durable cast aluminum housing, to provide years of accurate, reliable surface
finish gauging
• Can be used to measure any one of four switch-selectable parameters: Ra, Rmax/Ry, Rz
• Selectable traverse length: 1, 3 or 5 cut-offs of 0.8 mm/0.030 in
Technical Data
• Operates in any position—horizontal, vertical, and upside down
• Four switchable probe positions—axial (folded) or at 90°, 180° or 270°
• Even difficult-to-reach surfaces, such as inside and outside diameters, are accessible
• Integrated data output for SPC-processing units that is compatible with the most common data
processing systems
• Easy-to-read LCD readout presents measured roughness value in microinches or micrometres
within half a second after the surface is traversed
• Out-of-range (high or low) and 'battery low' signals are also displayed
[Fig. 10.30(a) Elements of the surface-texture symbol: machining method, direction of lay and
machining allowance; (b) example: cylindrical grinding, Ra 0.2, machining allowance 0.10]
For example, a cylindrically ground surface with 0.10 mm machining allowance having Ra value of
0.2 μm with cut-off length of 3 mm and direction of lay as perpendicular will be represented as in
Fig. 10.30(b).
Roughness value Ra (µm)  0.025  0.05  0.1  0.2  0.4  0.8  1.6  3.2  6.3  12.5  25  50
Roughness grade          N1     N2    N3   N4   N5   N6   N7   N8   N9   N10   N11 N12
Roughness symbol         ∇∇∇∇   ∇∇∇   ∇∇   ∇    −    (each symbol spans a range of grades)
Note:
1. Preferred values for arithmetical mean deviation Ra in μm are selected from
0.025, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.3, 12.5, 25, 50
2. Preferred values for ten-point height of irregularities Rz in μm are selected from
0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.3, 12.5, 25, 50
Illustrative Examples
Example 1 In the measurement of surface roughness, heights of 20 successive peaks and valleys were measured
from a datum as follows:
35 25 40 22 35 18 42 25 35 22
36 18 42 22 32 21 37 18 35 20 microns.
If these measurements were obtained over a length of 20 mm, calculate the CLA and RMS values
of the surface.
Solution:
(i) CLA value = (35 + 25 + 40 + 22 + 35 + 18 + 42 + 25 + 35 + 22 + 36 + 18 + 42 + 22 + 32 +
21 + 37 + 18 + 35 + 20)/20 = 580/20
= 29.0 microns
(ii) RMS value = √[(35² + 25² + 40² + 22² + 35² + 18² + 42² + 25² + 35² + 22² + 36² + 18² + 42²
+ 22² + 32² + 21² + 37² + 18² + 35² + 20²)/20] = √(18232/20) = √911.6
= 30.19 microns
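A quick numerical check of this example (heights measured from the datum, as given in the problem):

```python
import math

heights = [35, 25, 40, 22, 35, 18, 42, 25, 35, 22,
           36, 18, 42, 22, 32, 21, 37, 18, 35, 20]  # microns

# CLA: arithmetic mean of the heights from the datum.
cla = sum(heights) / len(heights)
# RMS: square root of the mean of the squared heights.
rms = math.sqrt(sum(h * h for h in heights) / len(heights))

print(cla)            # 29.0
print(round(rms, 2))  # 30.19
```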
Review Questions
1. Explain reasons for controlling surface texture.
2. It is not possible to produce a perfectly flat surface. Justify the statement.
3. Explain surface texture w.r.t. its roughness, waviness, lay and sampling length.
4. Explain the terms:
a. Primary texture
b. Secondary texture
c. Ra Value
d. CLA Value of surface roughness
e. Skid and stylus
f. Mean line of profile
g. Micro and macro irregularities
5. Explain the procedure to use roughness comparison specimen to assess surface roughness along
with their limitation of applications.
6. Explain the method of finding the CLA index using a magnified graphical record of surface texture.
7. With the help of a neat sketch describe the construction and working of the following instruments;
a. Profilometer
b. Tomlinson surface meter
c. Taylor-Hobson surface roughness instrument
8. Define roundness and state the causes of out-of-roundness.
9. Explain the detailed method of checking roundness by using a roundness measuring machine.
10. ‘The deviation from roundness occurs in the form of waves about the circumference of the part’.
Justify the statement.
11. How can you specify the surface finish on a drawing?
12. Explain: (a) Roughness spacing parameters (b) Root-Mean-Square roughness
13. What is the significance of wavelength of surface variations in measurement of surface texture?
14. Explain, in principle, the function and operation of stylus-type surface-texture measuring
instruments. Also explain their advantages.
15. How is surface texture related to tolerances on the surface dimensions?
16. Discuss the consequences of not specifying the sampling length in surface-roughness measurement.
17. Specify the causes of surface irregularities found in surface texture.
18. Define the term ‘ten-point height irregularities’ and use a profile to illustrate the answer.
19. Discuss any two methods of surface-finish evaluation and state their merits and demerits.
20. Explain symbolic representation with examples of indicating the main characteristics of surface
texture on drawings.
21. Write a short note on grades for specifying the surface texture.
22. In the measurement of surface roughness, heights of 10 successive peaks and valleys were mea-
sured from a datum as follows:
Peaks: 45, 42, 40, 35, 35 µm.
Valleys: 30, 25, 25, 24, 18 µm.
Determine the Rz value of the surface.
11 Metrology of Screw Threads
An essential principle of the actual profiles of both the nut and bolt threads is that they must never
cross or transgress the theoretical profile. So bolt threads will always be equal to, or smaller than, the
dimensions of the basic profile. Nut threads will always be equal to, or greater than, the basic profile.
To ensure this in practice, tolerances and allowances are applied to the basic profile.
Practically, to make a thread, tolerances must be applied to ensure that this essential principle always
applies. Tolerancing of screw threads is complicated by the complex geometric nature of the screw-
thread form. Clearances must be applied to the basic profile of the threads in order that a bolt thread
can be screwed into a nut thread. For the thread to be made practically, there must be tolerances applied
to the main thread elements.
Usually, nut threads have a tolerance applied to the basic profile so that it is theoretically possible
for the nut thread profile to be equal to the theoretical profile. Bolt threads usually have a gap between
the basic and actual thread profiles. This gap is called the allowance with inch-based threads and the
fundamental deviation with metric threads. The tolerance is subsequently applied to the thread. Since
for coated threads, the tolerances apply to threads before coating (unless otherwise stated), the gap is
taken up by the coating thickness. After coating, the actual thread profile must not transgress the basic
profile of the thread.
A full designation for a metric thread includes information not only on the thread diameter and
pitch but also a designation for the thread tolerance class. For example, a thread designated as
M12 × 1 - 5g6g indicates that the thread has a nominal diameter of 12 mm and a pitch of 1 mm.
The 5g indicates the tolerance class for the pitch diameter, and 6g is the tolerance class for the major
diameter.
A fit between the threaded parts is indicated by the nut-thread tolerance designation followed by
the bolt-thread tolerance designation, separated by a slash. For example, M12 × 1 - 6H/5g6g indicates
a tolerance class of 6H for the nut (female) thread, a 5g-tolerance class for the pitch diameter with a
6g-tolerance class for the major diameter.
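Designations of this kind can be unpacked mechanically. A sketch; the function name and regular expression are illustrative, not part of any standard library, and real designations carry more optional fields:

```python
import re

def parse_metric_thread(s):
    """Split a metric thread designation such as 'M12 x 1 - 6H/5g6g'
    into diameter, pitch, nut tolerance class and bolt tolerance class.
    Illustrative only."""
    m = re.match(r"M(?P<dia>[\d.]+)\s*[x×]\s*(?P<pitch>[\d.]+)\s*-\s*"
                 r"(?P<tol>\S+)", s)
    if not m:
        raise ValueError("not a recognized metric designation")
    dia, pitch, tol = float(m["dia"]), float(m["pitch"]), m["tol"]
    if "/" in tol:
        nut, bolt = tol.split("/")  # nut class / bolt class
    else:
        nut, bolt = None, tol       # only the bolt class is given
    return dia, pitch, nut, bolt

print(parse_metric_thread("M12 x 1 - 6H/5g6g"))
# (12.0, 1.0, '6H', '5g6g')
```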
A tolerance class is made up of two parts, a tolerance grade and a tolerance position.
A tolerance grade is specified for the pitch diameter and for the crest diameter (the minor diameter
in the case of a nut thread and the major diameter in the case of a bolt thread). Tolerance grades are
represented by numbers; the lower the number, the smaller the tolerance. Grade 6 is used for a medium
tolerance quality and a normal length of thread engagement. Grades lower than 6 are intended for fine
tolerance quality.
[Fig. 11.2 Tolerance position and grading for ISO threads: upper and lower deviations of nut
and bolt threads relative to the basic size]
[Fig. 11.3 Basic profile of the unified/ISO thread form, where D, d = major diameters of the
internal and external threads; D2, d2 = pitch diameters of the internal and external threads;
D1, d1 = minor diameters of the internal and external threads; P = pitch; H = height of the
fundamental triangle]
For bolt threads there are four tolerance positions: h has a zero fundamental deviation, and e, f and
g have negative fundamental deviations. (A negative fundamental deviation indicates that the size of
the thread element will be smaller than the basic size.)
1. Pitch Diameter (often called the effective diameter) of a parallel thread is the diameter of the
imaginary co-axial cylinder which intersects the surface of the thread in such a manner that the inter-
cept on a generator of the cylinder, between the points where it meets the opposite flanks of a thread
groove, is equal to half the nominal pitch of the thread.
[Fig. 11.4 Screw-thread nomenclature: crest, root, flanks, thread angle, pitch, height (depth)
of thread, major, minor and pitch diameters; (A) external threads and internal threads]
2. Major Diameter (B) of a thread is the diameter of the imaginary co-axial cylinder that just
touches the crest of an external thread or the root of an internal thread.
3. Minor Diameter is the diameter of the cylinder that just touches the root of an internal
thread.
4. Crest (D) of a thread is the prominent part of a thread, whether internal or external.
5. Root (E ) is the bottom of the groove between the two flanking surfaces of the thread, whether
internal or external.
6. Flanks of a thread are the straight sides that connect the crest and the root.
8. Angle of a Thread is the angle between the flanks, measured in an axial plane section.
9. Pitch of a Thread (F ) is the distance measured parallel to its axis between corresponding
points on adjacent surfaces in the same axial plane. There are three types of pitch errors:
a. Progressive Error of Pitch is a gradual, but not necessarily uniform, deviation of the pitch
of successive threads from the nominal pitch.
b. Periodic Error of Pitch is an error which is repeated regularly along the screw.
c. Drunkenness is a periodic variation of pitch in which the cycle is of one pitch length.
[Fig. 11.5 Illustration of pitch of a thread: one-pitch and two-pitch advance]
Effect of Pitch Errors Errors in pitch—namely, incorrect relative position of the flanks—act
obstructively, due to which a perfect external screw (which has pitch error) will not screw into a perfect
internal screw of the same nominal size. Pitch errors virtually increase the pitch diameter of an external
screw and virtually reduce the pitch diameter of an internal screw.
10. External Thread (A) is a thread on the outside of a member, e.g., the thread of a bolt.
11. Internal Thread is a thread on the inside of a member, e.g., the thread inside a nut.
12. Addendum of an external thread is the radial distance between the pitch and major cylinders
or cones, respectively.
13. Dedendum of an external thread is the radial distance between the pitch and minor cylinders
or cones, respectively.
14. Lead is the axial distance moved by a point following the thread helix in one complete turn
about the thread axis. For a multistart thread, lead = n × pitch, where n = number of starts, i.e., where
there are n helices started at regular intervals round the same cylinder.
15. Rake Angle (λ) is the acute angle formed by a thread helix on the pitch cylinder and a plane
perpendicular to the cylinder axis.
$$\tan\lambda = \frac{np}{\pi E} \ \text{(multistart thread)}, \qquad \tan\lambda = \frac{p}{\pi E} \ \text{(single-start thread)}$$

where p is the pitch and E is the pitch diameter.
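For ordinary fastener threads the resulting angle is small. A sketch using the relation tan λ = np/(πE); the pitch-diameter value in the example (about 11.35 mm for an M12 × 1 thread) is an assumed illustrative figure:

```python
import math

def rake_angle_deg(pitch, pitch_dia, starts=1):
    """Rake (helix) angle in degrees: tan(lambda) = n*p / (pi * E)."""
    return math.degrees(math.atan(starts * pitch / (math.pi * pitch_dia)))

# Single-start M12 x 1, pitch diameter ~11.35 mm (assumed value):
print(round(rake_angle_deg(1.0, 11.35), 2))  # 1.61
```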
16. Virtual Effective Diameter of a parallel thread is the simple pitch diameter of an imagi-
nary thread of perfect pitch and flank angles, cleared at the crests and roots but having the full depth
of straight flanks, which would just assemble with the actual thread over the prescribed length of
engagement.
17. Pitch Cylinder has a diameter, and a location of its axis, such that its surface passes through
a straight thread in a manner that makes the widths of the thread ridge and the thread groove equal,
and is located equidistantly between the sharp major and minor cylinders of a given thread form.
Most threads have a triangular (V-shaped) form. Square-shaped and trapezoid-shaped threads, on
the other hand, are used in motion-transmitting machinery which needs high accuracy, such as a lathe.
With respect to thread standards, there are metric threads (M), parallel threads for piping (PF),
taper threads for piping (PT), and unified threads (UNC, UNF). In this chapter, the metrology of
threads is related to metric threads because they are the most widely used in many countries around
the world.
The most common screw thread form is the one with a symmetrical V-profile. The included angle
is 60 degrees. This form is prevalent in the Unified Screw Thread (UN, UNC, UNF, UNRC, UNRF)
form as well as the ISO/Metric thread. The advantage of symmetrical threads is that they are easier
to manufacture and inspect compared to non-symmetrical threads. These are typically used in general-
purpose fasteners.
Other symmetrical threads are the Whitworth and the Acme. The Acme form gives a stronger thread, which allows its use in translational applications such as moving heavy machine loads on machine tools. Previously, square threads with parallel sides were used for the same applications. The square thread form, while strong, is harder to manufacture, and unlike an Acme thread it cannot be compensated for wear.
2. Functional Parameters
a. Effective Diameter: Screw Thread Micrometer, Two- or Three-Wire Methods, Floating Carriage Micrometer
b. Pitch: Screw Pitch Gauge, Pitch Error Testing Machine
Measurement of screw threads can be done by inspection and checking of various components
of threads. The nut and other elements during mass production are checked by plug gauges or
ring gauges.
Fig. 11.10 Schematic diagram of a bench micrometer (support, fiducial indicator, measuring anvils, measuring head, micrometer spindle, indicator anvil)
cylindrical piece and the wedge pieces. The procedure is then repeated with the threaded component.
to the axis of the centres, carrying a micrometer and a highly sensitive fiducial indicator. This carriage permits measurements along the centreline and at right angles to the work. It is mounted on a second carriage, which is in turn mounted on a fixed base; the second carriage moves the first, carrying the micrometer and fiducial indicator, to position them along the length of the workpiece. The indicator moves together with the bench micrometer.
Fig. 11.12(a) Floating carriage micrometer
The setting cylinder is kept between the fiducial indicator and the micrometer anvil, and the reading is recorded as R1. Without disturbing the fiducial indicator, the cylinder is replaced by the screw in such a way that the anvils touch the roots of the threads at the minor diameter, and the corresponding reading is noted as R2.
Then, minor diameter = (R2 − R1) + diameter of the setting cylinder.
common axis of the micrometer and anvil is at the same height as the line of centres. The shank of one
of the conical pegs (C ) is made eccentric; so that by turning it in its hole, it is possible to adjust the axis of
the micrometer to be truly square with the line of centres. After making this setting, the position of the
peg can be maintained by a clamping screw. Taken as a whole, the machine is a development of the bench
micrometer already described, having a free motion at right angles to the line of centres, and capable also
of being traversed along the bed of the machine so as to measure at any desired position along a screw
gauge mounted between the centres. The cylinders used with the machine during the measurements of the
pitch diameter are suspended by threads from light rods (E ) fixed to the micrometer carriage. In order to
eliminate entirely the personal element as regards the ‘feel’ of the micrometer, and also to obtain a control
of the measuring force, the adjustable anvil is fitted with a fiducial indicator (F ) which operates under a
force of about 250 grams wt (8 oz wt), or less if desired. The machine described, which was designed at
NPL, is obtainable commercially in two or three sizes to accommodate gauges up to 250 mm (10 in) or
so in diameter. Reference should now be made to Fig. 11.13 (a,b) (Plate 11), where the central diagram
shows cylinders seated in the groove of the thread and the dimension T beneath them. The objective of
the measurement in the floating micrometer machine is to determine T. The pitch diameter E is then
obtained from the measured value of T by the formula
E=T+P−c+e
where P is a constant depending on the pitch and angle of the screw thread, and the mean diameter of
the small cylinders used; c is a correction depending mainly upon the rake angle of the screw thread; e is
a correction for the elastic compression of the cylinders. In practice, the thread measuring cylinders are
supplied with their measured diameter and with the P values appropriate to each combination of pitch
and the common thread form to which the cylinders may be applied. The values of c and e are in general
relatively small for standard screw threads and the low measuring force used; however, they must be
taken into account as they are significant compared with tolerances for screw gauges.
[Figure: measurement over wires; wire of diameter d seated in the thread groove, with diameter over wires Dm, diameter under wires T, effective diameter E, thread half-angle θ/2, and construction points A, B, C, D, O, Q]
Therefore, P = 2AQ
= (p/2) cot (θ/2) − d (cosec (θ/2) − 1)
For metric threads, θ = 60°:
P = 0.866p − d
For measuring T using a floating carriage micrometer, place the master cylinder between the anvils with the wires and take the reading R. Now replace the master cylinder with the threaded screw and take the reading S. Then,
T = (S − R) + diameter of master cylinder.
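As a numerical sketch of this two-wire procedure (function names are my own; the sign convention, measured value = (reading over screw − reading over master) + master diameter, matches the arithmetic of the worked examples later in the chapter):

```python
import math

def best_wire(p, theta_deg=60.0):
    # db = (p/2) sec(theta/2): the wire touches the flanks at the pitch line
    return (p / 2) / math.cos(math.radians(theta_deg / 2))

def pitch_value(p, d, theta_deg=60.0):
    # P = (p/2) cot(theta/2) - d (cosec(theta/2) - 1); reduces to 0.866p - d at 60 deg
    half = math.radians(theta_deg / 2)
    return (p / 2) / math.tan(half) - d * (1.0 / math.sin(half) - 1.0)

def effective_diameter(R_master, S_screw, D_master, p, d):
    T = (S_screw - R_master) + D_master   # diameter under the wires
    return T + pitch_value(p, d)

# M20 x 2.5 plug gauge with the best-size wire (~1.4434 mm)
d = best_wire(2.5)
print(round(effective_diameter(14.6420, 14.2616, 18.001, 2.5, d), 4))  # ≈ 18.3423 mm
```

Rake and elastic-compression corrections (the c and e terms above) are neglected here, as in the chapter's examples.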
[Figure: best-size wire of radius r seated in the thread groove, showing pitch p, thread angle θ, the pitch line and the effective diameter]
Table 11.2 Best-size cylinder diameter specifications for different forms of threads
Table 11.3 “Best-size” diameters of cylinder for ISO metric threads
But due to the inclination of the positioning of the two wires, the instrument may get slightly tilted,
which gives incorrect results. The three-wire method overcomes this problem. In this case, the instru-
ment maintains its alignment itself and gives a true reading.
Refer Fig. 11.17 (b).
AD = AB cosec x/2 = r cosec x/2
H = DE cot x/2 = (p/2) cot x/2
CD = H/2 = (p/4) cot x/2
h = AD − CD = r cosec x/2 − (p/4) cot x/2
Distance over wires, M = E + 2h + 2r
= E + 2(r cosec x/2 − (p/4) cot x/2) + 2r
= E + d(1 + cosec x/2) − (p/2) cot x/2
Fig. 11.17 (a), (b) Three-wire method of measuring effective diameter
(E = effective diameter, M = distance over wires, d = diameter of wires, r = radius of the wires, x = angle of thread, h = height of the centre of the wire above the effective-diameter line)
(i) In case of Whitworth threads, x = 55°, depth of thread = 0.64p, so that E = D − 0.64p,
and cosec x/2 = 2.1657, cot x/2 = 1.921
M = E + d(1 + cosec x/2) − (p/2) cot x/2
= D + 3.1657d − 1.6005p
where D = outside diameter
(ii) In case of metric threads, depth of thread = 0.6495p, so that E = D − 0.6495p,
and x = 60°, cosec x/2 = 2, cot x/2 = 1.732
M = E + d(1 + cosec x/2) − (p/2) cot x/2
= D + 3d − 1.5155p
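For metric threads the distance over wires thus reduces to M = D + 3d − 1.5155p; a quick numerical check (the helper name and the M16 × 2 values are illustrative):

```python
import math

def over_wires_metric(D, p, d):
    # M = E + d(1 + cosec 30) - (p/2) cot 30, with E = D - 0.6495p
    E = D - 0.6495 * p
    return E + d * (1 + 2.0) - (p / 2) * math.sqrt(3.0)

# M16 x 2 with best-size wire d = (p/2) sec 30 = 1.1547 mm
print(round(over_wires_metric(16.0, 2.0, 1.1547), 3))  # ≈ 16.433 mm
```

The simplified form D + 3d − 1.5155p gives the same value to within rounding.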
316 Metrology and Measurement
Measurement of flank angle is the most important measurement of thread form. The flank angle is the angle between the straight portion of a thread flank and a line normal to the thread axis. The thread image is projected optically and then measured with a protractor arrangement.
The shadow projector shown in Fig. 11.18 is an arrangement for measuring flank angle. This devel-
opment can be used to advantage on plug screw gauges mounted in a projector on which the opposite
ends of a diameter can be viewed in turn by an accurately straight transverse movement of the plug
across the field of the lens.
Thus, the first measurement of each flank angle is taken from the combined readings of the circular
scale and of the tangent screws. The screw plug is then moved across the field of the lens by means of
the transverse adjustment, until the thread form of the other side appears on the white background of
the protractor, the rake of the beam of light being reset to allow for the reversed direction of the helix.
Without moving the alignment of the protractor on the table of the machine, the pivoted arm (B) is
swung across to measure the angle of the same flank as before. The mean of the two readings gives the
inclination of the flank with respect to the normal to the axis of the screw. The method is described as
the throwover method. The accuracy attainable in flank-angle measurement depends on the alignment of
Fig. 11.18 Shadow protractor for measuring flank angles on horizontal screens (straight edge on projector screen; pivoted arm B; 1 division = 1 minute)
the axis of the screw with the protractor datum, the length of the flank available for setting the protractor
arm, and the fineness of reading offered by the design of the protractor. It is also affected by the crisp-
ness of the image. The throwover method eliminates small errors of alignment of the protractor with a
plug screw axis but is available only for gauges with a diameter within the traverse of the projector. It should be remembered that equal and correct flank angles of the cutting tool, and the correct relation of the tool to the axis of the cylindrical blank, must be sought in manufacture. Actual measurement is done by a protractor mounted on the screen. The setting lines are adjusted exactly to the image of the profile between the flanks of a single thread, and the difference in the two readings gives the measure of the angle. Alternatively, this measurement can also be done using a toolmaker’s microscope.
Effect of Flank Error Figure 11.19 illustrates the effect caused by errors in flank angles. The cor-
responding flanks of the two screws are not parallel and cannot make full flank-to-flank contact. Instead,
the flanks of the lower outline offer contact only at their extremities: although the plug has a smaller simple
pitch diameter than the ring, it behaves as if it is increased. The simple pitch diameter of a plug screw is
virtually increased by the equivalent of the sum, irrespective of sign, of the errors of the flank angles.
If ∂a1 and ∂a2 represent the errors in the two flank angles of a screw thread, the virtual increase or decrease of the pitch diameter of an external or of an internal thread is given, for 60° metric threads, by the approximate expression
virtual change in pitch diameter ≈ 0.0115 p (∂a1 + ∂a2)
where ∂a1 and ∂a2 are expressed in degrees and the virtual change has the same unit as the value used for p.
Fig. 11.19 A screw plug having flank-angle errors δa1 and δa2 shown against a perfect screw ring: the flank errors virtually increase the effective diameter above the simple pitch diameter
Ring gauges have internal threads and are taken as a standard sample for measuring the parameters of
internal threads.
Major diameter = √(x² − p²/4), where p = pitch.
Fig. 11.20 Measurement of major diameter
2. Measurement of Minor Diameter The Screw Gauge Booklet published by NPL describes the process of measurement of minor diameter, which is as follows:
The minor diameter can be sized by fitting a mandrel having a diametric taper of about 1 in 500, i.e., 0.0002 in per in, into the ring. The minor diameter is then taken as the diameter of the mandrel where
it fits the screw thread. Unfortunately, the tapered mandrel gives the minimum size of the minor diam-
eter and does not check ovality. Alternatively, the minor diameter may be sized by a range of cylindrical
plugs, differing in size by known small increments. The minor diameters of screw threads above 20-mm
(0.75 in) diameter may be obtained from the measurement by gauge blocks of the distance between two
precise cylindrical rollers of known size placed diametrically opposite in the screw ring gauge. The minor
diameter is then calculated by adding the diameters of
the rollers to the size of the gauge block combination
which just fits between the rollers. By using precision
rollers, the minor diameters of ring gauges of nominal
diameters up to 100 mm (4 in) may be estimated to an
accuracy of ±0.001 mm (±0.00005 in). This method has
the advantage in that the ovality of the minor cylinder
may be determined by taking measurements around the circumference of the screw.
Fig. 11.21 Measurement of minor diameter
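The roller method above is simple arithmetic; the sketch below just makes it explicit (the numeric values are hypothetical):

```python
def minor_diameter_by_rollers(block_mm, roller1_mm, roller2_mm):
    # Minor diameter = gauge-block combination that just fits between the
    # two rollers, plus the diameters of both rollers
    return block_mm + roller1_mm + roller2_mm

# hypothetical readings on a screw ring gauge
print(round(minor_diameter_by_rollers(16.402, 3.000, 3.000), 3))  # 22.402 mm
```

Repeating the measurement at several angular positions reveals ovality of the minor cylinder, as the text notes.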
The basis is to measure the pitch diameter of a screw ring gauge by comparison with the ‘pitch diameter’
of a precision annular groove in a solid cylindrical plug. The latter acts as a standard pitch diameter. In prac-
tice, a number of annular grooves are finely ground in a cylindrical plug, the grooves corresponding in depth to a range of pitches and having closely the nominal flank angles of various thread forms. Each groove
is standardized for pitch diameter (Es ) with thread measuring cylinders in a floating carriage machine. Dif-
ferent designs of contact device have been used, but in essence they are a double-ended stylus carried in a
bar. The stylus which has a radius form at each end is selected to make contact at or near the pitch line to be
measured; the stylus or the bar is so mounted as to be sensitive to contact pressure on either end of itself.
The total displacement of the standard of pitch diameter ES in measurement is XR + XL. The total displacement of the ring of pitch diameter EG in measurement is YL + YR. Notice that the two displacements located by each of the stylus contacts are in opposite directions, so that
EG = ES + (XR + XL) + (YL + YR)
In the NPL machine, the various displacements are imparted to a carriage upon which the standard and/or ring may be mounted and which can be moved in a straight line parallel to the stylus. A micrometer on each side registers the position of the carriage. Two accurate straight-line motions of the stylus are provided: one normal to the face of the carriage, to move from thread to thread; the other in a plane parallel to the face of the carriage, to locate a diameter. Nowadays, the displacement method is available in coordinate measuring machines (CMMs). The major and minor diameters may also be measured using a suitable sharp-radius stylus and a precision plane-faced gap of known size as a standard.
Fig. 11.22 NPL displacement method
Illustrative Examples
Example 1 Calculate the diameter of best size of a wire for an M 20 x 2.5 screw.
Example 2 An M20 × 2.5 plug screw gauge is checked for effective diameter by a floating carriage microm-
eter with best size wire and the following readings were noted:
(i) Diameter of standard cylinder = 18.001 mm
(ii) Micrometer reading over standard cylinder with two wires of same diameter = 14.6420 mm.
(iii) Micrometer reading over the plug screw gauge with the wires of same diameter = 14.2616 mm.
Calculate the effective diameter of the gauge by neglecting rake and elastic compression errors.
Example 3 For M 16 × 2 mm external threads, calculate the best-size wire diameter and the difference between the size under the wires and the effective diameter.
Solution: Pitch of thread P = 2 mm, θ = 60° for metric thread ..... (given)
Best-size wire diameter, db
db = (p/2) × sec (θ/2) = (2/2) × sec 30° = 1.1547 mm
Pitch value, P
P = 0.866p − d
= [0.866 × 2] − 1.1547 = 0.5773 mm
Effective diameter, E
E = Diameter under wire + Pitch Value
E=T+P
∴ difference between effective diameter and size under wire = P = E − T
∴ E − T = 0.5773 mm.
Example 5 Calculate the effective diameter for an M 24 × 3 plug gauge by using a floating carriage microm-
eter for which readings are taken as follows:
(i) Micrometer reading over standard cylinder with two wires of diameter = 12.9334 mm
(ii) Micrometer reading over the plug screw gauge with two wires = 12.1124 mm
(iii) Diameter of standard cylinder = 22.001 mm. Best wire size was used for the above.
Pitch value, P
P = 0.866p − db
= [0.866 × 3] − 1.7321 = 0.8659 mm
Value of diameter under wire, T
∴ T = (S − R) + D, where R = reading over the standard cylinder, S = reading over the plug gauge, and D = diameter of the standard cylinder
T = (12.1124 − 12.9334) + 22.001
T = 21.1800 mm
Effective diameter, E
∴ E = T + P
E = 21.1800 + 0.8659
E = 22.0459 mm
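Example 5's arithmetic can be replayed as a quick check, using the convention T = (S − R) + D that the numbers above follow:

```python
import math

p, D = 3.0, 22.001                        # pitch, standard cylinder diameter
d = (p / 2) / math.cos(math.radians(30))  # best-size wire, ~1.7321 mm
P = 0.866 * p - d                         # pitch value, ~0.8659 mm
T = (12.1124 - 12.9334) + D               # diameter under wires
E = T + P                                 # effective diameter
print(round(T, 4), round(E, 4))           # 21.18 22.0459
```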
Review Questions
1. Explain the nomenclatures of screw thread with the help of a neat sketch.
2. Discuss the various types of pitch errors along with their causes and effects.
3. Name and describe the various methods of measuring the minor diameter of the thread.
4. With the help of suitable sketches describe the pitch-measuring machine for a thread gauge.
5. What is a best-size wire? Calculate the diameter of the best wire for an M 20 × 2.5 screw.
6. Show that the best wire size for measuring effective diameter of threads is given by
db = (P/2) sec (θ/2)
7. Explain in brief the different corrections to be applied in the measurement of effective diameter
by the method of wire.
8. Sketch and describe a floating carriage micrometer and state its use.
9. Explain the following methods of measuring effective diameter, with the help of derivations:
(a) Two-Wire Method (b) Three-Wire Method
10. For measuring the effective diameter of an M 10 × 1.5 thread gauge with wires of 0.895-mm diameter on a floating carriage micrometer, readings are taken as
(a) Micrometer reading over a standard cylinder of 8-mm diameter with the two wires = 2.4326 mm
(b) Micrometer reading over the gauge with the wires mounted = 3.0708 mm
Calculate the effective diameter.
11. Suggest a suitable method of inspection for the profile of screw thread with sketches.
12. Name the three most important dimensions of a vee-thread which control the fitting of threads.
Show with a sketch all dimensions which are necessary to completely define a thread.
13. Define the pitch of a screw thread. Draw an illustrative line diagram of a pitch measuring machine
and describe its working. Explain what the graphs of cumulative and periodic errors look like.
14. Explain why three wires are used to measure a screw thread with a hand micrometer, while two wires are used on a floating carriage machine for the same purpose.
15. When measuring the major diameter of an external screw thread gauge, a 35.00-mm diameter cylindrical standard was used. The micrometer readings over the standard and over the gauge were 9.3768 mm and 11.8768 mm respectively. Calculate the thread-gauge major diameter.
16. What do you mean by a ‘drunken thread’? How is it produced? Describe the method of testing the drunkenness of a component machined on centres.
12 Metrology of Gears
“Metrology of gears checks smoothness of operation, freedom from vibration and noise ……”
P R Trivedi, GM Manufacturing, Mahindra Engineering and Chemical Products Ltd., MIDC Pimpari
12.1 INTRODUCTION
Gears are mechanical devices that transmit power and motion between axes in a wide variety of commercial
and industrial applications. They are widely used for speed reduction or increase, torque multiplication
Metrology of Gears 325
and resolution, and accuracy enhancement for positioning systems. They find applications in areas like
machine tools, automobiles, material-handling devices, rolling mills, ancillary machinery, and so on. Transmission efficiency of gears can be as high as 99 per cent, owing to the positive-drive characteristics of a gear as a power-transmission device. Such high efficiency depends upon the conformance of the actual gear dimensions to the specified design dimensions. Along with the dimensions, the accuracy of the geometrical forms has a considerable effect on smoothness of operation, freedom from vibration and noise, and working life. Careful inspection of gears is therefore essential. Gear types available include spur or pinion gears, change gears, cluster gears, internal gears, differential end gears, racks, helical gears, herringbone gears, worm wheels, worms, miter or bevel gears, miter or bevel gear sets, hypoid gears, gear
stock and pinion wire, and gear blank. Metric gears are characterized by their millimetre-based module
designation.
Gears are made from a wide variety of materials with many different properties. Factors such as
design life, power-transmission requirements, noise and heat generation, and presence of corrosive ele-
ments contribute to optimization of gear material. Common metal materials of construction for gears
(metric—all styles, gear stock, gear blanks) include aluminum, brass, bronze, cast iron, steel, hardened
steel, and stainless steel. Plastic and other materials that may be used include acetal, Delrin, nylon, and
polycarbonate. Combination gears can have plastic teeth with metal inserts. An important environmen-
tal parameter to consider is the operating temperature.
1. Bevel Gears These gears have teeth cut on a cone instead of a cylinder blank. They are used
in pairs to transmit rotary motion and torque where the bevel gear shafts are at right angles (90 degrees)
to each other. An example of two bevel gears is shown in Fig. 12.3 (a).
2. Crossed Helical Gears These gears also transmit rotary motion and torque through a right
angle. The teeth of a helical gear are inclined at an angle to the axis of rotation of the gear as shown
in Fig. 12.3(b).
3. Worm and Worm Wheel A gear whose single tooth is in the form of a screw thread is called a worm. A worm wheel meshes with the worm. The worm wheel is a helical gear with teeth inclined so that they can engage with the thread of the worm. Like the crossed helical gears, the worm and worm wheel transmit torque and rotary motion through a right angle. An
application of the worm and worm wheel used to open lock gates is shown on the left-hand side
in Fig. 12.3 (c).
Fig. 12.3 (a) Bevel gears (b) Crossed helical gears (c) Worm and worm wheel (d) Single
helical gear (e) Double helical gear (f) Spiral bevel gears (g) Internal face-cut gears
(h) External face-cut gears (i) Rack and pinion (j) Spur gear
4. Helical Gear This gear is used for applications that require very quiet and smooth running, at
high rotational velocities. Parallel helical gears have their teeth inclined at a small angle to their axis of
rotation, as shown in Fig. 12.3 (d). Double helical gears give an efficient transfer of torque and smooth
motion at very high rotational velocities. An example of a double helical gear is shown in Fig. 12.3 (e).
5. Spiral Bevel Gears When it is necessary to transmit quietly and smoothly a large torque
through a right angle at high velocities, spiral bevel gears can be used. An example of spiral bevel gears
is shown in Fig. 12.3(f ).
6. Face-Cut Gears Gear teeth of this type can be cut on the inside of a gear ring, an example of which is shown in Fig. 12.3 (g). Internal gears have better load-carrying capacity than external spur gears. They are safer in use because the teeth are guarded. An example of an external face-cut gear is shown in Fig. 12.3 (h).
7. Rack and Pinion This is used for converting rotary motion to linear motion. A rack-and-pinion
mechanism [shown in Fig. 12.3 (i)] is used to transform rotary motion into linear motion and vice versa.
8. Spur Gears A spur gear is one of the most important ways of transmitting a positive motion
between two shafts lying parallel to each other. These types of gears constitute a large proportion of
the gears in use today. A gear of this class may be likened to a cylindrical blank, which has a series of
equally spaced grooves around its perimeter so that the projections on one blank may mesh in the
grooves of the second. As the design should be such that the teeth in the respective gears are always in
mesh, the revolutions made by each is definite, regular and in the inverse ratio to the numbers of teeth
in the respective gears. This ability of a pair of well-made spur gears to give a smooth, regular, and
positive drive is of the greatest importance in many engineering designs. An example of two spur gears
in mesh is shown in Fig. 12.3 ( j). This chapter confines the scope of discussion for metrology of gears
only with involute gears of straight tooth known as ‘spur’.
Spur gears are also called straight-tooth or involute gears. Spur gears mate or mesh via teeth with very specific geometry. A spur gear’s pitch is a measure of tooth spacing and is expressed in several ways. Circular
pitch (CP) is a direct measurement of the distance from one tooth centre to the adjacent tooth centre.
Diametric pitch (DP) is the ratio of the number of teeth to the pitch diameter (in inches) of a gear; a
higher DP therefore indicates finer tooth spacing. This is the more common pitch designation for gears
with English design units. Module (mod or M ) is used for metric gears and is the ratio of pitch diameter
(in mm) to the number of teeth; a higher module therefore indicates coarser tooth spacing. Pressure
angle is another specification of tooth form and is the angle of tooth drive action, i.e., the angle between
the line of force between meshing teeth and the tangent to the pitch circle at the point of mesh. Gears
must have the same pitch and pressure angle in order to mesh. Other important gear-size specifications
to consider for gears, (Metric—all styles, gear stock, gear blanks) include number of teeth, face width,
and length. Some of the important terminologies of spur gear are defined as follows:
[Figure: spur gear terminology, showing the pitch circle, addendum circle, dedendum or root circle, addendum, dedendum, whole depth, working depth, clearance, circular pitch, circular tooth thickness, centre distance, and outside or blank diameter]
i. The pitch circle (diameter) is the circle (diameter) representing the original cylinder which
transmits motion by friction and its diameter is the pitch circle diameter.
ii. The centre distance of a pair of meshing spur gears is the sum of their pitch circle radii. One
of the advantages of the involute system is that small variations in the centre distance do not
affect the correct working of the gears.
iii. The addendum is the radial height of a tooth above the pitch circle.
iv. The dedendum is the radial depth below the pitch circle.
v. The chordal addendum is the distance from the top of the tooth to the chord subtending the circular-thickness arc.
vi. The chordal thickness is the thickness of a tooth on a straight line or chord on the pitch circle.
vii. The clearance is the difference between the addendum and the dedendum.
viii. The whole depth of a tooth is the sum of the addendum and the dedendum.
ix. The working depth of a tooth is the maximum depth to which the tooth extends into the tooth space of a mating gear. It is the sum of the addenda of the mating gears.
x. The addendum circle is that which contains the tops of the teeth and its diameter is the outside
or blank diameter.
xi. The dedendum or root circle is that which contains the bottoms of the tooth spaces and its
diameter is the root diameter.
xii. Circular tooth thickness is measured on the tooth around the pitch circle, that is, it is the length
of an arc.
xiii. Circular pitch is the distance from a point on one tooth to the corresponding point on the next
tooth, measured around the pitch circle.
xiv. The module is the pitch circle diameter divided by the number of teeth.
xv. The diametrical pitch is the number of teeth per inch of pitch circle diameter. This is a ratio.
xvi. The pitch point is the point of contact between the pitch circles of two gears in a mesh.
xvii. Contact between the teeth of meshing gears takes place along a line tangential to the two base
circles. This line passes through the pitch point and is called the line of action.
xviii. The angle between the line of action and the common tangent to the pitch circles at the pitch
point is the pressure angle.
xix. The tooth face is the surface of a tooth above the pitch circle, parallel to the axis of the gear.
xx. The tooth flank is the tooth surface below the pitch circle, parallel to the axis of the gear. If any
part of the flank extends inside the base circle, it cannot have involute form. It may have another
form, which does not interfere with mating teeth, and is usually a straight radial line.
xxi. Backlash is the amount by which the width of a tooth space exceeds the thickness of the engag-
ing tooth on the pitch circles.
xxii. Clearance is the distance from the tip of a tooth to the circle passing through the bottom of the
tooth space with the gears in mesh and measuring radially. The correct clearance is vital to the
motion of gears.
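For a metric spur gear, the terminology above reduces to simple relations in the module m and tooth number z. The sketch below assumes standard full-depth proportions (addendum = m, dedendum = 1.25m), which the text itself does not fix:

```python
import math

def spur_gear_dims(m, z):
    # Basic dimensions of a standard metric spur gear: module m (mm), z teeth
    d = m * z                          # pitch circle diameter (module = d / z)
    return {
        "pitch_dia": d,
        "circular_pitch": math.pi * m, # tooth-to-tooth arc on the pitch circle
        "addendum": m,                 # assumed standard proportion
        "dedendum": 1.25 * m,          # assumed standard proportion
        "outside_dia": d + 2 * m,      # addendum (blank) circle diameter
    }

g = spur_gear_dims(2.0, 30)
print(g["pitch_dia"], g["outside_dia"], round(g["circular_pitch"], 4))

# Centre distance of two meshing gears = sum of their pitch circle radii (ii)
print((spur_gear_dims(2.0, 30)["pitch_dia"] + spur_gear_dims(2.0, 45)["pitch_dia"]) / 2)
```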
There are two types of gear-tooth forms frequently used in engineering, viz., involute and cycloidal. The cycloidal profile is rarely used in modern applications, being reserved for some special cases of heavy and impact loading. The involute profile, on the other hand, has a wide scope of application for general-purpose precision engineering.
For reasons of economy in production, modern gear teeth are almost exclusively cut to an involute
form. The involute is a curve generated by rolling a straight line around a circle; the end of the line traces an involute. Figure 12.5 shows the construction of an involute. Using this method to draw a gear profile would be very time-consuming, so an approximation called ‘Unwin’s construction’ is used.
Fig. 12.5 Diagram to draw an involute tooth profile (addendum, pitch circle, base circle)
An involute profile has some prominent advantages over other types of profiles. Gears possessing involute
profile have the same pressure angle, variations in the centre distance between two spur gears in mesh
have no effect on the velocity, the face and flank form a continuous curve and all gears having the
same pitch and pressure angle work correctly together. Tooth profiles of spur gears can be measured accurately, by establishing how far the actual tooth form conforms to the theoretical involute profile intended by the designer.
As in the case of limits, fits and tolerances for ordinary engineering components, a system has
been evolved for grades and tolerances of gear toothing. As per the requirements of draft ISO
recommendation number 1328, ‘Accuracy of parallel involute gears’, and relevant Indian Standard Specifications, viz., IS: 4702, 4725, 4058 and 4059, gears have been classified into 10 quality
accuracy grades from 3 (high-precision gears) to 12 (coarse-quality, low-speed gears). BS Standard
436:1970 and DIN Standards 3963:1977 have categorized spur gears into 12 classes. AGMA Stan-
dard 2001-B88 has categorised spur gears into 15 classes. The quality or grade assigned to any
particular gear is the finest number selected for any of the three elements, viz., limits of tolerance
of pitch, tooth profile and tooth alignment. Normally, for a pair of mating gears, the elements of
the components belong to the identical accuracy grades, but these may also have different grades
as per the agreement between the manufacturer and the user. Table 12.1 shows the relationship
between the quality of toothing, circumferential velocity, dynamic forces, and other determining
factors.
Depending upon the quality grades involved, gears have been classified as per Indian Standard Specifications; this is given in Table 12.2.
Besides the above-mentioned IS specifications, IS: 4071 lays down requirements for master gears
which are intended for checking other working gears. As stated earlier, grades 1 and 2 (in some cases 3
also) are assigned for master gears. While selecting the quality grade, the designer should always con-
sider the cost involved.
Before considering the methods/techniques and instruments used for gear-parameter measurement, first
we will have to define the types of error to be inspected and amount of dimensional variation allowed,
which finally depends upon the required quality of gear. As every gear rotates about the axis, almost all the
parameters have to be inspected about the axis of rotation. The actual axis of rotation depends on several factors, and one has to ensure that the axis of inspection is very close to the axis of rotation in the gear assembly. The axis of inspection may then be the axis of the bore of the gear blank or, if the gear is an integral part of a shaft, the axis of the shaft. From the metrological point of
view, the major aspects of any gear which need to be inspected are
i. Gear blank
ii. Teeth of single gear for tooth profile, for tooth alignment, for tooth spacing around gear and
tooth thickness
iii. Combined error of the gear in assembly
1. Gear Blank Run-out Errors Errors normally inspected for spur gear blanks are the fol-
lowing:
i. Tip diameter run-out error is due to excessive interference of tooth tip with the root fillet of the
mating gear.
ii. Radial run-out of the interface surface may be due to wrong setting on the machine during
manufacturing.
iii. Face run-out of the interface face is a run-out of reference surface specified on drawing. It happens
due to wrong angular positioning of a blank with respect to the axis of manufacture.
2. Gear Tooth Profile Errors These errors are indications of deviation of the actual tooth
profile from the ideal tooth profile. The errors of tooth profiles are the following:
i. Tooth profile error— Tooth profile error is the sum of the deviations between the actual tooth profile and the correct involute curve passing through the pitch point, measured perpendicular to the actual profile. The measured band is the actual effective working surface of the gear. However, the tooth modification area is not considered part of the profile error.
ii. Pressure angle error
iii. Basic circle error
The major element to influence the pitch errors is the run-out of gear flank groove.
5. Runout Error of Gear Teeth, Fr This error defines the run-out of the pitch circle. It
is the error in radial position of the teeth. Most often it is measured by indicating the position of
a pin or ball inserted in each tooth space around the gear and taking the largest difference. Alter-
nately, particularly for fine pitch gears, the gear is rolled with a master gear on a variable centre
distance fixture, which records the change in the centre distance as the measure of teeth or pitch
circle run-out. Run-out causes a number of problems, one of which is noise. The source of this
error is most often insufficient accuracy and ruggedness of the cutting arbor and tooling system.
6. Lead Error, fb Lead error is the deviation of the actual advance of the tooth profile from
the ideal value or position. Lead error results in poor tooth contact, particularly concentrating contact
to the tip area. Modifications such as tooth crowning and relieving can alleviate this error to some
degree.
7. Composite Error It is the combined effect of a number of errors acting simultaneously. This
error term includes two or more types of the individual errors, such as profile errors, pitch error, tooth
alignment error, tooth thickness error, etc. This type of error is measured by meshing a gear under
test with the master gear. Therefore, it is the range of difference between the displacement at the pitch
circle of a gear and that of the master gear meshed with it at a fixed distance when moved through one
revolution, when the driving and driven gear flanks are in proper contact. There are two methods of
measuring this error. Depending on the methods of measuring, the errors are described as single-flank
tooth-to-tooth composite error and double-flank tooth-to-tooth composite error. These errors indicate the difference
between the largest and the smallest centre distance observed during one revolution of the test gear.
8. Assembly Errors When gears are in assembly, they are checked for the following:
i. Centre Distance Errors Centre distance is specified along with a (normally unidirectional) tolerance. Therefore, any increase in centre distance will result in increased backlash (clearance). Backlash should be as small as possible, and the assembly designed for minimum centre distance.
ii. Axes Alignment Error For spur gears, the axes of the two mating gears must be parallel to each other—any misalignment will result in an axes alignment error.
After production, gears are checked and inspected to ensure correctness of different parameters and
smoothness of operation. Different methods are followed for measurement and checking of gears,
which are discussed in detail as follows.
Fig. 12.6 Portable base pitch-measuring instrument (dial indicator, body, sensitive tip, fixed measuring tip, adjustable or guide stop, support)

The distance between the fixed and the sensitive tip is set to be equivalent to the base pitch of
the gear with the help of slip gauges. This properly set instrument is applied to the gear so that all three
tips make contact with the tooth profile. The reading on the dial indicator is the pitch error.
b. Involute Measuring Machine In case of a large-sized gear, the involute profile is checked using an involute measuring machine. The gear under test is held on a mandrel. A ground circular disc
c. Tooth Displacement Method When the previously discussed dedicated involute measuring machine is not available, a vertical measuring machine (height gauge) is used for checking the profile of a large-sized gear. Though it is a time-consuming method, it is best suited for calibration of master involutes and is used for very high-precision components. In this method, the gear under test is rotated through small angular increments and the reading on the
vertical measuring machine is noted. These readings are compared with the theoretically calculated
values at about five to ten places along the tooth flank. Trial and error method is used to establish
the required incremental angular positions. Theoretical values may be calculated with respect to the
angular positions, for example, as shown in Fig. 12.10 (b), (c) and (d), where φ = pressure angle and θ = angular position.

Fig. 12.10 Tooth displacement method
Fig. 12.11 Computer-controlled probe scanning method
varies from top to bottom, the instrument must measure tooth thickness at a specified position on the
tooth.
Gear tooth vernier is an instrument shown in Fig. 12.12 and is used for measuring pitch-line tooth thickness. It consists of two perpendicular arms on which the main scales and vernier scales are engraved. One of the scales (horizontal scale) is used to measure the depth (h), i.e., the chordal addendum.
Fig. 12.12 (a) Gear tooth vernier caliper (b) Measurement of gear tooth thickness
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune, India.)
b. Constant Chord Method In the previous method, measurement of pitch-line tooth thickness by measuring W and h depends upon the number of teeth. Therefore, for a set containing many gears with different numbers of teeth, finding W and h for each gear and calculating the thickness with a gear tooth vernier becomes laborious and time-consuming. This limitation is overcome in the constant chord method.
This method uses the property that if an involute tooth is considered symmetrically in close mesh with its basic rack form then, as the gear rotates, every tooth of a given size (i.e., of the same module) makes contact with the rack at the same two points A and B shown in Fig. 12.14, so the distance AB remains constant. Hence it is known as the constant chord. It is a useful dimension, since it has the same nominal value for all gears of a common system, irrespective of the number of teeth.
Fig. 12.14 Constant chord method
Refer to Fig. 12.14. Distance AB is the constant chord, situated at a distance d from the top face. Line AP is tangent to the base circle.
∴ ∠CAP = φ
Constant chord,
AB = M = 2(AC)   (1)
PD = quarter of the circular pitch
= (π · PCD)/(4z)
∴ l(PD) = (π · m)/4   ……as module m = PCD/z
Consider ΔAPD: AP = PD cos φ = [(π · m)/4] cos φ   (2)
Now consider ΔPAC: AC = AP cos φ = [(π · m)/4] cos² φ
Putting this value in Eq. (1), we get the length of the constant chord, l(AB):
l(AB) = M = 2[(π · m)/4] cos² φ = (π/2) · m cos² φ
Now, to calculate the distance d, consider ΔPAC and Eq. (2):
PC = AP sin φ = [(π · m)/4] sin φ cos φ
d = Addendum − PC = m − [(π · m)/4] sin φ cos φ
∴ d = m{1 − (π/4) sin φ cos φ}
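The two results above are easy to evaluate numerically. The sketch below is illustrative only; the function name and the choice of units (mm, degrees) are ours, not from the text:

```python
import math

def constant_chord(m, phi_deg):
    """Constant chord M and its depth d from the tooth tip, for a
    standard full-depth involute tooth (addendum = module m)."""
    phi = math.radians(phi_deg)
    M = (math.pi / 2) * m * math.cos(phi) ** 2                    # M = (pi/2) m cos^2(phi)
    d = m * (1 - (math.pi / 4) * math.sin(phi) * math.cos(phi))   # d = m[1 - (pi/4) sin(phi) cos(phi)]
    return M, d

# Example: module 4 mm, pressure angle 20 degrees
M, d = constant_chord(4.0, 20.0)   # M ~ 5.548 mm, d ~ 2.990 mm
```

Note that neither result depends on the number of teeth, which is exactly why the constant chord is convenient for a whole gear set of one module.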
To determine the distance W, consider the trigonometric relationship illustrated in Fig. 12.15, in which W = AC = A1C1 = A2C2.

Fig. 12.16 Generation of a pair of opposed involutes by a common generator

W = arc AB + arc BC   (1)
To determine arc AB, the tooth thickness at the base circle, the trigonometric relationship is illustrated in Fig. 12.17, where inv φ is the involute function of φ.
Now, arc AB = 2(arc AD)   (2)
= 2(arc AC + arc CD)
arc AC/Rb = inv φ radians
∴ arc AC = Rb(tan φ − φ)
∴ arc AC = [(z · m)/2] cos φ (tan φ − φ)   (3)
θ radians = (arc EF)/Rp = (arc CD)/Rb
But arc EF = ¼ [circular pitch] = ¼ [π · m]
θ = ¼ [(π · m)/Rp] radians
∴ θ = ¼ [π · m] · [2/(z · m)] = π/(2z) radians
∴ arc CD = Rb · θ = {[(z · m)/2] cos φ} · [π/(2z)]   (4)

Fig. 12.17 Tooth thickness at base circle

From the figure, arc AB = 2(arc AC + arc CD); substituting the values from (3) and (4), we get
arc AB = 2{[(z · m)/2] cos φ (tan φ − φ) + {[(z · m)/2] cos φ} · [π/(2z)]}
∴ arc AB = z · m · cos φ [(tan φ − φ) + π/(2z)]   (5)
Now consider Eq. (1), where arc AB (the tooth thickness at the base circle in Fig. 12.15) equals arc AB in Fig. 12.17. Considering Fig. 12.15 again for the next part of the derivation,
W = arc AB + arc BC
Substituting the values of arc AB and arc BC (= S base pitches = S · π · m · cos φ) in the above equation, we get
W = z · m cos φ [(tan φ − φ) + π/(2z)] + S(π · m · cos φ)
∴ theoretical base tangent length W = z · m cos φ [(tan φ − φ) + π/(2z) + πS/z]
where,
z = number of teeth, m = module, φ = pressure angle, and S = number of tooth spaces contained within the span W.
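As a quick numerical check on the final formula, W can be computed directly. A minimal sketch (function name and units are our choice); it takes the pressure angle in degrees and S tooth spaces, i.e., a span of S + 1 teeth:

```python
import math

def base_tangent_length(z, m, phi_deg, S):
    """Theoretical base tangent length W spanning S tooth spaces
    (S + 1 teeth): W = z*m*cos(phi)*[(tan(phi) - phi) + pi/(2z) + pi*S/z]."""
    phi = math.radians(phi_deg)
    inv_phi = math.tan(phi) - phi          # involute function of phi
    return z * m * math.cos(phi) * (inv_phi + math.pi / (2 * z) + math.pi * S / z)

# Example: z = 30, m = 1 mm, phi = 20 deg, span of 4 teeth (S = 3)
W = base_tangent_length(30, 1.0, 20.0, 3)   # W ~ 10.753 mm
```

Rearranged, the expression equals m cos φ [z inv φ + π(S + 0.5)], the span-measurement form commonly found in gear handbooks.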
Instruments by which the base tangent length measurement can be made are the David Brown tangent comparator, and vernier calipers and micrometers having suitable fixtures on the anvils, as shown in Fig. 12.18.

Fig. 12.18 Gear tooth micrometer for measuring base-tangent length
Fig. 12.19 Indicating snap gauge with special attachment for measuring base tangent length
(Mahr GMBH Esslingen)
double-flank gear-roll testers. These instruments help us quickly determine the existing composite-process errors on gears. Double-flank gear-roll testers are based on well-proven mechanical inspection procedures for external and internal spur gears, worm gears, and bevel gears. Accept/reject results may be evaluated to ISO, DIN, JIS, AGMA, and/or user-specified standards for traditionally cut metal, plastic injection-moulded and powdered-metal gears. The exploitation of admissible tolerances helps reduce the production time.

Fig. 12.21 (Fi″, fi″, Fr″)

To understand the use of a gear-rolling tester, let us define some terms related to the spur gear profile (refer Fig. 12.21).
Total Radial Composite Deviation, Fi″ (TCV) Fi″ is the difference between the maximum and minimum values of the working centre distance, a″, which occur during a radial (double-flank) composite test, when the product gear, with its right and left flanks simultaneously in tight mesh contact with those of a master gear, is rotated through one complete revolution.
Tooth-to-Tooth Radial Composite Deviation, fi″ (TTCV) fi″ is the value of the radial composite deviation corresponding to one pitch, 360°/z, during one complete cycle of engagement of all the product gear teeth.
Radial Runout, Fr″ (RRO) Fr″ — the value of radial run-out of the gear — is the difference between the maximum and the minimum radial distance from the gear axis, as observed by removing the short-term (undulation) pitch deviations and analyzing the long-term sinusoidal waveform.
Single Flank Testing (Single Contact Testing) In this test, the gear is mated with a
master gear on a fixed centre distance and set in such a way that only one tooth side makes contact.
The gears are rotated through this single flank contact action, and the angular transmission error of the
driven gear is measured. This is a tedious testing method and is seldom used except for inspection of
the very highest precision gears.
Gear roll testers come along with a frictionless measuring carriage, which rides on high-precision
roller bearings and guarantees high measuring accuracy and repeatability of the results. The setting
carriage is opposed to the measuring carriage so that tests can be performed with two production gears
or one production gear meshed with a master gear. The measuring carriage transmits the centre dis-
tance deviations to a pick-up or simply a dial indicator.
Double Flank Gear Roll Testing (Double Contact Testing) Two gears are rotated in tight mesh without play against each other. Under the influence of a pressure applied in the direction of the radial centre distance, at least one left and one right gear flank are meshed (double-flank meshing). This causes variations in the radial centre distance. As two tooth flanks are always in mesh, the measurement result represents the sum of the variations of both tooth flanks. For quality assessment, the measuring results are defined as total radial composite deviation Fi″, tooth-to-tooth radial composite deviation fi″, and radial run-out Fr″. Additionally, this method allows users to compare nominal vs actual radial centre distance with upper and lower tolerances and to make Go/No-go decisions.
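The reduction of a recorded centre-distance trace to Fi″ and fi″ can be sketched as below. This is a simplified illustration (real instruments also separate out the run-out component, and the function name and sampling assumptions are ours):

```python
def double_flank_deviations(trace, z):
    """Reduce a double-flank roll-test trace to composite deviations.
    trace: working centre-distance samples, uniformly spaced over one
    full revolution of the product gear; z: number of teeth.
    Returns (Fi, fi): total and tooth-to-tooth radial composite deviation."""
    Fi = max(trace) - min(trace)           # Fi'' over the whole revolution
    per_pitch = len(trace) // z            # samples per angular pitch (360/z)
    fi = 0.0
    for t in range(z):                     # worst deviation within any one pitch
        window = trace[t * per_pitch:(t + 1) * per_pitch]
        fi = max(fi, max(window) - min(window))
    return Fi, fi
```

On a trace dominated by eccentricity, Fi″ is large while fi″ stays small, which is why the two quantities are reported separately.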
Design Features of Gear Rolling Tester Gear rolling testers employ a virtually frictionless, backlash-free measuring carriage, which rides on high-precision roller bearings or parallelogram leaf springs. This exceptionally sound mechanical design is coupled with a solid and stable machine base for unrivalled accuracy and repeatability. Gears up to a maximum of about 300-mm diameter are generally tested; machines for 150 mm or smaller gears are also available. The accuracy is of the order of ±0.0001 mm. Measurement data may be read on a Milligraph high-speed recorder or an electronic evaluation instrument in connection with an inductive probe. For simple gear roll tests, mechanical dial comparators or dial indicators may be used. Modern evaluation possibilities as well as PC hardware and software complete these testers, so that they have become important means for quick and easy quality control (refer Fig. 12.25).
1. Stationary Centres and Arbors Because the centres or arbors remain stationary during in-
spection, the system prevents concentricity variations from influencing the measurement results. A spe-
cial drive mechanism within the measuring carriage rotates the gear being checked around the mounting
element.
2. Solid, rigid design
5. Expansion Possibility Additional modular components are available so that existing machines may be expanded and adapted to additional measurement tasks. The machine is also equipped with a height-adjustable quill or driving block.
6. Directly Variable Measuring Force It enables quick and easy measurement of gears of varying size and quality. For internal gears, the measuring-force direction may be reversed.
7. Quick-change Feature of the Measuring Carriage In case of double-flank gear-roll test-
ing, rapid disengagement of mating gears lets you quickly and easily change gears to be measured with-
out resetting the centre distance of the axes.
8. It is particularly suitable for shop-floor measurements next to the gear-cutting machines in order
to perform a first inspection of the manufactured gears.
Moulded gears do not shrink in any simple fashion, such as a photographic reduction. There are at least four distinct shrinkage rates for any gear. Even simple features such as outside and root diameters must be carefully inspected; a simple caliper check will often miss important features. These diameters must be inspected for total form error as well as concentricity to the principal bore or datum. For precision inspection, probe the tip and root of each tooth and construct a best-fit diameter with respect to the gear datum [as shown in Fig. 12.26(b)]. Inspecting the gear involute profiles requires just as much attention to detail. Each tooth should be inspected, since the moulding process can result in errors anywhere on the gear. The actual form errors of the teeth should be measured directly so that these errors can be eliminated in the moulding process or compensated for in the mould cavity.
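The best-fit diameter step can be sketched with a least-squares (Kåsa) circle fit through the probed points. This is an illustrative stand-in, not the algorithm any particular machine uses:

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit through probed (x, y) contacts,
    e.g. one point per tooth tip. Returns (xc, yc, R)."""
    n = len(points)
    xm = sum(x for x, _ in points) / n
    ym = sum(y for _, y in points) / n
    u = [x - xm for x, _ in points]        # centred coordinates
    v = [y - ym for _, y in points]
    Suu = sum(a * a for a in u); Svv = sum(b * b for b in v)
    Suv = sum(a * b for a, b in zip(u, v))
    Suuu = sum(a ** 3 for a in u); Svvv = sum(b ** 3 for b in v)
    Suvv = sum(a * b * b for a, b in zip(u, v))
    Svuu = sum(b * a * a for a, b in zip(u, v))
    # 2x2 linear system for the centre offset (uc, vc)
    rhs1 = (Suuu + Suvv) / 2
    rhs2 = (Svvv + Svuu) / 2
    det = Suu * Svv - Suv * Suv
    uc = (rhs1 * Svv - rhs2 * Suv) / det
    vc = (rhs2 * Suu - rhs1 * Suv) / det
    R = math.sqrt(uc * uc + vc * vc + (Suu + Svv) / n)
    return xm + uc, ym + vc, R
```

Fitting separate circles through the tip points and the root points, both referred to the gear datum, gives the best-fit outside and root diameters and their eccentricity.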
For rapid and precise measurement of the dimension over balls, roundness and conicity of internal gears in any position and at any depth, a dial bore gauge for inside serrations can be used, as shown in Fig. 12.27. Measuring a gear over rollers placed in opposite tooth spaces is a convenient method of checking tooth thickness and obtaining some indication of the accuracy of the involute profile.
The gauge consists of a few modular units for quick conversion of the gauge to another gear size
within the large total measuring range. Two or three different sizes of rollers can be used so that varia-
tions at several places on the tooth flanks can be detected.
12.10 RECENT DEVELOPMENT IN GEAR METROLOGY

The improved manufacturing capability of gear-production equipment demands higher accuracy measurement equipment. The uncertainty of calibration data must, as a consequence, be reduced to realize the full benefit of the investment that industry is making in new production and measuring equipment. Indeed, a facility made available at the commercial level by recent developments in the field of gear metrology would encourage manufacturers to invest in inspection and promote the use of best metrology practice in industry to improve gear accuracy.

Fig. 12.26 Inspection of shrinkage and plastic gears: (a) outside diameter, root diameter, base circle diameter and base tooth thickness
Fig. 12.26 (Continued) (b), (c) Sample inspection report for a moulded gear (outer and root diameters, deviations and form errors, in inches)
With the present development of Computer Numerical Control (CNC), many inspection machines for lead/involute-profile checking and pitch measurement have been simplified. Brief descriptions of some commercially available machines follow.
The CNC-controlled, four-axis gear inspection centre shown in Fig. 12.28 incorporates the most advanced coordinate measurement technology in the world and can automatically supply extremely accurate verification of gear tooth topography. Table 12.3 lists the important specifications.
Review Questions
‘In case of irregular-shaped parts, trigonometry is used to perform miscellaneous metrology by dividing the shape into many profiles and contours…’
INTRODUCTION AND NEED OF MISCELLANEOUS MEASUREMENTS

Irregularly shaped parts do not have a defined single-phase geometry. Instead, their geometry is divided into many profiles and contours. Sheet-metal parts such as car bodies, buckets, cups, utensils, mixers and grinders, trucks and many other parts have profiles for which a general measurement with existing instruments is very difficult. For such measurements, instead of using only measuring tools, some parts can be inspected based upon simple trigonometric calculations to get the required results. These methods can be typically applied to problems faced during actual measurement.
Measurement of taper on one side can be done with the help of two rollers of different diameters.
In Fig. 13.1, X and Y are the centres of two rollers. Line PQ is drawn perpendicular to the line joining
the two centres of rollers, XY. XA is a horizontal line while XB is parallel to the tapered surface of the
piece and inclined at an angle β to the horizontal line. The angle XAY = β/2.
Fig. 13.2 Measurement of internal taper: (a) Larger roller outside the groove, (b) Both rollers inside
Then ∠A1A2B = X/2, where X is the angle of the tapered hole.
In case of both the balls lying inside the groove, as shown in Fig. 13.2 (b), the taper angle X can be found from

sin (X/2) = (R1 − R2)/[(H1 + R2) − (H2 + R1)]
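The relation can be evaluated directly. A sketch (the argument order and the degrees-out convention are our choice, with R1 > R2):

```python
import math

def internal_taper_angle(R1, R2, H1, H2):
    """Included angle X (degrees) of a tapered hole, from two balls of
    radii R1 > R2 and the corresponding height readings H1, H2, using
    sin(X/2) = (R1 - R2) / [(H1 + R2) - (H2 + R1)]."""
    s = (R1 - R2) / ((H1 + R2) - (H2 + R1))
    return 2.0 * math.degrees(math.asin(s))
```

Because only differences of radii and heights enter, the result is insensitive to the zero setting of the depth micrometer, which is the practical appeal of the two-ball method.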
In a mating operation, a dovetail provides a good mating assembly due to its specialized geometry. The dovetail has sloping sides which act as a guide and prevent lifting of the female mating part during the mating operation. The angle X which the sloping face makes with an imaginary vertical centre plane is the point of consideration. Measuring the angle requires two pins of equal size, a slip-gauge set and micrometers. The two pins are placed in such a way that they touch the sides of the dovetail, and the distance L is measured across these pins with a micrometer, as shown in Fig. 13.3. Then the pins are raised on two sets of equal slip-gauge blocks in such a way that the pins do not extend above the top surface of the dovetail. The distance M is measured across the pins with the micrometer. If the height of the slip gauges is H, then

tan X = AC/BC = [(M − L)/2]/H = (M − L)/2H
∴ X = tan⁻¹[(M − L)/2H]
Fig. 13.3 Measurement of dovetail angle using pins and slip gauges
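The computation itself is a one-liner; a sketch (function name ours, angle returned in degrees):

```python
import math

def dovetail_angle(L, M, H):
    """Dovetail slope angle X (degrees) from the measurement L over the
    pins at the bottom and M with the pins raised on slip gauges of height H."""
    return math.degrees(math.atan((M - L) / (2.0 * H)))
```

For example, if raising the pins on 5-mm slip gauges changes the reading from L = 50 mm to M = 60 mm, the slope is 45°.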
The concave or convex surface can be inspected by using a radius gauge or by using specially designed templates. However, radius gauges come in a standard series of values such as 3R, 4R, 5R, etc. Hence a radius of odd dimensions cannot be checked using radius gauges. Sheet-metal worked parts have a variety of radius profiles and need to be checked at regular intervals.
Fig. 13.4 Measurement of concave radius: (a) Set-up, (b) Schematic (depth D measured over a chord of length L)

R = (1/2D)(D² + L²/4)
This expression can be used to find the unknown concave radius.
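This is the familiar sagitta relation and is easy to evaluate; a sketch (the spherical convex case of Fig. 13.5 reuses the same relation with X = t − D in place of D):

```python
def radius_from_sagitta(D, L):
    """Radius of an arc from its sagitta (depth) D measured over a
    chord of length L: R = (D^2 + L^2/4) / (2*D)."""
    return (D * D + L * L / 4.0) / (2.0 * D)

# Example: a 40-mm chord with a measured depth of 10 mm
R = radius_from_sagitta(10.0, 40.0)   # R = 25.0 mm
```

This lets odd radii, which no standard radius gauge covers, be checked with nothing more than a depth micrometer and a known chord.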
Fig. 13.5 Measurement of spherical convex radius: (a) Set-up, (b) Schematic, with X = (t − D)
of a convenient size is kept on the spherical convex radius. D can be found from the depth micrometer reading.
ΔCAB and ΔCDB are right-angled triangles, in which
CA² = CB² + AB²
where CA = the unknown convex radius R,
CB = CE − BE = R − X,
and AB = L/2, where L is the length of the depth micrometer base.
∴ R² = (R − X)² + (L/2)²
Solving, we get
R = (1/2X)(X² + L²/4)
or, since X = t − D,
R = [1/2(t − D)][(t − D)² + L²/4]
Fig. 13.6 Measurement of cylindrical convex radius: (a) Set-up, (b) Schematic
Review Questions
As the size of the part under inspection increases, and the required measurement resolution shrinks,
both data volumes and data rates will increase dramatically. This raw data must be converted into useful
information to facilitate process control and defect reduction.

Fig. 14.1 Instrument overlaps: microscopy (visual and digital assessment), roundness and cylindricity instruments, sharp-stylus (roughness) instruments, and coordinate measuring machines (CMMs)

To accomplish this, metrology data must be integrated into factory and enterprise-level information systems so that it may be associated
both with other data and with wafer-tracking information. The manner in which metrology integra-
tion occurs will be greatly influenced by the implementation of advances in technology. These include
(1) introduction of advanced proximity correction and phase-shift mask technology; (2) the ramp of
193 nm, 157 nm, and next-generation lithography; (3) integration of copper and low-k interconnect
processes; and (4) the shift from 200-mm to 300-mm wafers in high-volume production. One form of
metrology integration is found in advanced process control (APC). APC applies model-based process
control to reduce process variation, reduce send-ahead and tool monitor wafers, shorten learning cycles
and response-times, enable better tool-matching in high-volume production, improve overall equip-
ment effectiveness, shorten development times, and ease process transfer from pilot line to factory. In
this chapter, we shall try to discuss some of the advanced measuring machines.
Figure 14.2 shows the first length-measuring machine, made in 1908. This instrument had the important features of a constant measuring force and an error correction using a standard curve to compensate for pitch errors of the spindle, whose reading was taken with a vernier scale of 1/10,000 mm.
The original idea behind the unit was to produce a precision universal measuring machine which was easy to use and could be employed for checking the inside and outside dimensions of parts, measuring instruments and gauges. This idea took on physical form in 1897, when Carl Mahr presented his model 300 precision measuring machine—a machine which today looks like a steel rocking horse with a steering wheel but which in those early days offered a resolution of 0.001 mm (39.4 µin), quite unparalleled at that time. The originally intended tasks—namely, testing parts and monitoring gauges—have remained the same. The only thing that has changed is the way they are accomplished: today, for example, the 828 model developed by the Mahr company, as shown in Fig. 14.3 (Plate 13), employs computer-aided technology to acquire measured values, perform automatic nominal/actual comparisons
and employ resolutions down to 0.01 µm (.39 µin). These innovations are all the result of some years’
experience in making good ideas work.
These machines mainly consist of a rigid bed; a universal measuring table (floating or fixed) is needed for external or internal measurements, e.g., checking of rings, bores or internal threads. They are equipped with mechanisms which enable quick and fine table adjustment. For self-centering of a job, one pair of support blocks or one symmetrical clamping device is provided, as shown in Fig. 14.4 (b). The machine consists of a fixed supporting carriage (head) and a movable measuring carriage (head). The distance between them is adjustable and depends upon the specifications of the model of machine under consideration. Generally, the measuring-head travel is 100 mm. A holder for a dial indicator is also provided.
It uses different types of probes for specific measurements. The probe shown in Fig. 14.5 (a) is used to perform internal measurements on plain test-pieces from 1.5-mm diameter (0.059 in) upwards. It consists of a 1-mm (0.039 in) ruby ball, holder, and serration grip.
The probe shown in Fig. 14.5 (b) is used for high-precision measurement of internal threads with
exchangeable measuring anvils. The calipers shown in Fig. 14.5 (c) enable us to perform internal measure-
ments on plain test-pieces. The figure shows a pair consisting of a left-hand and a right-hand caliper.
• High measuring accuracy obtained by precise mechanics, such as parallelism of the probe supporter
of ± 1 µm (40 µin), together with up-to-date measuring equipment (See Fig. 14.6, Plate 14)
Fig. 14.4 Accessories of universal measuring machines: (a) centres with clamping device, (b) clamping jaws with clamping screw and base-locking screws, (c) extension, (d) universal measuring table plate with holder, clamping device, ring gauge and base plate with support prism
• Depending on the requirements of the measuring task, different display units are used, for example, digital dial indicators, analog dial indicators, as well as inductive or incremental probes
• Easy structure of the unit allows a precise performance of the measuring procedure and a fast
adaptation to new measuring tasks
• Due to serration grip, a quick and easy accessory exchange is ensured
• Computer support is provided for acquiring, processing, logging, and transmitting measurement data
• Operating reliability and comfort by linking both measuring systems so as to make all information
available on a single screen
• Reliability in complying with documentation requirements through the automatic adoption, stor-
age, and ISO-compliant printout/logging of all relevant measurement data
• Universal application through a generous selection of accessories
• Form stability through a sturdy machine base of hard granite
• Adjustable measuring force for matching to the size and shape of the test-piece—measuring re-
sults are thus unaffected by subjective influences
• Easy change of measuring direction
• High resistance to wear through carbide-reinforced measuring surfaces
The terms ‘numerical control’ and ‘digital readout’ have been applied to the many devices developed
for measuring coordinate dimensions on a workpiece. The workpiece is held in a fixture and a probe
is brought in contact with the work surface to be measured. Either the workpiece or the probe is
held on a movable table or arm and the reading is recorded in the readout section of the control
device. Two and three-axis machines are available. A two-axis machine usually registers the vertical
displacement of the probe. A three-axis machine records both of these as well as transverse hori-
zontal motion.
Numerical control inspection is most commonly applied to the inspection of odd-shaped contours,
which cannot be easily measured by other means. Since it is a relatively slow process, it is not competi-
tive with automatic gauging devices or other conventional methods for the inspection of easily mea-
sured dimensions.
(Figure: data-processing program; machine stand/optical vibration isolator with auto-leveling function; controller; joystick box)
instruments, it is the least significant bit (Reference: ANSI B-89.1.12). The workpiece weight is the
mass of the workpiece being measured.
Coordinate measuring machines may have manual control, CNC control or PC control. Manual
control implies that machine positioning is operator controlled. The operator physically moves the
probe along the axis to make a contact with the part surface and record the measurement (digital read-
outs). A Computer Numerical Control (CNC) may also control machine positioning. PCs, or personal
computers, also control machine positioning in some coordinate measuring machines. The PC records
the measurements made during the inspection and performs various required calculations. Automatic
measuring machines may involve one or more types of gauging devices.
Operation for a coordinate measuring machine can be achieved through an articulated arm,
bridge, cantilever, gantry or horizontal arm. An articulated arm is very common for portable, or
tripod-mounted-style machines. The articulating arm allows the probe to be placed in many differ-
ent directions. In bridge-style machines, the arm is suspended vertically from a horizontal beam that
is supported by two vertical posts in a bridge arrangement. The machine x-axis carries the bridge,
which spans the object to be measured. In cantilever-style machines, a vertical arm is supported
by a cantilevered support structure. Gantry style machines have a frame structure raised on side
supports so as to span over the object to be measured or scanned. Gantry machines are similar in
construction to bridge-style designs. In horizontal arm machines, the arm that supports the probe
is horizontally cantilevered from a movable vertical support. As a result, this style is sometimes referred to as a cantilever type.
(Figure: measuring range along the X and Y axes)
Coordinate measuring machines can have one of several mounting options. They include bench
top, free standing, handheld and portable. Manufacturers may use these terms interchangeably. Probe
systems for CMMs can be touch probe or discrete point, laser triangulation, camera or still and video
camera. A multi-sensor coordinate measuring machine has capabilities to mount more than one sensor,
camera, or probe at a time. Figure 14.8 shows the Crysta-Apex C 776 and 7106 models of Mitutoyo along with
specifications in the tables.
2. Correct Plus (Data Feedback System) The Correct Plus system was developed by Mitutoyo. After
measuring components mass-produced by a machining centre, the Correct Plus system feeds the
compensation data, calculated from the measurement result and the nominal value, back to the
machining centre. This data feedback system maintains and improves the accuracy of
processing.
• Allows construction of a production system with improved accuracy and reduced ratio of defec-
tive parts.
• If the NC program is partially corrected prior to processing, there is no need to make a correction
during processing. Accordingly, Correct Plus promises worry-free operation.
• Two types of systems are available, according to the type of production system:
1. Manual Feedback System The operator decides whether or not to give feedback on correc-
tion data.
A wide variety of CMMs, for inspection of small-sized components up to complete car-body
profile measurement, is commercially available in the market. Figure 14.9 (Plate 15) shows a
new, horizontal arm-type CMM inspecting the profile of the car body, and Fig. 14.10 ( Plate 15) shows
a CNC CMM, which provides a huge measuring range.
Common applications for coordinate measuring machines include dimensional measurement, pro-
file measurement, angularity or orientation, depth mapping, digitizing or imaging, and shaft measure-
ment. Features common to CMMs include crash protection, offline programming, reverse engineering,
shop-floor suitability, SPC software and temperature compensation.
A scanning probe gathers data by passing over a target surface within its working range. As the probe scans the surface, it transmits
a continuous flow of data to the measurement system. Scanning contact probes may use linear variable
differential transformer ( LVDT ) or optoelectronic position sensing.
Fig. 14.12 Three-dimensional probe (schematic cross section): 1. Stylus 2. Connecting nut 3. Outer clamping spacer 4. Diaphragm flexure 5. Middle spacer 6. Triad 7. Shell 8. LVDT support (bottom) 9. LVDT support (top) 10. LVDT coils 11. LVDT cover (X shown) 12. LVDT core 13. LVDT cover (Z)
Fig. 14.14 CMM probe and stylus
Fig. 14.15 Stylus and EWL
Effective Working Length (EWL) is the penetration that can be achieved by any ruby ball stylus
before its stem fouls against the feature. Generally, the larger the ball diameter, the greater the EWL
(refer Fig. 14.15 ).
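As a rough geometric sketch (an illustrative assumption, not a relation stated in the text), let Db be the ball diameter and Ds the stem diameter. The radial ball/stem clearance available on each side is

c = (Db − Ds)/2

so, for a given stem, a larger ball increases the clearance and lets the stylus penetrate deeper before the stem fouls the feature, i.e., it increases the EWL.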
3. Choosing Styli for Scanning The choice of scanning styli depends on the scanning application
and the type of scanning probe used. Use a stylus which has the same
diameter as the finished cutting tool used to produce the part. Keep the stylus as short as pos-
sible to prevent excessive bending, but ensure that the stylus is long enough to prevent scanning
on the shank.
A. Ruby Ball Styli These are suitable for the majority of probing applications. They incorpo-
rate highly spherical industrial ruby balls. Ruby is an extremely hard ceramic material, and hence the
wear of stylus balls is minimized. It is also of low density which keeps the tip mass to a minimum.
This avoids unwanted probe triggers caused by machine motion or vibration. Ruby balls are available
mounted on a variety of materials including non-magnetic stainless steel, ceramic and carbide, to
maintain stiffness over the total range of styli.
Fig. 14.16 Types of styli
B and C. Star Styli These can be used to inspect a variety of different features. Using star styli
to inspect the extreme points of internal features such as the sides or grooves in a bore, minimizes
the need to move the probe, due to their multi-tip probing capability. Each tip on a star stylus requires
datuming in the same manner as a single ball stylus.
D. Pointer Styli These should not be used for conventional XY probing; they are designed for
the measurement of thread forms, specific points and scribed lines (to lower accuracy). Radius-end
pointer styli allow more accurate datuming and probing of features, and can also be used to inspect
the location of very small holes.
376 Metrology and Measurement
E. Ceramic Hollow Ball Styli These are ideal for probing deep features and bores in X, Y and
Z directions with the need to datum only one ball. In addition, the effects of very rough surfaces can
be averaged out by probing with such a large diameter ball.
F. Disc Styli These ‘thin sections’ of a large sphere are usually used to probe undercuts and
grooves. Although probing with the ‘spherical edge’ of a simple disc is effectively the same as probing
on or about the equator of a large stylus ball, only a small area of this ball surface is available for con-
tact. Hence, thinner discs require careful angular alignment to ensure correct contact of the disc surface
with the feature being probed. A simple disc requires datuming on only one diameter (usually in a ring
gauge) but limits effective probing to only X and Y directions.
Adding a radius end roller allows you to datum and hence probe in the Z direction, provided the
centre of the ‘radius end roller’ extends beyond the diameter of the probe. The radius end roller can
be datumed on a sphere or a slip gauge. Rotating and locking the disc about its centre axis allows the
‘radius end roller’ to be positioned to suit the application.
The disc may also have an M2 threaded centre to allow the fixing of a centre stylus, giving the addi-
tional flexibility of probing the bottom of deep bores (where access for the disc may be limited).
G and H. Cylinder Styli These are used for probing holes in thin sheet material, probing vari-
ous threaded features and locating the centres of tapped holes. The ball-ended cylinder styli allow full
datuming and probing in X, Y and Z directions, thus allowing surface inspection to be carried out.
I. Stylus Extensions These provide added probing penetration by extending the stylus away
from the probe. However, using stylus extensions can reduce accuracy due to loss of rigidity.
J. Tool Datuming Styli The tolerances to which tools can be set depend upon the flatness and
parallelism of the stylus tip to the machine axis. Fine adjustment is provided on all probes and probe
holders to allow these settings to be achieved. Where rotating tools are to be datumed for diameter, the
tools must be rotated in reverse to the cutting direction.
This accuracy requirement can only be met through precision optics design and by using the latest
imager technology to obtain the necessary 3-D image resolution and speed. Figure 14.17 schematically
illustrates how laser vision works. The range image is acquired in real time through optical laser trian-
gulation, profile after profile at a rate of 100 to 1000 profiles per second. This can be done by moving
the camera with the processing machine, for example, in the case of inspection of weld-joints, where
the camera is moved along with the welding torch over the joint or the weld bead, and the system builds
a complete contour of the joint or bead over the complete scanned length. This 3-D contour digitiza-
tion is the most efficient method to detect minute weld defects and to acquire enough information to
track the joint at a speed of 1 to 20 metres/minute, which is compatible with the laser-welding process
speed.
Fig. 14.17 Working of laser vision
Fig. 14.18 3-D image of a weld
Traditionally, this technology has been used for joint tracking and adaptive welding. Beginning
in the mid-1990s, the technology was directed to developing the capability to measure pre-weld
joint fit up as well as finished weld inspection. The early applications were in the automotive and
mining/construction industries. Figure 14.18 presents the 3-D image of a weld taken by a laser
vision system. The laser system is really a sophisticated profiler capable of geometric measure-
ment as well as defect identification. Software library templates have been developed for unwelded
joints, partially welded joints, and finished welds. The system can be programmed by inputting the
applicable weld standard (API 1104, AWS D1.1) requirements such as root openings and included
groove angles for an unwelded joint, and items like groove weld width, convexity, and toe entry
angle for a completed joint.
Laser vision sensors can automate visual inspection of pipes and tubes, help ensure the reliabil-
ity of automatic ultrasonic testing, and make it easier to observe trends. Pipeline welding requires
the utmost attention to detail throughout every phase of manufacturing, starting with material
preparation right through to final inspection. Although automation has entered this world in the
form of mechanized welding systems and semi-automated radiographic and ultrasonic testing, the
human factor is still very much a part of these operations. Two of the most important steps in the
process are joint fit up and visual weld inspection. Laser vision sensing can help improve these
operations.
Automotive manufacturing is a very demanding industry for factory automation because of the
extremely high volumes per year, resulting in short cycle times, minimal preventive maintenance
and a need to have a high uptime. Laser vision systems have been used for years, mainly in conjunction
with robots, to do seam finding and seam tracking on components ranging from chassis to body. More
recently, laser vision systems are being used for real-time process control and measurement of welding
processes ranging from arc to laser. The following are examples of the use of laser vision cameras
in the automotive industry.
It consists mainly of the following subsystems (see Fig. 14.25, Plate 18):
i. Compact 3D-CMM Z-column and machine base are made of rigid granite. All axes are guided by
high-precision roller bearings. The table is made of a special aluminum featuring low mass and high rigidity.
ii. Drive System DC-servo drives with precision, backlash-free, centre-mount ball screws.
vii. High-Accuracy Version High-accuracy MS models with improved length measurement uncer-
tainty, based on incremental linear scales of 0.1 µm resolution and the volumetric error correction (CAA).
viii. Illumination Computer-controlled fibre-optic light sources for on-axis (top) and back light.
Computer-controlled 4-quadrant LED ring light.
1. Optical Sensor High-resolution CCD camera, digital image processing with gray-scale evaluation, automatic sub-pixel edge detection, automatic filter routines, multi-window technology, high-speed focus, opto-electronic 2-step zoom; 0.5-µm resolution, 0.12-µm accuracy, 0.05-µm repeatability, 0.1-s/point average measuring time (test conditions: 20× LWD lens). (See Fig. 14.21, Plate 16.)
2. Touch Probe (optional) Touch probe system TPS with Renishaw touch trigger probe TP6 and
integrated automatic probe changer PAC. (Measuring range in x will be reduced by 50 mm). Additional
probe systems available and rotary tables are optional. (See Fig. 14.22, Plate 17.)
3. Laser Probe (optional) It has a fast laser auto-focus for static measurements and is used for fast
and very accurate focusing; it can also detect 3D points on surfaces 10 times faster than video focus.
(See Fig. 14.23, Plate 17.)
• Offset free measurements between optical sensor and laser sensor
• Submicron accurate measurements in milliseconds
• Scanning and digitizing of free-form surfaces
• 0.5-µm resolution, 1-µm accuracy, 0.2-s avg. focus time, 500 points/s scanning rate
The laser scan software allows use of this sensor for non-contact 2D contour scanning with evaluation
of geometrical elements. High-resolution 3D scanning and display of surfaces are additional applica-
tions. This kind of topography is often used for fast measurement and digitizing of free-form surfaces.
(See Fig. 14.24, Plate 18.)
Laser vision sensors can both automate the visual inspection process and ensure that the automatic
UT process is reliable. The non-contact nature of the sensor makes the process robust in this
tough manufacturing environment. This automatic inspection method removes the subjectivity inherent
with any manual process and eliminates the arguments between production and quality control. The
results speak for themselves and do not require any interpretation—one simply sets the tolerance limits
and the system says “go” or “no go”. The other advantage of automatic visual inspection is the abil-
ity to observe trends and implement targeted process improvement efforts. In addition, the inspection
can examine 100% of the qualitative parameters of the job under consideration, and the results (both data and
image) are archived digitally, thus eliminating paper.
Mass production has brought with it the need for inspection methods that can keep pace with fast
production. Sampling, of course, helps to speed up the inspection process, but often this is not fast
enough and frequently 100 per cent inspection is necessary. As a result, many types of automatic inspec-
tion devices have been developed.
One manufacturer is using the Flexcell/AW system to verify the quality of several critical welds on a heavy-duty
frame. The system consists of a robot and camera system which is located in the same station as the robots
welding the cross members to the side rail. The inspection robot follows up and measures these welds as
well as a couple of welds made in the previous station that cannot be seen by the operator. The system is
set up to stop the production line and signal a technician if a defective weld is found. If a marginal weld is
found, the condition is logged for tracking and continuous improvement purposes. Features being mea-
sured include weld size, contour and attributes such as porosity and undercut.
Another benefit of the system is its ability to measure the location of the cross member while it is
inspecting. This is very helpful when one is trying to identify where the variation in fit-up or part location is
coming from when the weld shows variation. This information can then be used by the people responsible
for tooling, to improve the detailed parts or fixture.
Basic adaptive controls consist of a deviation indicator or sensor, which can be electrical, mechanical,
pneumatic, or fluidic; a feedback system, usually electronic or fluidic for speed; and a correcting unit.
The deviation indicator monitors the workpiece periodically or continuously and senses whether or not
it is within some preset limits. If the dimension being monitored falls outside these limits, the sensor
relays the information through the feedback system to the correcting unit. The correcting unit then
adjusts the position of the workpiece or tool in such a way so as to eliminate the defect in future parts.
In those cases where corrective action cannot be taken, the sensor sends its feedback into the warning
system which stops the machine and/or activates bells or lights.
Adaptive controls can also interface with a computer system, which constantly analyzes the
output of a machine and produces statistical information on the process as it is being performed.
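The monitor–feedback–correct cycle described above can be sketched as follows. This is a hypothetical illustration: the function name, limits and dimensions are invented for the example, not taken from any real controller API.

```python
# Hypothetical sketch of the adaptive-control cycle described above: a
# deviation sensor checks each workpiece dimension, and when it falls outside
# the preset limits the correcting unit offsets the tool to cancel the
# deviation in future parts. All names and numbers are invented.

def adaptive_control(measured_dims, nominal, lower_limit, upper_limit):
    """Return the running tool offset recorded after each measurement."""
    offset = 0.0
    offsets = []
    for dim in measured_dims:
        produced = dim + offset               # dimension with the current offset applied
        if not (lower_limit <= produced <= upper_limit):
            # correcting unit: shift the tool so future parts come out nominal
            offset += nominal - produced
        offsets.append(round(offset, 4))
    return offsets

# Nominal 10.00 mm, limits 9.95-10.05 mm; the process drifts upward:
dims = [10.01, 10.03, 10.06, 10.07, 10.08]
print(adaptive_control(dims, 10.00, 9.95, 10.05))  # [0.0, 0.0, -0.06, -0.06, -0.06]
```

Note how the offset is applied only once the monitored dimension leaves the preset band, exactly as the deviation-indicator/correcting-unit description above requires.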
D = L1 − L2 − DK
diameters ensures a fast and comprehensive workpiece assessment. The instrument construction,
the selection of completely new carbon-fiber reinforced plastics and the development of essential
compensation methods and measuring strategies contribute to an unparalleled measuring accuracy
in the production area of ±0.1 µm (≈ ±4 µin). Due to this accuracy, ring gauges, plug gauges and
gauge blocks can be checked in addition to all kinds of industrial products. The Formtester MFU8
D is ideally suited for all applications that are time-critical and demand high precision, such as the
direct, rapid and reliable on-site check of precision workpieces. The Formtester MFU8 D also solves
measurement tasks in the limit ranges of mechanical length metrology. Figure 14.31 explains the
working principle of the Formtester.
• Measuring capacity and the range of accessories for universal form measurements corresponding
to those of the Formtester MFU 8
• Form and diameter measurements in one and the same chucking with an accuracy of ±0.1 µm
(≈ ±4 µin)
Review Questions
1. Explain the concept of instrument overlapping.
2. What do you mean by metrology integration?
3. Explain the applications of universal measuring machine.
4. Discuss the use of numerical control for measurement.
5. Discuss the working of a three-dimensional measuring probe.
‘Measurements should be made to produce the data needed to draw meaningful conclusions
from the system under test…’
Prof. S. M. Umrani, Member Management Committee, V.I.I.T., Pune, India
MEASUREMENT SYSTEMS
The progress of measurement systems in industry took place largely in the 1930s. With the growth of continuous manufacturing, the need for continuous measurement of various process variables like temperature, pressure, vibration, force, torque, strain, etc., became a need of the time. Thus, measurement science is the foundation of efficient industrial processing and manufacturing. The measurement of a given quantity is essentially an act or the result of comparison between the quantity and a predefined standard. In this modern world, there are widespread applications of measurement systems in various fields, viz., automobiles, residential appliances, war weapons, satellites, etc. Thus, the technology of using instruments to measure and control the physical and chemical properties of materials is called instrumentation.
The measuring process is the one in which the property of an object or a system under consideration
is compared to an acceptable standard unit.
For a measurement to be meaningful, the following three basic things are required:
i. The standard used for comparison purposes must be accurately defined and should be commonly
accepted.
ii. The apparatus used and method adopted must be provable.
iii. The numerical measure is meaningless unless followed by the unit used.
• As science and technology move ahead, new phenomena and relationships are discovered and
these advances make new types of measurements imperative.
• New discoveries are not of any practical utility unless results are backed by actual measurements.
• The measurements not only confirm the validity of hypothesis but also add to its understanding.
• This results in an unending chain, which leads to new discoveries that require newer and more
sophisticated measurement techniques.
• Science and technology are associated with sophisticated methods of measurement.
• Measurement plays a significant role in achieving goals and objectives of engineering because of
feedback information supplied by them.
i. Direct Methods The information that may be available sometimes indicates the progress of
the process in a very simple way involving a direct relation. In direct measurement, the meaning of the
measurement and the purpose of the processing operation are identical. Such direct measurements are
generally accomplished by simple mechanical means.
In direct methods of measurement, the unknown quantity is directly compared against a standard
and the result is expressed as a numerical number and a unit (for example, consider an example of col-
lecting 1 litre of water from a tank. In this example, the meaning of the measurement of volume and
the purpose of the collecting operation, both are same, i.e., collecting 1 litre of water). Direct methods
are quite common for the measurement of quantities like length, mass and time. As the human factor is
involved in direct measurement, it may not necessarily be very accurate, and the sensitivity obtained
is less. Hence, direct methods are not preferred and are rarely used for precise measurement.
ii. Indirect Methods As direct measurement is not always possible, an indirect measurement
technique, involving a derived relationship between the measured quantity and the desired result is
adopted. In indirect measurement, the meaning of the measurement and the purpose of the processing
operation are not the same, but they are related to each other. The modern trend in the indirect methods
of measurement is to go for electrical methods which offer possibilities of high speed of operation,
simpler processing of the measurand and adaptation of computer processing as well. The important
aspects of indirect methods are that these methods are comparatively more accurate and have high
sensitivity. Equivalent output is obtained indirectly against a standard, and therefore, these methods are
common and are preferred for measurement of quantities like temperature, level, flow, etc.
Consider an example of pasteurizing milk. This operation is monitored by noting the temperature
of the milk. Here, the temperature measurement is indirect because the purpose of operation is to pas-
teurize the milk, i.e., to remove the bacteria that may damage the milk, and the meaning of measurement
here is to measure the milk temperature. But note that the extent of pasteurization depends upon the
temperature of the milk. In this example, direct measurement would be the bacteria count.
A measuring instrument is simply a device for determining or ascertaining the value of some particular
quantity or condition. The value determined by the instrument is generally, but not necessarily, quan-
titative. A measuring instrument may be required to indicate, record, register, signal or perform some
operations on the value it has determined. Measuring instruments are classified based upon the mode
by which they indicate any change in the quantity to be measured or based on the source of power or
by their function or by construction.
2. Secondary Instruments These instruments are so constructed that the quantity being
measured can only be measured by observing the output indicated by the instrument, e.g., voltmeter,
thermometer, pressure gauge, etc. These instruments are calibrated by comparison against absolute
instruments. These instruments are commonly used, as they give direct readings, and they find use
in almost every sphere of measurement.
2. Manual Instruments These instruments require manual assistance for their functioning,
e.g., a resistance thermometer with Wheatstone’s bridge indicator requires manual adjustment of the
null point to get the corresponding temperature reading.
2. Power-operated These instruments require external power supply for their functioning. This
power may be in the form of electricity or compressed air or hydraulic supply.
2. Recording Type These instruments continuously make a written record of the values of the
measured quantity against some other variable like time, e.g., if the furnace is cooled and these cooling
temperatures are sensed by a recording-type temperature-measuring instrument then the plot or graph
of the furnace temperature against time is produced by the instrument.
It is possible and desirable to describe the operation of a measuring instrument or a system in a gener-
alized manner without resorting to intricate details of the physical aspects of a specific instrument or
system. The whole operation can be described in terms of functional elements.
Most measurement systems contain the following functional elements:
1. Primary Sensing Element This element first receives the energy from the measured
medium and utilizes it to produce a condition representing the value of the measured variable. The
quantity under measurement makes its first contact with the primary sensing element of the mea-
surement system. This act is then immediately followed by the conversion of the measurand into an
analogous electrical signal. This work is done by a device that converts a physical quantity into an
electrical quantity, termed a transducer. The first stage of a measurement system is known as the
detector-transducer stage.
2. Variable Conversion Element This element converts the condition produced by the
primary element into the condition useful for functioning of the instrument. The output of the pri-
mary sensing element may be an electrical signal of any form. It may be voltage, frequency, current,
change in resistance or some other electrical parameter. For the instrument to perform the desired
function, it may be necessary to convert this output to some suitable form while preserving the
information content of the original signal. For example, suppose the output of the primary sensing
element is an analog signal and the next stage of the system may accept the signal only in the digital
form. Then an analog-to-digital converter is used to convert the signal into the desired form. Many
instruments do not need any variable conversion element, while others need more than one variable
conversion element.
Variable Manipulation Element This element performs certain operations on the condition
produced by the secondary element. It manipulates the signal presented to it, preserving the original
nature of the signal. Manipulation here means only a change in the numerical value of the signal. For
example, an electronic amplifier accepts a small signal as input and produces an output signal which is
also a voltage but of greater magnitude. It is not necessary that a variable manipulation element should
follow the variable conversion element, as shown in Fig. 15.1. It may precede the variable conversion
element. In case the voltage is too high, attenuators are used which lower the voltage or power for
the subsequent stage of the system. This element represents the parts used for indicating, recording,
signaling, registering or transmitting the measured quantity. The process of variable conversion and
manipulation is called signal conditioning.
Fig. 15.1 Block diagram: sensor/transducer → output of primary sensing element converted to a suitable form → amplification or attenuation of the converted signal → data transmitted if the display is at a remote location
3. Data Transmission Element When the elements of an instrument are physically separated,
or when the primary element is far away from the secondary element, it becomes necessary to
transmit data from one element to another. The element that performs this function is called a data
transmission element.
Example Spacecraft are physically separated from the earth where the control stations guiding their
movements are located. Therefore, control signals are sent from these stations to the spacecraft by
telemetry system using radio signals.
Introduction to Measurement Systems 391
4. Data Presentation Element The information about the quantity under measurement
must be displayed in an intelligible form to the personnel or the system for monitoring, control or
analysis purposes. This function is performed by the data presentation element. To monitor data, visual display
devices are required; these may be analog or digital. In case data is to be recorded, recorders
like magnetic tapes, plotters, printers, x-y or y-t recorders and digital storage oscilloscopes may be used.
Using the functional elements we can measure any physical parameter.
Example Suppose we are measuring weight. The function elements will remain the same as shown
in Fig. 15.1. Figure 15.2 shows the block diagram for weight measurement. In this case, the primary sensing
element used is the load cell, which is connected to the platform where we will put weights. Weight is
the measurand; load cell is the transducer. When the weight is kept on the platform, it will exert force
on the load cell. The output of the load cell is in millivolts. So voltage proportional to the weight is
produced. Voltage is amplified which is calibrated in terms of weight and given to the conversion ele-
ment. The conversion element here is an analog-to-digital converter. The converted data is given to the
display, which is a digital display. As the display is located in the system, there is no need of the data
transmission element.
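The chain just described can be sketched as a toy model. Every constant below is an assumed, illustrative value, not a real load-cell or ADC specification:

```python
# Toy model of the weight-measurement chain of Fig. 15.2: load cell (primary
# sensing) -> amplifier (variable manipulation) -> A/D converter (variable
# conversion) -> digital display (data presentation). All constants assumed.

LOAD_CELL_MV_PER_KG = 0.2   # transducer output in millivolts per kilogram (assumed)
AMPLIFIER_GAIN = 250        # voltage gain of the manipulation stage (assumed)
ADC_FULL_SCALE_MV = 5000.0  # ADC input range in millivolts (assumed)
ADC_COUNTS = 4096           # 12-bit analog-to-digital converter

def measure_weight(weight_kg):
    mv = weight_kg * LOAD_CELL_MV_PER_KG                 # primary sensing element
    amplified_mv = mv * AMPLIFIER_GAIN                   # variable manipulation element
    counts = round(amplified_mv / ADC_FULL_SCALE_MV * (ADC_COUNTS - 1))  # conversion
    # data presentation: scale counts back to kilograms using the calibration
    kg_per_count = ADC_FULL_SCALE_MV / (ADC_COUNTS - 1) / (LOAD_CELL_MV_PER_KG * AMPLIFIER_GAIN)
    return counts * kg_per_count

print(round(measure_weight(50.0), 2))  # displays a reading very close to 50 kg
```

The small residual error between the input weight and the displayed value comes from the quantization step of the converter, which is one reason converter resolution matters in the conversion element.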
The knowledge of the performance characteristics of an instrument is essential for choosing the most
suitable instrument for specific measurement. Measurement system characteristics are divided into two
categories, viz., (i) static characteristics, and (ii) dynamic characteristics.
These characteristics give a meaningful description of the quality of measurement. The static char-
acteristics are concerned with the measurement of quantities that are constant or vary slowly with time,
whereas, the dynamic characteristics are concerned with rapidly varying quantities.
The performance characteristics that describe an instrument measuring quantities which are constant or vary only slowly are called static characteristics. Normally, static characteristics of a measurement system are those that must
be considered when the system or instrument is used to measure a condition not varying with respect
to time.
1. Accuracy Accuracy of the instrument may be defined as its ability to respond to a true value of
a measured variable under reference conditions. In other words, it can also be explained as the closeness
with which an instrument reading approaches the true value of the quantity being measured. Moreover,
the accuracy of measurement means conformity to the truth. The accuracy of an instrument may be
expressed in different ways, viz., in terms of the measured variable itself, span of the instrument,
upper-range value, per cent of scale length, or actual output reading.
Overall Accuracy For the instruments composed of separate physical units like primary, secondary,
manipulation, etc., overall accuracy is expressed by combining individual accuracies of different elements.
For a pressure spring thermometer having a bulb-capillary system accuracy of ±0.5% and a
Bourdon pressure gauge accuracy of ±1%, the overall accuracy can be expressed by combining the two.
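A common convention for combining the individual accuracies (an assumption here — texts variously use worst-case addition or the root-sum-square) can be sketched as:

```python
# Hedged sketch: two common conventions for combining the accuracies of the
# separate elements (bulb-capillary system +/-0.5%, Bourdon gauge +/-1%).
# Which convention a given text adopts varies, so both are shown.
import math

def overall_accuracy_rss(*accuracies_percent):
    """Root-sum-square combination of individual element accuracies."""
    return math.sqrt(sum(a * a for a in accuracies_percent))

def overall_accuracy_worst_case(*accuracies_percent):
    """Pessimistic combination: simple addition of individual accuracies."""
    return sum(accuracies_percent)

print(round(overall_accuracy_rss(0.5, 1.0), 2))   # 1.12 (i.e., about +/-1.12%)
print(overall_accuracy_worst_case(0.5, 1.0))      # 1.5 (i.e., +/-1.5%)
```

The root-sum-square figure is smaller because it assumes the individual element errors are independent and unlikely to all peak together.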
as far as measurement is concerned, there is a difference between the two terms (accuracy and
precision), as they have sharp differences in meaning.
3. Repeatability When an instrument is subjected to a certain fixed, known input, and if instru-
ment readings are noted consecutively by approaching the measurement from the same direction under
the same operating conditions then the closeness of all these readings for the same input represents
repeatability of the instrument.
4. Sensitivity The sensitivity of the instrument denotes the smallest change in the value of a
measured variable to which the instrument responds. In other words, sensitivity denotes the maximum
change in an input signal (measured variable) that will not initiate a response on the output (indication),
e.g., a thermometer sensitivity of 1°C means the thermometer output (response) would change only
if the temperature around it changes by 1°C. Any changes in temperature less than 1°C are not indicated
by this thermometer. Therefore, the static sensitivity of an instrument is the ratio of the magnitude of
the output signal or the response to the magnitude of the input signal or quantity being measured. Its
units depend upon the type of input and output, e.g., count per volt, millimetre per microampere, etc.
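In practice, the static sensitivity can be estimated from calibration data as the slope of output against input. The thermocouple-style data below are assumed, illustrative values, not real calibration results:

```python
# Estimating static sensitivity from calibration data as the slope of output
# versus input (output units per input unit). Data values are assumed.

def static_sensitivity(inputs, outputs):
    """Least-squares slope of output against input."""
    n = len(inputs)
    mean_x = sum(inputs) / n
    mean_y = sum(outputs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
    den = sum((x - mean_x) ** 2 for x in inputs)
    return num / den

temps_c = [0, 25, 50, 75, 100]      # input: temperature in degrees Celsius
emf_mv = [0.0, 1.0, 2.0, 3.0, 4.0]  # output: emf in millivolts
print(static_sensitivity(temps_c, emf_mv))  # 0.04 mV per degree Celsius
```

A least-squares slope is used rather than a single two-point difference so that scatter in the calibration readings averages out.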
6. Drift Drift is an undesirable quality in industrial instruments because it is rarely apparent. The
gradual shift in the indication or record of the instrument over an extended period of time, during
which the true value of the variable does not change, is referred to as drift. Different kinds of drift are
explained below.
a. Zero Drift If the whole calibration gradually shifts by the same amount due to slippage, or due to
undue warming up of electronic tube circuits, zero drift sets in. Zero setting can prevent this (i.e., by shift-
ing the pointer position). The input–output characteristics with zero drift are shown in Fig. 15.3 (a).
b. Span Drift or Sensitivity Drift If there is proportional change in the indication all along
the upward scale, the drift is called span drift or sensitivity drift. Hence, higher calibrations get shifted
more than the lower calibrations. The characteristics with span drift are shown in Fig. 15.3 (b).
c. Zonal Drift In case the drift occurs only over a portion of span of an instrument, while the
remaining portion of the scale remains unaffected, it is called zonal drift.
There are many environmental factors which cause drift. They may be stray electric/magnetic fields,
thermal emfs, changes in temperature, mechanical vibrations, wear and tear, and high mechanical stresses
developed in some parts of the instruments and systems.
394 Metrology and Measurement
Fig. 15.3 Input–output characteristics with (a) zero drift and (b) span drift, compared with the nominal characteristics
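The zero-drift and span-drift behaviours can be sketched with a toy linear instrument model; the sensitivity and drift values below are invented for illustration:

```python
# Sketch: zero drift vs span drift for an ideal linear instrument
# (output = sensitivity * input).

def indicated(value, sensitivity=2.0, zero_drift=0.0, span_drift=0.0):
    """Zero drift shifts every reading by the same amount; span drift
    changes the slope, so higher calibrations shift more than lower ones."""
    return (sensitivity + span_drift) * value + zero_drift

inputs = [0.0, 10.0, 20.0]
print([indicated(x, zero_drift=1.0) for x in inputs])  # every reading shifted equally
print([indicated(x, span_drift=0.1) for x in inputs])  # shift grows with input
```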
7. Dead Zone It is defined as the largest change of the input quantity for which there is no output from
the instrument. For example, the input applied to the instrument may not be sufficient to overcome
friction, in which case the output will not change at all; it changes only when the input produces
a driving force that can overcome the friction forces.
8. Linearity A linear input–output relationship is desirable for two main reasons:
i. The conversion from a scale reading to the corresponding measured value of the input quantity is
most convenient if one merely has to multiply by a fixed constant.
ii. When the instrument is part of a large data or control system, the linear behaviour of the part
often simplifies the design and analysis of the whole system.
Thus, linearity can be defined as the measure of the maximum deviation of a calibration point from
a straight line. Figure 15.4 shows the actual calibration curve, i.e., a relationship between input–output
and a straight line drawn from the origin using the method of least squares.
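This definition can be sketched numerically: fit a least-squares line to calibration data and report the maximum deviation from it. The calibration readings below are invented, and for simplicity this sketch fits both slope and intercept rather than forcing the line through the origin as in Fig. 15.4:

```python
# Sketch: non-linearity as the maximum deviation of calibration points
# from a least-squares straight line.

def least_squares_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def max_deviation(xs, ys):
    """Largest gap between a calibration point and the fitted line."""
    m, c = least_squares_line(xs, ys)
    return max(abs(y - (m * x + c)) for x, y in zip(xs, ys))

xs = [0, 1, 2, 3, 4]
ys = [0.0, 1.1, 1.9, 3.2, 4.0]   # slightly non-linear readings
print(round(max_deviation(xs, ys), 3))  # maximum deviation from the line
```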
9. Resolution or Discrimination If the input is slowly increased from some arbitrary (non-zero)
input value, it will again be found that the output does not change at all until a certain increment
is exceeded. This increment is called the resolution or discrimination of the instrument. Thus, the
smallest increment in the input (the quantity being measured) which can be detected with certainty by an
instrument is its resolution or discrimination. Resolution therefore defines the smallest measurable input
change, while the threshold defines the smallest measurable input itself.
Fig. 15.4 Actual calibration curve: the actual input–output curve, the idealized straight line, and the maximum deviation between them
10. Threshold If the instrument input is increased very gradually from zero, there will be some
minimum value below which no output change can be detected. This minimum value defines the threshold
of the instrument. The first detectable output change is often described as any 'noticeable
measurable change'. This phenomenon is due to input hysteresis.
11. Hysteresis Hysteresis is a phenomenon which depicts different output effects when loading
and unloading, whether it is a mechanical system or an electrical system, and for that matter, any system.
Hysteresis is the non-coincidence of loading and unloading curves. Consider an instrument which has
no friction due to sliding parts. When the input of this instrument is slowly varied from zero to full
scale and then back to zero, its output varies as shown in Fig. 15.5(a).
Hysteresis in a system arises due to the fact that all the energy put into the stressed parts when load-
ing is not recoverable upon unloading. This is because the second law of thermodynamics rules out any
perfectly reversible process in the world.
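The non-coincidence of the loading and unloading curves can be quantified as the largest gap between them at the same input; the readings below are invented for illustration:

```python
# Sketch: hysteresis as the maximum gap between loading and unloading
# readings taken at the same input values.

loading   = {0: 0.0, 25: 24.0, 50: 49.0, 75: 74.5, 100: 100.0}
unloading = {0: 1.0, 25: 26.5, 50: 51.0, 75: 76.0, 100: 100.0}

def max_hysteresis(load, unload):
    """Largest difference between the two curves at the same input."""
    return max(abs(unload[x] - load[x]) for x in load)

print(max_hysteresis(loading, unloading))  # 2.5
```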
12. Static Calibration In general, calibration is defined as a process in which the measurand
is compared with the known standard. The static calibration refers to a situation in which all inputs
(desired, interfering, modifying) except one are kept at constant values. Then the one input under study
is varied over some range of constant values. The input–output relations developed in this way comprise
a static calibration valid under stated constant conditions of all the other inputs. The calibration of all
Fig. 15.5 Hysteresis effect: (a) hysteresis when the measurement is from zero onwards; (b) hysteresis when the measurement starts on the positive and negative sides
instruments is important since it affords the opportunity to check the instrument against a known stan-
dard and subsequently to find errors and accuracy.
1. Speed of Response It is the rapidity with which an instrument responds to changes in the
input (measured quantity). Instruments rarely respond instantaneously to changes in the measured
variable; there is some time lag between the change in input and the initiation of the change in
output, and the speed at which the output changes is usually smaller than the speed at which the
input changes:
i. Retardation Type In this case, the response of the measurement system begins immediately
after a change in the measured quantity has occurred.
ii. Time-delay Type In this case, the response of the measurement system begins after a dead
time following the application of the input.
3. Fidelity Fidelity of an instrument is the degree of closeness with which a measurement system
responds (i.e., indicates or records) to changes in the measured variable. Thus, fidelity represents how
closely the instrument reading follows the actual value of the measured quantity.
4. Dynamic Error It is the difference between the true value of the quantity (under measurement)
changing with time and the value indicated by the measurement system if no static error is assumed. It
is also called measurement error.
Since errors are unwanted entities in any measurement process, it is imperative to interpret the results
of quantitative measurement in an intelligent manner. An understanding and thorough evaluation of
errors is essential. A study of errors is the first step in finding ways to reduce them. Errors may arise
from different sources and are usually classified as
1. Gross error
2. Systematic error
3. Random error
1. Gross Error Gross error mainly covers human mistakes in reading instruments, and record-
ing and calculating measurement results. The observer may grossly misread the scale. For example, a
person may, due to oversight, read the temperature as 32.5°C while the actual reading may be 22.5°C.
He or she may transpose the reading while recording. For example, the person may read 28.5°C and
record it as 25.5°C. As long as human beings are involved, some gross errors will definitely be com-
mitted. Complete elimination of gross errors is probably impossible. One should try to anticipate and
correct them. Gross errors may be of any amount and, therefore, their total elimination is mathemati-
cally impossible. However, they can be avoided by adopting two means—(a) great care should be taken
in reading and recording the data, and (b) two, three or more readings should be taken for the quantity
under measurement. It is always advisable to take a large number of readings as a close agreement
between readings assures that no gross error has been committed.
2. Instrumental Errors Instrumental errors are inherent in instruments because of their mechanical structure. They may be
due to construction or calibration of instruments. Errors may be caused because of friction, hysteresis
or even gear backlash. It is possible to eliminate static errors or at least reduce them to a great extent
by understanding the procedure of measurement. Calibration against standards may be used for the
purpose, correction factors should be applied after determining the instrumental errors and the instru-
ment may be recalibrated carefully.
3. Misuse of Instruments The errors caused in measurements are sometimes due to the fault of
the operator rather than that of the instrument. A good instrument used in an unintelligent way may give
erroneous results. Examples of such misuse include failure to adjust the zero of the instrument,
poor initial adjustments, using leads of too high a resistance, etc. These errors can be
eliminated by handling the instrument in a proper manner and by following the manufacturer's
instructions.
4. Error due to Loading Effects One of the most common types of errors committed by
beginners is the improper use of an instrument for measurement. In measurement system, we deal
with both electrical and mechanical quantities and elements, and hence the loading effect may occur
on account of both electrical and mechanical elements. The loading effects are due to impedances of
various elements connected in a system.
5. Random Errors These errors have unknown or non-determinable causes which can be
treated mathematically using the laws of probability. These errors may be due to improper instru-
ment design, insufficient process parameters, and/or may be due to insufficient knowledge of pro-
cess parameters.
After understanding some basic aspects of measurement systems and its components, we discuss
different types of transducers, intermediate devices and terminating devices in the next chapter.
Review Questions
Intermediate and modifying devices are used to amplify, attenuate, modulate, filter or
otherwise modify the input signal into a format which will be acceptable to the output device….
…. M J Khurjekar, Professor, E & TC, V.I.I.T., Pune.
ELECTRONIC
INSTRUMENTATION SYSTEM
An electronic instrumentation system consists of a number of components to perform a measurement
and record its results. As explained in the earlier chapter, a generalized measurement system consists
of three major components — an input device, a signal-conditioning or processing device, and an output
device. The input device receives the measurand, or the quantity under measurement, and delivers a
proportional or analogous electrical signal to the signal-conditioning device, where the signal is
amplified, attenuated, filtered, modulated, or otherwise modified into a format acceptable to the
output device.
16.1 TRANSDUCERS
The input quantity for most instrumentation systems is a ‘Non-electrical Quantity’. In order to use elec-
trical methods and techniques for measurement, manipulation or control, the non-electrical quantity is
generally converted into an electrical form by a device called a ‘transducer’. It can be defined as a device
which, when actuated, transforms energy from one form to another. Broadly speaking, a transducer is a
device that transforms one type of energy into another. For example, a battery (chemical energy
converted into electrical energy) and an ordinary glass thermometer (heat energy converted into the
mechanical displacement of a liquid column) are both transducers. Devices which convert mechanical
force into an electrical signal form a very large and important group of transducers commonly used in the industrial
instrumentation area. Many other physical parameters such as heat, intensity of light, flow rate, liquid
level, humidity and pH value may also be converted into electrical form by means of transducers. These
transducers provide an output signal when stimulated by a mechanical or a non-mechanical input: a
photoconductive cell converts light intensity into a change of resistance, a thermocouple converts heat energy
into an electrical voltage, a force produces a change of resistance in a strain gauge, an acceleration produces
a voltage in a piezoelectric crystal, and so on. In all cases, however, the electrical output is measured by
standard methods, giving the magnitude of the input quantity in terms of an analogous output.
1. Operating Principle Transducers are many times selected on the basis of the operating
principles used by them. The operating principles used may be resistive, inductive, capacitive, optoelec-
tronic, piezoelectric, etc.
3. Operating Range The transducer should maintain the range requirements and have a good
resolution over its entire range. The rating of the transducer should be sufficient so that it does not
break down while working in its specific operating range.
4. Accuracy A high degree of accuracy is assured if the transducer does not require frequent cali-
bration and has a small value for repeatability. It may be emphasized that in most industrial applications,
repeatability is of considerably more importance than absolute accuracy.
5. Cross Sensitivity Cross sensitivity is a further factor to be taken into account when measur-
ing mechanical quantities. These are situations where the actual quantity being measured is in one plane
and the transducer is subjected to variations in another plane. More than one promising transducer
design has had to be abandoned because its sensitivity to variations of the measured quantity in a plane
perpendicular to the required plane was such as to give completely erroneous results when the transducer
was used in practice.
6. Errors The transducer should maintain the expected input–output relationship described by its
transfer function so as to avoid errors.
7. Transient and Frequency Response The transducer should meet the desired time
domain specifications like peak overshoot, rise time, settling time and small dynamic error. It should
ideally have a flat frequency response curve. In practice, however, there will be cut-off frequencies, and
higher cut-off frequency should be high in order to have a wide bandwidth.
8. Loading Effects The transducer should have a high input impedance and a low output imped-
ance to avoid loading effects.
11. Usage and Ruggedness The ruggedness, both mechanical and electrical, of a transducer
versus its size and weight must be considered while selecting a suitable transducer.
12. Electrical Aspects The electrical aspects that need consideration while selecting a trans-
ducer include the length and type of cable required. Attention also must be paid to signal-to-noise ratio
in case the transducer is to be used in conjunction with amplifiers. Frequency response limitations must
also be taken into account.
13. Stability and Reliability The transducer should exhibit a high degree of stability during
its operation and storage life. Reliability should be assured in case of failure of a transducer
in order that the functioning of the instrumentation system continues uninterrupted.
14. Static Characteristics Apart from low static error, the transducers should have a low
non-linearity, low hysteresis, high resolution and a high degree of repeatability. The transducer
selected should be free from load alignment effects and temperature effects. It should not need
frequent calibration, should not have any component limitations, and should be preferably small in
size.
1. Primary and Secondary Transducers In a typical pressure-measurement arrangement, pressure
is converted into a displacement by a Bourdon tube, and the displacement is then converted into an
analogous voltage by an LVDT. The Bourdon tube is called a primary transducer and the LVDT a
secondary transducer.
2. Active and Passive Transducers may be classified according to whether they are active
or passive. Active transducers are those which do not require an auxiliary power source to produce
their output; they are also known as self-generating, since the energy required for producing the
output signal is obtained from the physical quantity being measured. For this reason they tend to be
compact. Passive transducers, on the other hand, derive the power required for transduction from an
auxiliary power supply; they are also known as externally powered transducers. They derive only part
of the power required for conversion from the physical quantity under measurement, and the auxiliary
power supply must be accounted for in the overall size.
3. On the Basis of Transduction Form Used The transducer can also be classified on the
basis of the principle of transduction as resistive, inductive, capacitive, etc., depending upon how they
convert the input quantity into resistance, inductance or capacitance respectively. They can be classified
as piezoelectric, thermoelectric, magnetostrictive, electrokinetic and optical transducers.
4. Analog and Digital Transducers The transducers can be classified on the basis of the
output, which may be a continuous function of time, or the output may be in discrete steps.
Analog transducers convert the output quantity into an analog output, which is a continuous func-
tion of time. Thus a strain gauge, an LVDT, a thermocouple or a thermistor may be called ‘analog
transducers’ as they give an output which is a continuous function of time. And digital transducers
convert the input quantity into an electrical output, which is in the form of pulses.
(a) Photoresistors Photoresistors are variable resistors. When the light shining on them increases
in intensity, their resistance is lowered. Working photoresistors into your circuits will allow you to detect
changes in lighting. For example, you could build a circuit to beep if someone turned on your room
lights.
(b) Thermistors Thermistors are also variable resistors; however, instead of being sensitive to light,
they are sensitive to temperature. There are two types of thermistors, viz., positive temperature co-
efficient (PTC) thermistors and negative temperature coefficient (NTC) thermistors. The resistance of
a PTC device increases as temperature increases, while the resistance of an NTC device decreases as
temperature increases.
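The text gives only the qualitative behaviour; a common quantitative model, not from the text, is the Beta-parameter equation R(T) = R0·exp[B(1/T − 1/T0)]. The r0, t0_c and beta values below are typical datasheet figures for an assumed 10-kΩ, B = 3950 part:

```python
# Sketch: the Beta-parameter model often used for NTC thermistors.
import math

def ntc_resistance(temp_c, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """R(T) = R0 * exp(B * (1/T - 1/T0)), with temperatures in kelvin."""
    t = temp_c + 273.15
    t0 = t0_c + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

print(round(ntc_resistance(25.0)))          # 10000 at the reference temperature
print(ntc_resistance(50.0) < ntc_resistance(25.0))  # True: NTC resistance falls as T rises
```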
(d) Piezoelectric Devices Piezoelectric devices contain special crystals. These crystals will pro-
duce a voltage if pressure is applied to them in one direction. The crystals will also bend if voltage is
applied to them. A common use of piezoelectric devices is ‘buzzers’, which produce a buzzing noise
when a voltage is applied.
(e) Light-Emitting Diodes Light emitting diodes, or LEDs as they are usually called, generate
light when a current is passed through them.
(f) Capacitors Capacitors store electrical charge. This charge is built up along one of the capac-
itor’s two plates, and is released when there is a short between the plates. Capacitance is measured in
farads. One farad is an enormous quantity of charge. Most capacitors are much smaller, in the micro
and picofarad range. A capacitor can be charged almost instantly if its leads are connected directly to a
power supply; the charging time can be increased by adding a resistor between the power supply
and the capacitor.
During charging, the charge follows q = qeq[1 − e−t/(RC)], where qeq is the equilibrium (fully
charged) value and RC is a time constant equal to the time required for the capacitor to accumulate
63.2% of its equilibrium charge. In addition to releasing its charge through a short, a capacitor may
also lose its charge by 'leaking' after it is completely charged. This process can be slowed by connecting
a resistor across the two leads of the capacitor; the larger the resistor, the longer the discharge takes.
The formula for discharging is q = qinitial e−t/(RC).
Finding total capacitance is the opposite of finding total resistance: for capacitors in series, the total
is 1 over the sum of the reciprocals of the individual capacitances, i.e., Ctotal = 1/(1/C1 + 1/C2 + …);
for capacitors in parallel, the values simply add.
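The charging formula and the series/parallel rules above can be sketched as follows; the component values are illustrative:

```python
# Sketch: RC charging fraction and series/parallel capacitor combination.
import math

def charge_fraction(t, r, c):
    """Fraction of the equilibrium charge reached after t seconds."""
    return 1.0 - math.exp(-t / (r * c))

def series_capacitance(caps):
    """Series: reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / c for c in caps)

def parallel_capacitance(caps):
    """Parallel: the values simply add."""
    return sum(caps)

r, c = 1_000.0, 100e-6                        # 1 kohm, 100 uF -> RC = 0.1 s
print(round(charge_fraction(r * c, r, c), 3)) # 0.632 after one time constant
print(series_capacitance([100e-6, 100e-6]))   # 5e-05  (half of either value)
print(parallel_capacitance([100e-6, 100e-6])) # 0.0002 (the sum)
```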
Transducers can be, and are, manufactured on many different operating principles—resistive,
inductive, capacitive, piezoelectric, etc. Miniature accelerometers for the measurement of high-range
dynamic acceleration forces, for example, are usually constructed with piezoelectric sensing elements
because of the resulting small size and weight, and the self-generated electrical output. Similarly,
when some special aspect of the application requires it, capacitive or inductive sensors may be used.
The bonded metallic-resistance strain gauge, however, because of its unique set of operational char-
acteristics, has easily dominated the transducer field for the past twenty years or so.
[Figure: eddy-current displacement probe, showing the reference coil, the active coil, magnetic field lines, and a target of conductive material]
impedance of the coil is non-linear and temperature dependent. Fortunately, a balance coil can com-
pensate for the temperature effect. As for the non-linearity, careful calibrations can ease its drawback.
It cannot be used for detecting the displacement of nonconductive materials or thin metalized films.
However, a piece of conductive material with sufficient thickness can be mounted on nonconductive
targets to overcome this drawback. A self-adhesive aluminum-foil tape is commercially available for this
purpose. However, this practice is not always possible. Calibration is generally required, since the shape
and conductivity of the target material can affect the sensor response.
Displacement is a fundamental variable whose measurement is involved in many other physical param-
eters such as velocity, acceleration, force, torque, etc. When measurement is direct, it gives displacement
directly but when indirect methods are used, information regarding the other associated variables like
force, velocity, acceleration, vibration, torque, etc., can also be obtained.
The displacement is sensed by the primary sensing element, and the output of the primary sensing
element is given to the data manipulation system; so if the output is a weak signal then it is amplified
using the data-manipulation element. The output of the data manipulation element is converted to an
appropriate form by a data-conversion element for indication after processing and calibration.
3. Optical Measurements Optical methods use photo-detectors, which yield the output ulti-
mately in an electrical quantity like current, voltage, etc.
The transducers used for displacement measurement are (i) potentiometers, (ii) LVDT, (iii) capaci-
tance type, (iv) digital transducer, and (v) nozzle-flapper transducer.
signal in phase with the excitation signal. As the core travels to the right of the centre, the primary coil
is more tightly coupled to the right secondary coil, creating an output signal 180 degrees out-of-phase
with the excitation voltage.
[Figure: LVDT coil assembly with movable core; the output signal is taken across the secondary coils]
Signal and its Conditioning Many sensors used in process control and monitoring applica-
tions generate a current signal, usually 4 mA to 20 mA or 0 mA to 20 mA. Current signals are some-
times used because they are less sensitive to errors such as radiated noise and voltage drops due to lead
resistance. Signal-conditioning systems must convert this current signal to a voltage signal. To do this
easily, pass the current signal through a resistor, as shown in Fig. 16.3.
Then with the help of a DAQ system, the voltage VO = ISR that is generated across the resistor,
where IS is the current and R is the resistance, is measured. Select a resistor value that gives a usable
range of voltages, and use a high-precision resistor with a low temperature coefficient. For example, a
249-ohm, 0.1%, 5-ppm/°C resistor converts a 4 mA to 20 mA current signal into a voltage signal that
varies from 0.996 V to 4.98 V.
Fig. 16.3 Current-output device: the loop current IS develops VMEAS = IS R across the sense resistor R
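The conversion in Fig. 16.3 can be sketched as follows. The 249-Ω sense resistor is from the text; the 0–100 °C span of the hypothetical sensor in `loop_to_engineering` is an assumption for illustration only:

```python
# Sketch: converting a 4-20 mA current-loop signal to a voltage and then
# to engineering units via a sense resistor, as in Fig. 16.3.

R_SENSE = 249.0          # ohms, from the text

def current_to_voltage(i_amps, r=R_SENSE):
    """V = I * R across the sense resistor."""
    return i_amps * r

def loop_to_engineering(v_meas, lo=0.0, hi=100.0, r=R_SENSE):
    """Map the 4 mA .. 20 mA span linearly onto lo .. hi units."""
    v4, v20 = 0.004 * r, 0.020 * r
    return lo + (v_meas - v4) * (hi - lo) / (v20 - v4)

print(current_to_voltage(0.004))   # ~0.996 V at 4 mA
print(current_to_voltage(0.020))   # ~4.98 V at 20 mA
print(loop_to_engineering(current_to_voltage(0.012)))  # ~50.0 at mid-scale
```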
Limitations The core must contact the measured surface directly or indirectly, which is not
always possible or desirable (although a non-contact thickness gauge can be achieved by including a
pneumatic servo to maintain the air gap between the nozzle and the workpiece). Dynamic measure-
ments are limited to no more than 1/10 of the LVDT resonant frequency, which in most cases results
in a 2-kHz frequency cap.
Applications Although the LVDT is a displacement sensor, many other physical quantities can be
sensed by converting displacement to the desired quantity through thoughtful arrangements. Several
examples can be given, viz., extensometers, temperature transducers, butterfly-valve control, and
servo-valve displacement sensing. Measurement of the deflection of beams, strings, or rings, as well as
load cells, force transducers and pressure transducers, are discussed in detail in Chapters 17, 19, and 21
respectively. Measurement of thickness variation of workpieces (Fig. 16.4) can be done by using
dimension gauges, thickness and profile measurements, and product sorting by size.
Fluid level measurement can be done using LVDT by position sensing in hydraulic cylinders.
Fig. 16.4 Profile gauge (LVDT riding on a workpiece)   Fig. 16.5 Fluid-level gauge (LVDT with a float)
A typical measurement system consists of individual sensors with necessary data acquisition and signal-
conditioning, multiplexing, data conversion, data processing, data handling and associated transmis-
sion, storage, and display systems. In order to optimize the characteristics of a system in terms of
performance, handling capacity and cost, the relevant sub-systems may often be combined. The analog
data is generally acquired and converted to digital form for the purposes of processing, transmission,
display and storage.
Processing of data may consist of a large variety of operations from simple comparison to compli-
cated mathematical manipulations. It can be for such purposes as collecting information (averages, sta-
tistics, etc.), converting the data into a useful form (e.g., calculation of efficiency of a prime mover from
speed, power input and torque developed), using data for controlling a process, performing repeated
calculations to separate out signals buried in noise, generating information for displays and a variety of
other goals. Data may be transmitted over long distances (from one location to another) or short dis-
tances (from a test centre to a nearby computer). The data may be displayed on a digital panel meter or
as a part of a cathode ray tube (CRT) presentation. The same may be stored in either raw or processed
form, temporarily (for immediate use) or permanently (for later reference).
Fig. 16.6 Generalized data-acquisition system: transducers 1 to n feed signal conditioners 1 to n under program control
The operating environment divides data acquisition systems into two categories, viz., those suited to favorable environments
(minimum radio frequency interference and electromagnetic induction) and those intended for hostile
environments. The former category may include, among others, laboratory instrument applications,
test systems for gathering long-term drift information on zeners, high-sensitivity calibration tests, and
research or routine investigations, such as ones using mass spectrometers and lock-in amplifiers. In
these, the system designers’ tasks are oriented more towards making sensitive measurements rather
than to the problems of protecting the integrity of the analog data. The second category specifically
includes measurements protecting the integrity of the analog data under the hostile conditions. Situ-
ations of this nature arise in industrial process control systems; aircraft control systems, turbovisory
instrumentation in electrical power stations, and a host of other measurements to be carried out under
industrial environments.
Measurements under hostile conditions often require devices capable of wide temperature-range
operation, excellent shielding and redundant paths for critical measurements, and considerable pro-
cessing of the digital data. In addition, digital conversion of the signal at early stages, thus making full
use of high-noise immunity of digital signals, as well as considerable design effort in order to reduce
common mode errors and avoidable interferences, can also enhance performance and increase reli-
ability. On the other hand, laboratory measurements are conducted over narrower temperature ranges,
with much less ambient electrical noise, employing high sensitivity and precision devices for higher
accuracies and resolution. The preservation of an appropriate signal-to-noise ratio may still have to be
achieved with due emphasis on design and measurement techniques. The important factors that decide
the configuration and the sub-system of the data acquisition system are the following:
1. Resolution and Accuracy The resolution desired for a measurement is often governed by
the overall accuracy required from the system and is typically three to five times better than the desired
accuracy value. The resolution obtainable from a measurement is not only dependent on the resolution
that the measuring device is capable of, but also on the relative time stability of the measurand itself.
When a time-varying stationary parameter is under observation, improvement in stability, and in turn
resolution, is possible by statistical averaging of the measured values. Accuracy being the closeness with
which a measured value agrees with a specified standard, absolute accuracy can always be brought into
a system which has sufficient stability and linearity, by providing a calibration facility. Once the system
has been calibrated, accuracy impairments will depend on the stability of the system variants, such as
gain stability and reference stability. Since the resolution with which a measurement can be made often
decreases for higher measurement rates, for the same cost, the need for a specific resolution desired has
to be examined with great care and full understanding of the requirement.
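The claim that statistical averaging of a stationary measurand improves resolution can be sketched with invented repeated readings; the standard error of the mean falls as 1/√N:

```python
# Sketch: averaging repeated readings of a stationary measurand.
# The readings are invented; the standard error of the mean shrinks
# as 1/sqrt(N) relative to the scatter of a single reading.
import math
import statistics

readings = [10.2, 9.9, 10.1, 9.8, 10.0, 10.3, 9.7, 10.0]

mean = statistics.mean(readings)
sdev = statistics.stdev(readings)              # scatter of one reading
stderr = sdev / math.sqrt(len(readings))       # uncertainty of the mean

print(round(mean, 3), round(sdev, 3), round(stderr, 3))
```

Here eight readings with a scatter of 0.2 units yield a mean whose uncertainty is only about 0.07 units, illustrating the resolution gain from averaging.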
3. Sampling Rate per Channel When the sample rates desired from a specific number of
channels are lower by a factor of two or more, it may be possible to employ sub-commutation in order
to reduce the effective number of channels that have to be scanned at the highest rate.
5. Cost
This article first explains a general overview of signal-conditioning and then discusses some of the
converter technologies. Experienced users of signal-conditioning systems may skip the introductory
part and refer directly to the critical technologies section.
Signal-conditioning is one of the most important and most overlooked components of a data-
acquisition system. With it, we can bring real-world signals into our digitizer. Many sensors require
special signal-conditioning technology, and no instrument has the capability to provide all types
of signal-conditioning to all sensors. For example, thermocouples produce very low-voltage sig-
nals, which require amplification, filtering and linearization. Other sensors, such as strain gauges
and accelerometers, require power in addition to amplification and filtering, while other signals may
require isolation to protect the system from high voltages. No single instrument can provide the
flexibility required to make all of these measurements. However, front-end signal-conditioning can
combine the necessary technologies to bring these various types of signals into a single data acquisi-
tion system.
Not all signal-conditioning requirements/options are equal. Most choices are non-intelligent, parallel-
in/parallel-out configurations that offer the bare minimum of functionality for a select few signals
or sensor types. However, for computer-based measurement and automation, we want a system
designed to take advantage of the latest PC-based data acquisition and instrumentation technologies.
This system should have programmable input settings, the ability to be automatically detected by your
computer, and tight integration with your software to handle scaling and channel management. The
system under consideration should offer all of the conditioning technologies that are needed, proof of
their accuracy, and the capability to take advantage of the advances in high-speed digitizers.
examples of basic characteristics and conditioning requirements of some common transducers. All of
these preparation technologies are forms of signal-conditioning.
Because of the vast array of signal-conditioning technologies, the role and need for each technology
can quickly become confusing. Therefore, we’ve provided a list of common types of signal-conditioning,
their functionality, and examples of when you need them.
1. Amplification When the voltage levels being measured are very small, amplification is used to
maximize the effectiveness of the digitizer. By amplifying the input signal, the conditioned signal
uses more of the effective range of the analog-to-digital converter (ADC) and enhances the accuracy
and resolution of the measurement. Typical sensors that require amplification are thermocouples and
strain gauges.
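The effect of amplification on effective resolution can be sketched numerically. This is an illustrative example, not from the text: the ADC range, bit depth, and gain below are assumed values.

```python
# Illustrative sketch: how amplification lets a small signal use more of the
# ADC's input range, improving input-referred resolution.
# ADC range, bit depth and gain are assumed example values.

def lsb_size(full_scale_volts: float, bits: int) -> float:
    """Voltage represented by one least significant bit of the ADC."""
    return full_scale_volts / (2 ** bits)

ADC_RANGE = 10.0   # assumed 0-10 V input range
BITS = 12          # assumed 12-bit converter

# A millivolt-level thermocouple signal digitized directly:
raw_lsb = lsb_size(ADC_RANGE, BITS)                # about 2.44 mV per step

# The same signal amplified by x1000 before the ADC: the LSB referred
# back to the input shrinks by the gain factor.
GAIN = 1000.0
amplified_lsb = lsb_size(ADC_RANGE, BITS) / GAIN   # about 2.44 uV per step

print(f"direct: {raw_lsb*1e3:.3f} mV/LSB, with x1000 gain: {amplified_lsb*1e6:.3f} uV/LSB")
```

With the assumed gain of 1000, the input-referred step shrinks by the same factor, which is why low-level sensors such as thermocouples are amplified before conversion.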
Intermediate Modifying and Terminating Devices 413
3. Isolation Voltage signals well outside the range of the digitizer can damage the measurement
system and harm the operator. For that reason, isolation is usually required in conjunction with attenu-
ation to protect the system and the user from dangerous voltages or voltage spikes. Isolation may also
be required when the sensor is on a different ground plane from the measurement system (such as a
thermocouple mounted on an engine).
4. Multiplexing Typically, the digitizer is the most expensive part of a data-acquisition system.
By multiplexing, we can sequentially route a number of signals into a single digitizer, thus achieving a
cost-effective way to greatly expand the signal count of your system. Multiplexing is necessary for any
high-channel-count application.
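The trade-off behind multiplexing is simple to state: the digitizer's aggregate rate is shared among the scanned channels. A minimal sketch with assumed example numbers:

```python
# Sketch: with one digitizer multiplexed across N channels, the aggregate
# sample rate is shared, so each channel is scanned more slowly.

def per_channel_rate(aggregate_rate_hz: float, channels: int) -> float:
    """Maximum scan rate available to each channel of a multiplexed digitizer."""
    return aggregate_rate_hz / channels

# An assumed 500 kS/s digitizer shared across 16 channels:
rate = per_channel_rate(500_000, 16)
print(f"{rate:.1f} samples/s per channel")  # prints 31250.0 samples/s per channel
```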
5. Filtering Filtering is required to remove unwanted frequency components from a signal, pri-
marily to prevent aliasing and reduce signal noise. Thermocouple measurements typically require a
lowpass filter to remove power line noise from the signals. Vibration measurements normally require an
antialiasing filter to remove signal components beyond the frequency range of the acquisition system.
6. Excitation Many sensors, such as RTDs, strain gauges, and accelerometers, require some form
of power to make a measurement. Excitation is the signal-conditioning technology required to provide
this power. This excitation can be a voltage or current source, depending on the sensor type.
7. Linearization Some types of sensors produce voltage signals that are not linearly related to
the physical quantity they are measuring. Linearization, the process of interpreting the signal from the
sensor as a physical measurement, can be done either with signal-conditioning or through software.
Thermocouples are the classic example of a sensor that requires linearization.
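When linearization is done in software, a common approach is to interpolate a calibration table that maps sensor output to the physical quantity. The sketch below uses made-up placeholder table values, not real thermocouple data:

```python
# Illustrative software linearization: map a sensor voltage to a physical
# value by piecewise-linear interpolation of a calibration table.
# The table entries are made-up placeholders, not real thermocouple data.
from bisect import bisect_left

# (voltage_mV, temperature_C) calibration pairs, assumed monotonic in voltage
TABLE = [(0.0, 0.0), (1.0, 24.0), (2.1, 50.0), (4.1, 100.0)]

def linearize(mv: float) -> float:
    """Interpolate the calibration table; clamp outside the table range."""
    xs = [v for v, _ in TABLE]
    i = bisect_left(xs, mv)
    if i == 0:
        return TABLE[0][1]
    if i == len(TABLE):
        return TABLE[-1][1]
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (mv - x0) / (x1 - x0)

print(linearize(1.55))  # midway between the 24 C and 50 C entries -> 37.0
```

A production system would use a denser table or the published polynomial for the specific thermocouple type, but the principle, replacing the raw nonlinear reading with an interpolated physical value, is the same.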
9. Simultaneous Sampling When it is critical to measure two or more signals at the same
instant in time, simultaneous sampling is required. Front-end signal-conditioning can provide a much
more cost-effective simultaneous sampling solution than purchasing a digitizer for each channel. Typi-
cal applications that might require simultaneous sampling include vibration measurements and phase-
difference measurements.
414 Metrology and Measurement
1. Integration The ability of a signal-conditioning system to integrate easily with the rest of
your system is a must. Your system should be modular, thus giving you the ability to
choose the types of signal-conditioning necessary for your system. It is also critical to have a system
that accommodates mixed signal types. For example, the system should be able to connect currents,
high voltages, various sensors, analog outputs, digital I/O, and switching all into the same platform.
2. Calibration One of the most critical technologies that a signal-conditioning system should
possess is the ability to be easily and accurately calibrated. Most measurement devices are calibrated at
the factory, but the accuracy immediately starts to drift with time and temperature changes. To make the
most accurate measurements possible, it is necessary to periodically calibrate the entire data acquisition
system. If the system has precision onboard voltage references, the operator can adjust the measure-
ment system to compensate for temperature changes. In addition, you must have access to external
calibration services to keep your system performing up to the manufacturer’s specifications year after
year. It is very important to learn the calibration process for any signal-conditioning system under
consideration because that is the only way to ensure that your investment contains the technology
needed to make accurate and reliable measurements.
4. Switching In today’s demanding test environments, the ability to route signals easily through-
out a measurement system is a technology that can lead to huge improvements in test times. As an
example, consider a case where a unit under test (UUT ) must be subjected to four separate mea-
surements in the testing process. Without the proper technology, the UUT must be reconnected to
each different measurement device for each test. Nowadays with state-of-the art switching technol-
ogy, the operator can not only route the UUT leads automatically to each measurement device in
turn, but can also test several UUTs at the same time. Thus, we can achieve more efficient use of
the test equipment, faster test times, and less user intervention. The selection of a signal-conditioning system that offers this technology can have a huge impact on the overall performance of
the system.
5. Isolation Another important technology to consider is isolation. When we are measuring sig-
nals that either are high-voltage signals or are subject to voltage spikes, it is critical that those signals are
isolated from the rest of your system. Inadequate isolation compromises the safety of the operator, as
well as the integrity of the entire data-acquisition system. When determining the isolation requirements
of your system, it is imperative to have reliable and accurate isolation specifications, including both a
safe working voltage rating and an installation rating.
7. Bandwidth In addition to being expandable, a system should also have the bandwidth to handle
the data throughput from a high-channel-count system. The bandwidth should also be high enough
to accommodate future growth in channel count. System bandwidth is typically expressed in samples/
second (Hz). To determine the minimum necessary bandwidth of the system, we should multiply the
total number of expected channels by the maximum sampling rate needed on an individual channel.
For a high-channel-count system, the required bandwidth for a modest acquisition rate can quickly
reach several hundred kHz. Bandwidth is an often overlooked, but extremely important, technology to
consider when selecting a signal-conditioning system.
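The sizing rule just stated can be written out directly. The channel count and rate below are assumed example figures:

```python
# Sketch of the sizing rule above: minimum system bandwidth is the expected
# channel count times the fastest per-channel sampling rate.

def min_bandwidth(channels: int, max_rate_per_channel_hz: int) -> int:
    """Minimum aggregate bandwidth (samples/s) for the whole system."""
    return channels * max_rate_per_channel_hz

# e.g., 128 channels at a modest 1 kS/s each already needs 128 kS/s aggregate
print(min_bandwidth(128, 1_000))  # prints 128000
```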
8. Software A large portion of the total cost of a test and measurement system is application
development. To keep application development costs to a minimum, software tools must be used that
maximize system productivity. The signal-conditioning system should be designed to integrate tightly
with these software tools. Only with the capability to fully control the signal-conditioning system under
consideration, can the software application take full advantage of the latest technologies in computer-
based measurement and automation.
conditioning system will poll the hardware, report which equipment is present, and provide a software
interface for setting up all signal-conditioning settings. It should be possible to configure channels
through software, assign channel names, and scale readings to engineering units.
the output of the divider is a ratio of the amplifier output voltage to the excitation voltage. An alternative
method, as shown in Fig. 16.8 is to feed the bridge-excitation voltage as an external reference voltage
for the analog-to-digital (A/D) converter in which the conversion factor is inversely proportional to the
reference voltage. The system sensitivity is then independent of the fluctuations in bridge-excitation
voltage.
Fig. 16.8 Bridge (R1–R4) with instrumentation amplifier, buffer amplifiers and ADC
Fig. 16.9 Logarithmic compression: (a) 12-bit BCD conversion, input resolution varying from 0.7 μV to 700 μV; (b) ×1000 amplification to the ADC (100 mV range), input resolution 100 μV
This cannot be achieved without loss of performance elsewhere. For example, while the log amplifier can enhance the resolution at low inputs, at high inputs (99.9 mV) the resolution is definitely poorer. At this input, one least significant bit (LSB) change in the output of the ADC can occur only if the input is decreased to 99.2 mV, i.e., an equivalent resolution of only 700 μV. (This would have been uniformly 100 μV without log conversion.) The log conversion in effect thus distributes the resolution on a 'percentage of reading' basis as against a 'percentage of full scale' as with direct A/D conversion. Such conditioning can be advantageous in systems possessing an output relationship involving the logarithm of the measurand or where a moderate-accuracy measurement (≈ 1%) is desired over a wide range (1 : 10⁵).
Since the log function is inherently unipolar, other types of compression can be employed when
handling bipolar inputs. A particular case of interest is the sinh⁻¹ function, which can be obtained using
complementary logarithmic transconductors.
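The 'percentage of reading' behaviour of log compression can be checked numerically. This sketch assumes an illustrative 1000:1 input range and a 10-bit converter after the log amplifier; these parameters are not taken from the text, though the resulting steps come out near the 0.7 μV and 700 μV figures quoted above:

```python
# Sketch (assumed parameters): logarithmic compression spreads ADC resolution
# on a "percentage of reading" basis. Quantizing log(v) with a fixed step
# makes the input-referred step proportional to the reading itself.
import math

V_MIN, V_MAX = 100e-6, 100e-3     # assumed 1000:1 input range (100 uV - 100 mV)
BITS = 10                          # assumed ADC resolution after the log amp
STEPS = 2 ** BITS

def input_referred_step(v: float) -> float:
    """Change in input needed to move the log-compressed output by one LSB."""
    log_span = math.log(V_MAX / V_MIN)
    # one LSB in the log domain multiplies v by exp(log_span / STEPS)
    return v * (math.exp(log_span / STEPS) - 1.0)

lo = input_referred_step(V_MIN)   # finest resolution at the bottom of the range
hi = input_referred_step(V_MAX)   # coarsest at full scale
print(f"step at 100 uV: {lo*1e9:.1f} nV; step at 100 mV: {hi*1e6:.1f} uV")
```

The step size scales exactly with the reading (hi/lo equals the 1000:1 range ratio), which is the 'percentage of reading' distribution described in the text.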
Figure: (a) analog signal; (b) sampling pulses; (c) sampled signal; (d) sampled-and-held signal
stages, each of which uses a low-resolution analog-to-digital converter to estimate the input, and an
accurate DAC to convert that estimate back to analog form. Subranging also calculates the residue, the
difference between the estimated input and the actual input. A gain block is used to amplify and restore
the residue to an appropriate level for further estimation by the next stage.
Sigma-delta architecture takes a fundamentally different approach than other ADC architectures.
Sigma-delta converters consist of an integrator, a comparator, and a single-bit DAC. The DAC output
is subtracted from the input signal, the resulting signal is integrated, and the comparator converts the
integrator output voltage to a single-bit digital output (1 or 0). The resulting bit becomes the DAC’s
input, and the DAC’s output is subtracted from the ADC’s input signal. With sigma-delta architecture,
the digital data from the ADC is a stream of ones and zeros, and the value of the signal is proportional
to the density of digital ones from the comparator. This bit stream data is then digitally filtered and
decimated to result in a binary-format output.
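The loop just described can be simulated in a few lines. This is a minimal first-order modulator sketch, not a description of any particular converter, and the comparator/update ordering is one of several equivalent formulations:

```python
# Minimal first-order sigma-delta modulator sketch, illustrating the text:
# the density of ones in the output bit stream tracks the normalized input.

def sigma_delta(x: float, n: int = 1000) -> float:
    """Modulate a constant input x in [0, 1]; return the density of ones."""
    integrator = 0.0
    ones = 0
    for _ in range(n):
        dac = 1.0 if integrator > 0 else 0.0   # single-bit DAC fed back
        integrator += x - dac                  # subtract DAC output, integrate
        if dac:
            ones += 1
    return ones / n

print(sigma_delta(0.25))  # ones density close to the input level, ~0.25
```

A real converter follows the bit stream with digital filtering and decimation, as the text notes; here the plain ones density already shows the proportionality.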
1. Differential Amplifier The input and feedback connections are both made to the inverting (−)
input. The non-inverting input (+) is grounded through a resistor. This is what forces the inverting input
to be a virtual ground: the amplifier output voltage depends on the voltage difference between the two
inputs, rather than the absolute voltage at either input. As the input transistors inside the op-amp do actu-
ally require a very slight input current, there is a very slight corresponding voltage drop across the resistors
connected to those inputs.
The circuit shown in Fig. 16.12 is used for finding the difference of two voltages, each multiplied by
some constant (determined by the resistors).
Fig. 16.12 Differential amplifier: inputs V1 (through R1) and V2 (through R2), feedback resistor Rf and grounding resistor Rg; the output Vout is the amplified difference
Whenever R1 = R2 and Rf = Rg,
Vout = (Rf/R1)(V2 − V1)
When R1 = Rf and R2 = Rg (including the previous conditions, so that R1 = R2 = Rf = Rg):
Vout = V2 − V1
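A quick numerical check of the differential-amplifier relation, with assumed resistor values satisfying the stated matching conditions:

```python
# Numerical check of the differential-amplifier relation, with assumed
# resistor values satisfying R1 = R2 and Rf = Rg.

def diff_amp_out(v1: float, v2: float, r1: float, rf: float) -> float:
    """Vout = (Rf/R1) * (V2 - V1), valid when R1 = R2 and Rf = Rg."""
    return (rf / r1) * (v2 - v1)

# Gain of 10 (Rf = 100 k, R1 = 10 k) applied to a 0.5 V difference:
print(diff_amp_out(1.0, 1.5, r1=10e3, rf=100e3))  # prints 5.0
```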
2. Inverting Amplifier Figure 16.13 shows an inverting amplifier, represented by the triangle. It
inverts the polarity of a voltage and amplifies it (multiplies by a negative constant), producing an output voltage Vout.
V out = −V in ( Rf / Rin )
Fig. 16.13 Inverting amplifier: input Vin through Rin, feedback resistor Rf, and bias resistor Rg on the non-inverting input
3. Non-Inverting Amplifier A non-inverting amplifier (shown in Fig. 16.14) amplifies a voltage (multiplies by a constant greater than 1).
Vout = Vin (1 + R2/R1)
Input impedance Z = ∞ (realistically, the input impedance of the op-amp itself, 1 MΩ to 10 TΩ)
Fig. 16.14 Non-inverting amplifier
A third resistor, of value Rf ∥ Rin, added between the Vin source and the non-inverting input, while not necessary, minimizes errors due to input bias currents.
4. Summing Amplifier One of the most common applications for an op-amp is to algebraically
add two (or more) signals or voltages to form the sum of those signals. Such a circuit is known as a sum-
ming amplifier, or just as a summer. The source of these signals might be anything at all. Common input
Vout = −(Rf/R1)(V1 + V2 + … + Vn)
when R1 = R2 = … = Rn = Rf,
Vout = −(V1 + V2 + … + Vn)
Output is inverted.
Input impedance Zn = Rn for each input (V− is a virtual ground).
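The summing relation generalizes to unequal input resistors, each input being weighted by Rf over its own resistor. A sketch with assumed component values:

```python
# Sketch of the summing-amplifier relation: each input is weighted by
# Rf over its own input resistor; with equal resistors the output is
# simply the inverted sum of the inputs.

def summing_amp_out(inputs, r_in, rf):
    """Vout = -Rf * sum(Vk / Rk) over the input voltages and resistors."""
    return -rf * sum(v / r for v, r in zip(inputs, r_in))

# Equal 10 k resistors and Rf = 10 k: plain inverted sum of the inputs
out = summing_amp_out([0.5, 1.0, 1.5], [10e3] * 3, 10e3)
print(f"{out:.3f}")  # prints -3.000
```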
An integrating amplifier integrates the (inverted) input signal over time:
Vout = Vinitial − (1/RC) ∫ Vin dt
(where Vin and Vout are functions of time, and Vinitial is the output voltage of the integrator at time t = 0)
Fig. 16.16 Integrating amplifier
A differentiating amplifier produces an output proportional to the rate of change of the input:
Vout = −RC (dVin/dt)
A voltage follower (unity-gain buffer) reproduces its input:
Vout = Vin
Programmers in FORTRAN or BASIC generally used software packages, such as the Calcomp
library, or device-independent graphics packages such as Hewlett-Packard’s AGL libraries or BASIC
extensions or high-end packages such as DISSPLA. These would establish scaling factors from world
coordinates to device coordinates, and translate to the low-level device commands.
Early plotters (e.g., the Calcomp 565 of 1959) worked by placing the paper over a roller which
moved the paper back and forth for an X motion, while the pen moved back and forth on a single arm
for a Y motion. Another approach (e.g., Computervision’s Interact I) involved attaching ball-point pens
to drafting pantographs and driving the machines with motors controlled by the computer. This had
the disadvantage of being somewhat slow to move, as well as requiring floor space equal to the size of
the paper, but could double as a digitizer. A later change was the addition of an electrically controlled
clamp to hold the pens, which allowed them to be changed and thus create multi-coloured output.
Hewlett Packard and Tektronix created desk-sized flatbed plotters in the late 1970s. In the 1980s,
the small and lightweight HP 7470 used an innovative ‘grit wheel’ mechanism which moved only the
paper. Modern desktop scanners use a somewhat similar arrangement. These smaller ‘home-use’ plot-
ters became popular for desktop business graphics, but their low speed meant they were not useful
for general printing purposes, and another conventional printer would be required for those jobs. One
category introduced by Hewlett Packard’s MultiPlot for the HP 2647 was the ‘word chart’ which used
the plotter to draw large letters on a transparency. This was the forerunner of the modern PowerPoint
chart. With the widespread availability of high-resolution inkjet and laser printers, inexpensive memory
and computers fast enough to rasterize colour images, pen plotters have all but disappeared.
Other Uses Plotters are used primarily in technical drawing and CAD applications, where they
have the advantage of working on very large paper sizes while maintaining high resolution. Another
use has been found by replacing the pen with a cutter, and in this form plotters can be found in many
garment and sign shops. A niche application of plotters is in creating tactile images for visually handi-
capped people on special thermal cell paper.
Constructional Details The earliest and simplest type of oscilloscope consisted of a cath-
ode ray tube, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are
now called ‘analog’ scopes to distinguish them from the ‘digital’ scopes that became common in the
1990s and 2000s. The cathode ray tube is an evacuated glass envelope, with its flat face covered in a
phosphorescent material (the phosphor). The screen is typically less than 20 cm in diameter, much
smaller than one in a usual television set.
In the neck of the tube is an electron gun, which is a heated metal plate with a wire mesh (the grid) in
front of it. A small grid potential is used to block electrons from being accelerated when the electron beam
needs to be turned off, as during sweep retrace or when no trigger events occur. A potential difference
of at least several hundred volts is applied to make the heated plate (the cathode) negatively charged rela-
tive to the deflection plates. For higher bandwidth oscilloscopes, where the trace may move more rapidly
across the phosphor target, a positive post-deflection acceleration voltage of over 10,000 volts is often
used, increasing the energy (speed) of the electrons that strike the phosphor. The kinetic energy of the
electrons is converted by the phosphor into visible light at the point of impact. When switched on, a CRT
normally displays a single bright dot in the centre of the screen, but the dot can be moved about electro-
statically or magnetically. The CRT in an oscilloscope uses electrostatic deflection.
Between the electron gun and the screen, two opposed pairs of metal plates called the deflection plates
are arranged. The vertical amplifier generates a potential difference across one pair of plates, giving
rise to a vertical electric field through which the electron beam passes. When the plate potentials are
the same, the beam is not deflected. When the top plate is positive with respect to the bottom plate,
the beam is deflected upwards; when the field is reversed, the beam is deflected downwards. The
horizontal amplifier does a similar job with the other pair of deflection plates, causing the beam to
move left or right. This deflection system is called electrostatic deflection, and is different from the
electromagnetic deflection system used in television tubes. In comparison to magnetic deflection,
electrostatic deflection can more readily follow random changes in potential, but is limited to small
deflection angles.
An electronic circuit called the timebase is incorporated, which generates a ramp voltage. This is a voltage that changes continuously and linearly with time. When it reaches a predefined value, the ramp is
reset, thus reestablishing its initial value. When a trigger event is recognized, the reset is
released, allowing the ramp to increase again. The timebase voltage usually drives the horizontal ampli-
fier. Its effect is to sweep the electron beam at constant speed from left to right across the screen, then
quickly return the beam to the left in time to begin the next sweep. The timebase can be adjusted to
match the sweep time to the period of the signal.
Meanwhile, the vertical amplifier is driven by an external voltage (the vertical input) that is taken
from the circuit or experiment that is being measured. The amplifier has a very high input impedance,
typically one megohm, so that it draws only a tiny current from the signal source. The amplifier drives
the vertical deflection plates with a voltage that is proportional to the vertical input. Because the
electrons have already been accelerated by hundreds of volts, this amplifier also has to deliver almost
a hundred volts, and this with a very high bandwidth. The gain of the vertical amplifier can be adjusted
to suit the amplitude of the input voltage. A positive input voltage bends the electron beam upwards,
and a negative voltage bends it downwards, so that the vertical deflection of the dot shows the value of
the input. The response of this system is much faster than that of mechanical measuring devices such
as the multimeter, where the inertia of the pointer slows down its response to the input.
When all these components work together, the result is a bright trace on the screen that represents a
graph of voltage against time. Voltage is on the vertical axis, and time on the horizontal.
Observing high-speed signals, especially non-repetitive signals, with a conventional CRO is difficult,
often requiring the room to be darkened or a special viewing hood to be placed over the face of the
display tube. To aid in viewing such signals, special oscilloscopes have borrowed from night-vision
technology, employing a microchannel plate in the tube face to amplify faint light signals.
The power supply provides low voltages to power the cathode heater in the tube, and the ver-
tical and horizontal amplifiers. High voltages are needed to drive the electrostatic deflection plates.
These voltages must be very stable. Any variations will cause errors in the position and brightness of
the trace. Later, analog oscilloscopes added digital processing to the standard design. The same basic
architecture—cathode ray tube, vertical and horizontal amplifiers—was retained, but the electron beam
was controlled by digital circuitry that could display graphics and text mixed with the analog wave-
forms.
The graph, usually called the trace, is drawn by a beam of electrons striking the phosphor coating
of the screen making it emit light, usually green or blue. This is similar to the way a television picture
is produced. In its simplest mode, the oscilloscope repeatedly draws a horizontal line called the trace
across the middle of the screen from left to right. One of the controls, the timebase control, sets the
speed at which the line is drawn, and is calibrated in seconds per division. If the input voltage departs
from zero, the trace is deflected either upwards or downwards. Another control, the vertical control,
sets the scale of the vertical deflection, and is calibrated in volts per division. The resulting trace is a
graph of voltage against time (the present plotted at a varying position, the less recent past to the left,
the most recent past to the right). A dual trace oscilloscope can display two traces on the screen, allow-
ing you to easily compare the input and output of an amplifier, for example. It is well worth paying
the modest extra cost to have this facility. If the input signal is periodic then a nearly stable trace can
be obtained just by setting the timebase to match the frequency of the input signal. To provide a more
stable trace, modern oscilloscopes have a function called the trigger. The scope then waits for a speci-
fied event before drawing the next trace. The trigger event is usually the input waveform reaching some
user-specified threshold voltage in the specified direction (going positive or going negative).
The effect is to resynchronise the timebase to the input signal, preventing horizontal drift of the
trace. In this way, triggering allows the display of periodic signals such as sine waves and square waves.
Trigger circuits also allow the display of non-periodic signals such as single pulses or pulses that don’t
recur at a fixed rate. The chief benefit of a quality oscilloscope is the quality of the trigger circuit. If
the trigger is unstable, the display will always be fuzzy. The quality improves roughly as the frequency
response and voltage stability of the trigger increase.
Measurement of Voltage and Time Period The trace on an oscilloscope screen is a graph
of voltage against time. The shape of this graph is determined by the nature of the input signal. In
addition to the properties labeled on the graph (amplitude, peak-peak voltage, time period), there is
frequency, which is the number of cycles per second. Figure 16.21 shows a sine wave, but these
properties apply to any signal with a constant shape.
Frequency = 1/Time period and Time period = 1/Frequency
the ac/GND/dc switch to GND (0 V) and use Y-shift (up/down) to adjust the position of the trace if
necessary. Switch back to dc afterwards so you can see the signal again.
Voltage = distance in cm × volts/cm
Example: peak-peak voltage = 4.2 cm × 2 V/cm = 8.4 V
Amplitude (peak voltage) = ½ × peak-peak voltage = 4.2 V
Time Period Time is shown on the horizontal x-axis and the scale is determined by the TIME-
BASE (TIME/CM) control. The time period (often just called period) is the time for one cycle of the
signal. The frequency is the number of cycles per second, frequency = 1/time period. Ensure that the
variable timebase control is set to 1 or CAL (calibrated) before attempting to take a time reading.
Time = distance in cm × time/cm
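The screen-reading formulas above can be worked through numerically. The peak-peak example reuses the 4.2 cm × 2 V/cm figure from the text; the timebase numbers are assumed for illustration:

```python
# Worked versions of the screen-reading formulas: a distance on the screen
# (in cm) multiplied by the volts/cm or time/cm control setting.

def voltage(distance_cm: float, volts_per_cm: float) -> float:
    return distance_cm * volts_per_cm

def time_period(distance_cm: float, time_per_cm: float) -> float:
    return distance_cm * time_per_cm

peak_to_peak = voltage(4.2, 2.0)     # 4.2 cm at 2 V/cm -> 8.4 V (text example)
amplitude = peak_to_peak / 2         # 4.2 V
period = time_period(4.0, 0.005)     # assumed: 4 cm at 5 ms/cm -> 0.02 s
frequency = 1 / period               # 1 / time period
print(f"{peak_to_peak} V pk-pk, {amplitude} V amplitude, {frequency:.0f} Hz")
```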
the colour of the LED. Viewing distance is determined primarily by the minimum size requirements
for objects that the user must see. The viewing angles on the x and y-axis are also important to con-
sider. The viewing angle of the display is the angle, in degrees, between a line normal to the display
surface and the user’s visual axis. Minimum and typical luminous intensity describes the luminous flux
per unit solid angle, and its unit of measurement is the candela (cd). Case dimensions include width,
depth and height. The case or package of the display will have separate dimensions than the actual
viewing area of the display.
Review Questions
1. Explain the term transducer as a device with the help of any one example.
2. Explain the factors influencing the choice of transducer for measurement of a physical quantity.
3. Explain the terms:
a. Cross sensitivity
b. Transient and frequency response
c. Primary and secondary transducer
d. Active and passive transducer
e. Sampling rate per channel
f. Signal-conditioning
g. Multiplexing
h. Amplification and attenuation
i. Switching
j. Terminating devices
4. Discuss the classification of a transducer and explain any one electro-mechanical transducer in
detail.
5. Discuss the types of displacement measurement.
6. Explain with the help of a neat sketch how a Linear Variable Differential Transformer (LVDT) is
used for displacement measurement. State its advantages and limitations.
7. What do you mean by intermediate-modifying devices?
8. Discuss the concept of generalized data acquisition system.
9. What are the important factors that decide the configuration and the sub-system of the data
acquisition system?
‘Force and torque instruments measure the real strength of the entity under test…’
INTRODUCTION TO FORCE AND TORQUE MEASUREMENT
Force and torque instruments are used to measure force, weight or torque. Some can measure force and torque by changing the sensor/transducer. Force or weight measurements include tension or compression loading; and the units are pounds, Newtons, etc. Torque-measuring instruments display torque units (in-oz, ft-lbs, etc.).
Important parameters to consider when specifying force and torque instruments are the force measurement range and accuracy, and the torque measurement range and accuracy. Sensor or transducer interfaces for force and torque instruments include strain gauge and piezoelectric devices. For strain gauge devices, strain gauges (strain-sensitive variable resistors) are bonded to parts of the structure that deform when making the measurement. These strain gauges are typically used as elements in a Wheatstone bridge circuit, which is used to make the measurement. For piezoelectric devices, a piezoelectric material is compressed and generates a charge that is measured by a charge amplifier. The analog bandwidth is another important specification to consider. The bandwidth is the frequency range over which the device meets its accuracy specifications. Accuracy is degraded at lower and lower frequencies unless the device is capable of dc response, and at higher frequencies near resonance and beyond, where its output response rolls off. Frequencies in the database are usually the 3-dB roll-off frequencies.
Common configurations for force and torque instruments include handheld, portable, modular and battery-powered instruments. Measurement features of force and torque instruments include tare, limits or set points, peak hold, controller functionality, temperature compensation, biaxial measurement, and triaxial measurement. Units with tare can zero out a reading to measure differences for weighing. Limits and set points include hi–lo. Peak hold shows or holds a peak measurement value. Controller functions include set limits, regulator, P/PI/PID, etc. Instruments with temperature compensation have software or adjustments for compensating for variations in temperature that may cause measurement errors. Force and torque instruments with biaxial measurement have an accelerometer capable of measurement along two, usually orthogonal, axes. Force and torque instruments with
The International System of Units (SI) is widely used for trade, science, and engineering. The SI units of
force and torque are the newton (N) and the newton metre (N·m) respectively. The base units relevant
to force and torque are
Force is defined as the rate of change of momentum. For an unchanging mass, this is equivalent to
mass × acceleration.
Thus, 1 N = 1 kg·m·s⁻²
The torque generated about an axis is defined as the product of the component of the force perpen-
dicular to the axis and the perpendicular distance between the line of action of the force and the axis.
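The two definitions above reduce to simple products. A quick numerical illustration, with assumed example values for the mass, acceleration and lever arm:

```python
# Quick numerical illustration of the definitions above: force as
# mass x acceleration, and torque as force x perpendicular distance.

def force_newtons(mass_kg: float, accel_m_s2: float) -> float:
    """F = m * a, giving newtons from kg and m/s^2."""
    return mass_kg * accel_m_s2

def torque_newton_metres(force_n: float, perp_distance_m: float) -> float:
    """T = F * d, the force times the perpendicular distance to the axis."""
    return force_n * perp_distance_m

f = force_newtons(2.0, 9.81)        # a 2 kg mass under standard gravity -> 19.62 N
t = torque_newton_metres(f, 0.5)    # applied 0.5 m from the axis -> 9.81 N-m
print(f, t)
```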
SI Prefixes The use of abbreviated forms for large and small numbers is encouraged by the SI
system. SI prefixes represent multiples of 10³ or 10⁻³ as in Table 17.3. There is an exception to the
system caused by adoption of the kilogram as the base unit for mass rather than the gram. The effect
of this is that prefixes above 'kilo' are not used for mass. The tonne is 10³ kg.
Force and Torque Measurement 435
Force-measurement systems can involve a number of different physical principles but their perfor-
mance can be described by a number of common characteristics and terms, and the behaviour of
a system or transducer can be expressed graphically as a response curve—by plotting the indicated
output value (e.g., voltage) from the system against the force applied to it. The terms used are some-
times applied independently to the force transducer, the force-measurement system as a whole, or some
other part of the system and it is important to establish, for any given application, the way in which the
terms are being used.
An idealized response curve is shown in Fig. 17.1 where the force applied increases from zero to
the rated capacity of the force-measurement system and then back again to zero. The deviation of the
response curve from a straight line is magnified in the figure for the purpose of clarity.
Characterizing the performance of a force-measuring system is commonly based on calculating such
a best-fit least-squares line and stating the measurement errors with respect to it.
Vertical deviation from this line is referred to as non-linearity and generally, the largest value is given
in the specifications of a system.
Fig. 17.1 Idealized response curve: output (up to the rated output) against applied force (zero to the rated force), showing the non-linearity and the hysteresis between the increasing and decreasing applied-force curves
The difference of readings between the increasing and decreasing forces at any given force is defined
as hysteresis. The largest value of hysteresis is usually at the mid-range of the system.
Sometimes non-linearity and hysteresis are combined into a single figure—usually by drawing two lines
parallel to the best-fit line such that they enclose the increasing and decreasing force curves as shown. The
maximum difference (in terms of output) is then halved and referred to as the ±combined error.
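These error terms can be computed from calibration data. The sketch below uses made-up readings and an idealized best-fit line for clarity; a real analysis would fit a least-squares line over the data, and the combined error here is simplified to the largest deviation of either curve from the fit rather than the halved enclosing band described above:

```python
# Illustrative error terms from made-up calibration data: output readings for
# increasing and decreasing applied force, compared against a reference line.
# (Numbers are invented; the fit line is idealized as output = force/100.)

forces     = [0.0, 25.0, 50.0, 75.0, 100.0]        # % of rated force
increasing = [0.00, 0.252, 0.505, 0.754, 1.000]    # output, increasing run
decreasing = [0.00, 0.262, 0.515, 0.760, 1.000]    # output, decreasing run

fit = [f / 100.0 for f in forces]                  # idealized best-fit line

# Non-linearity: largest vertical deviation of the increasing curve from the line
non_linearity = max(abs(y - yf) for y, yf in zip(increasing, fit))

# Hysteresis: largest difference between decreasing and increasing readings
hysteresis = max(abs(d - i) for i, d in zip(increasing, decreasing))

# Simplified combined figure: worst deviation of either curve from the line
combined = max(
    max(abs(y - yf) for y, yf in zip(increasing, fit)),
    max(abs(y - yf) for y, yf in zip(decreasing, fit)),
)

print(f"{non_linearity:.4f} {hysteresis:.4f} {combined:.4f}")
```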
Any difference between the indicated value of force and the true value is known as an error of
measurement (although note that strictly a ‘true’ value can never be perfectly known or indeed defined
and the concept of uncertainty takes this into account). Such errors are usually expressed as either a
percentage of the force applied at that particular point on the characteristic or as a percentage of the
maximum force—see the difference between ‘% reading’ and ‘% full scale reading’. The rated capacity
is the maximum force that a force transducer is designed to measure.
Full-scale output, also known as span or rated output, is the output at the rated capacity minus the
output at zero applied force. Sensitivity is defined as the full-scale output divided by the rated capacity
of a given transducer/load cell.
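A minimal numeric illustration of these two definitions, with hypothetical values for a 10 kN cell:

```python
# Full-scale output (span) and sensitivity, as defined above.
# Values are hypothetical: a 10 kN load cell reading 0.010 mV/V at zero
# force and 2.010 mV/V at rated capacity.
rated_capacity_kN = 10.0
output_at_rated = 2.010   # mV/V
output_at_zero = 0.010    # mV/V

span = output_at_rated - output_at_zero    # full-scale output: 2.000 mV/V
sensitivity = span / rated_capacity_kN     # 0.2 (mV/V) per kN
```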
The ability of a force-measurement system to measure force consistently is covered by the
concepts of repeatability and reproducibility. Repeatability is defined broadly as the measure of
agreement between the results of successive measurements of the output of a force-measurement
system for repeated applications of a given force in the same direction and within the
range of calibration forces applied. The tests should be made by the same observer, with the same
measuring equipment, on the same occasion (i.e., successive measurements should be made in a
relatively short space of time), without mechanical or electrical disturbance, and calibration condi-
tions such as temperature, alignment of loading, and the timing of readings held constant as far as
possible.
Although many manufacturers quote a value for repeatability as a basic characteristic of a trans-
ducer, it can be seen from the definition that it should not be considered as such. The value obtained for
a given force transducer, in a given force standard machine, will depend not only on the inherent char-
acteristics of the device such as its creep and sensitivity to bending moments, but also on temperature
gradients, resolution and repeatability of the electrical measuring equipment, and the degree to which
the conditions of the tests are held constant, all of which are characteristics of the test procedure. The
value of repeatability obtained is important as it limits the accuracy to which the other characteristics
of the force transducer can be measured.
In contrast to repeatability, reproducibility is defined as the closeness of the agreement between
the results of measurements of the same force carried out under changed conditions of measure-
ment. A valid statement of reproducibility requires specification of the particular conditions changed
and typically refers to measurements made weeks, months, or years apart. It would also measure, for
example, changes caused by dismantling and re-assembling equipment. The reproducibility of force-
measurement systems is clearly important if they are to be used to compare the magnitudes of forces
at different times, perhaps months or years apart. It will be determined by several factors, including the
stability of the force transducer’s many components, the protection of the strain gauges or other parts
against humidity, and the conditions under which the system is stored, transported, and used.
A force-measurement system will take some time to adjust fully to a change in force applied, and
the creep of a force transducer is usually defined as the change of output with time following a
step increase in force from one value to another. Most manufacturers specify the creep as the
maximum change of output over a specified time after increasing the force from zero to the rated
force. Figure 17.2 shows an example of a creep curve where the transducer exhibits a change in
output from F1 to F2 over a period of time from t1 to t2 after a step change between 0 and t1. In
figures this might be, say, 0.03 % of rated output over 30 minutes.
Fig. 17.2 Creep curve of a typical force transducer
Creep recovery is the change of output following a step decrease in the force applied to the force
transducer, usually from the rated force to zero. For both creep and creep recovery, the results will
depend on how long the force applied has been at zero or the rated value respectively before the change
of force is made.
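The creep figure quoted by a manufacturer can be reproduced from a series of readings taken after loading. A sketch with hypothetical readings:

```python
def creep_percent(outputs, rated_output):
    """Creep as the change of output over the observation window,
    expressed as % of rated output. `outputs` are readings taken at
    intervals after the step to rated force (hypothetical data below)."""
    return (outputs[-1] - outputs[0]) / rated_output * 100.0

# e.g., mV/V readings over 30 minutes after loading to rated force
readings = [2.0000, 2.0003, 2.0005, 2.0006]
print(creep_percent(readings, rated_output=2.0))  # ≈ 0.03 % of rated output
```

Creep recovery would be computed the same way from readings taken after the step back to zero force.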
The frequency response of a force transducer is affected by the nature of the mechanical structure,
both within the transducer and of its mounting. A force transducer on a rigid foundation will have a
natural frequency of oscillation and large dynamic errors occur when the frequency of the vibration
approaches the natural frequency of oscillations of the system.
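Treating the transducer and its attached mass as an undamped single-degree-of-freedom system (an idealization, not a model given in the text), the natural frequency and the growth of dynamic error near resonance can be sketched as:

```python
import math

def natural_frequency_hz(stiffness_N_per_m, mass_kg):
    # f_n = (1/2π)·√(k/m) for a single-degree-of-freedom spring-mass model
    return math.sqrt(stiffness_N_per_m / mass_kg) / (2.0 * math.pi)

def dynamic_amplification(f, fn, zeta=0.01):
    # Output/input amplitude ratio for sinusoidal forcing at frequency f;
    # it grows sharply as f approaches fn, hence the large dynamic errors.
    # zeta is a hypothetical damping ratio.
    r = f / fn
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)
```

For example, a stiff element of 1e8 N/m carrying 1 kg gives a natural frequency near 1.6 kHz, and the amplification at resonance with 1 % damping is about 50.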
The effect of temperature changes is felt on both the zero and rated output of the force-measurement
system. The temperature coefficient of the output at zero force and the temperature coefficient of the
sensitivity are measures of this effect for a given system. A force-measurement system may need to be
kept at constant temperature, or set up well in advance, to settle into the ambient conditions if high-
accuracy measurements are required. In some cases, the temperature gradients within the measurement
installation create a problem even when the average temperature is stable.
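The effect of the two temperature coefficients can be illustrated by a simple correction. The coefficient values below are hypothetical, as is the correction routine itself:

```python
def temperature_corrected_force(indicated_force, delta_T,
                                tc_zero=0.002, tc_span=0.0015, rated=100.0):
    """Correct an indicated force for temperature effects, assuming known
    coefficients: tc_zero in % of rated output per kelvin (zero shift) and
    tc_span in % per kelvin (sensitivity change). A sketch of the principle
    only, with made-up numbers."""
    zero_shift = tc_zero / 100.0 * rated * delta_T    # offset error
    span_factor = 1.0 + tc_span / 100.0 * delta_T     # gain error
    return (indicated_force - zero_shift) / span_factor
```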
Other influence quantities such as humidity, pressure, electrical power changes, or radio-frequency
interference may have analogous effects to those of temperature and may be considered in a similar
manner.
In general, a force transducer has two interfaces through which a force is applied. These may be
the upper and lower loading surfaces of a compression force transducer or the upper and lower screw
threads of a tension device. In some load cells, one or both interfaces are part of the elastic element to
which the strain gauges are bonded; in other transducers the interfaces may be remote from the elastic
element.
At each interface, there will be a force distribution, which will depend on the end loading conditions.
A change in these loading conditions, therefore, may cause a change in the force distribution resulting
in a change of the sensitivity of the transducer, even though the resultant force at the interface remains
unchanged. The International Standard BS EN ISO 376 concerned with the calibration of proving
devices for the verification of materials testing machines recognizes the importance of end loading
conditions by requiring compression proving devices to pass a bearing pad (or similar) test. In this test,
a device is loaded through a flat steel pad and then through each of two steel pads that are conically
convex and concave respectively by 1 part in 1 000 of the radius. Depending on the design of the trans-
ducer, the change of sensitivity caused by a change of end loading conditions can be quite large; some
precision compression load cells with low creep, hysteresis, and temperature coefficients can show dif-
ferences of sensitivity in the bearing pad test of 0.3 %, others less than 0.05 %.
True axial alignment of the applied force along the transducer’s principal axis, and the loading con-
ditions across that surface are major factors in the design of a reliable and accurate installation of a
force-measurement system. Force transducers used to measure a single force component are designed
to be insensitive to the orthogonal force components and corresponding moments, provided these are
within specified limits, but although the error due to small misalignments may be calibrated statisti-
cally, the alignment of force relative to the transducer axis may vary through the load cycle of a typical
application giving potentially large and unquantifiable errors of measurement. Users of force-mea-
surement systems should adhere to manufacturers’ recommendations for alignment when installing
force transducers.
Force and load sensors cover electrical sensing devices that are used to measure tension, compression,
and shear forces. Tension cells are used for measurement of a straight-line force ‘pulling apart’ along a
single axis; typically annotated as positive force. Compression cells are used for measurement
of a straight-line force ‘pushing together’ along a single axis; typically annotated as negative force. Shear
is induced by tension or compression along offset axes. They are manufactured in many different pack-
ages and mounting configurations.
Important parameters for force and load sensors include the force and load-measurement range and
the accuracy. The measurement range is the range of the required linear output. Most force sensors
actually measure the displacement of a structural element to determine force. The force is associated
with a deflection as a result of calibration. There are many form factors or packages to choose from—
S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever, canister, load pin,
rod end, and tank weighing. Shear-cell type can be shear beam, bending beam, or single-point bending
beam. Force and load sensors can have one of many output types. These include analog voltage, analog
current, analog frequency, switch or alarm, serial, and parallel.
Force and load sensors can be many different types of devices including sensor element or chip,
sensor or transducer, instrument or meter, gauge or indicator, and recorder and totalizers. A sensor
element or chip denotes a ‘raw’ device such as a strain gauge, or one with no integral signal condi-
tioning or packaging. A sensor or transducer is a more complex device with packaging and/or signal
conditioning that is powered and provides an output such as a dc voltage, a 4–20mA current loop,
etc. An instrument or meter is a self-contained unit that provides an output such as a display locally
at or near the device. Typically, it also includes signal processing and/or conditioning. A gauge or
indicator is a device that has a (usually analog) display and no electronic output such as a tension
gauge. A recorder or totalizer is an instrument that records, totalizes, or tracks force measurement
over time. It includes simple data-logging capability or advanced features such as mathematical func-
tions, graphing, etc.
The most common force and load sensor technologies are piezoelectric and strain gauge. For
piezoelectric devices, a piezoelectric material is compressed and generates a charge that is conditioned
by a charge amplifier. For strain gauge devices, strain gauges (strain-sensitive variable resistors) are
bonded to parts of the structure that deform when making the measurement. These strain gauges are
typically used as elements in a Wheatstone bridge circuit, which is used to make the measurement.
Strain gauges typically require an excitation voltage, and provide output sensitivity proportional to
that excitation.
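Because the bridge output is proportional to the excitation, readings are usually handled ratiometrically in mV/V. A sketch with a hypothetical 2 mV/V, 1 kN cell:

```python
def bridge_force(v_out_mV, v_exc_V, rated_output_mV_per_V, rated_capacity_N):
    """Force from a Wheatstone-bridge load cell reading, assuming a linear
    cell. Working in mV/V makes the measurement ratiometric, so slow drift
    in the excitation voltage cancels out."""
    mV_per_V = v_out_mV / v_exc_V
    return mV_per_V / rated_output_mV_per_V * rated_capacity_N

# Hypothetical 2 mV/V, 1 kN cell excited at 10 V and reading 12 mV:
print(bridge_force(12.0, 10.0, 2.0, 1000.0))  # 600.0 N
```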
Features common to force and load sensors include biaxial measurement, triaxial measurement,
and temperature compensation. Biaxial load cells can provide load measurements along two, typically
orthogonal, axes. Triaxial load cells can provide load measurements along three, typically orthogonal,
axes. Temperature-compensated load cells provide special circuitry to reduce/eliminate sensing errors
due to temperature variations. Other parameters to consider include operating temperature, maximum
shock, and maximum vibration.
Load cells are force sensors that frequently incorporate mechanical packaging for fit into testing
and monitoring systems. They can be used for tension, compression, and/or shear measurement,
and can be configured to measure force or load along multiple axes. Load cells are widely used in
mechanical testing, ongoing system monitoring, and devices such as industrial weigh modules and
scales.
Important parameters for load cells include the force and load-measurement range and the accu-
racy. The measurement range is the range of required linear output. Load cells can be configured with
multiple axes. Biaxial load cells can provide load measurements along two, typically orthogonal, axes.
Triaxial load cells can provide load measurements along three, typically orthogonal, axes.
Load cells can measure tension, compression, or shear. Tension cells are used for measurement of a
straight-line force ‘pulling apart’ along a single axis; typically annotated as positive force. Compression
cells are used for measurement of a straight-line force ‘pushing together’ along a single axis;
typically annotated as negative force. Shear is induced by tension or compression along offset axes.
Most load cells actually measure the displacement of a structural element to determine force. The force
is associated with a deflection as a result of calibration. There are many form factors or packages to
choose from—S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever,
canister, load pin, rod end, and tank weighing.
Shear-cell types for load sensors can be shear beam, bending beam, or single-point bending beam.
The most common sensor technologies are piezoelectric and strain gauge. For piezoelectric devices, a
piezoelectric material is compressed and generates a charge that is conditioned by a charge amplifier.
For strain gauge devices, strain gauges (strain-sensitive variable resistors) are bonded to parts of the
structure that deform when making the measurement. These strain gauges are typically used as ele-
ments in a Wheatstone bridge circuit, which is used to make the measurement. Strain gauges typically
require an excitation voltage, and provide output sensitivity proportional to that excitation.
Outputs for load cells can be analog voltage, analog current, analog frequency, switch or alarm, serial,
and parallel. Temperature-compensated load cells provide special circuitry to reduce/eliminate sens-
ing errors due to temperature variations. Other parameters to consider include operating temperature,
maximum shock, and maximum vibration.
System Components In contemporary control applications, weighing systems are used in both
static and dynamic applications. Some systems are technologically advanced, interfacing with comput-
ers for database integration and using microprocessor-based techniques to proportion material inputs
and feed rates. To send the weight information to computers, signal conditioners are utilized to permit
direct communication from the load cell via conversion of the load cell’s analog signal to a digital signal.
An entire system can be constructed, one piece at a time, from basic modules.
Parts of a System Load cells, cable, junction box (summing the load-cell signals into one
output), instrumentation (indicators, signal conditioners, etc.), and peripheral equipment
(printers, scoreboards, etc.)
Fig. 17.3 Load cell, showing compression and tension strain gauges under an applied force
Fig. 17.4 Strain-gauge patch mounted on a component
Fundamentals A load cell is classified as a force transducer. This device converts force or weight
into an electrical signal. The strain gauge is the heart of a load cell. A strain gauge is a device that changes
resistance when it is stressed. The gauges are developed from an ultra-thin heat-treated metallic foil and
are chemically bonded to a thin dielectric layer. ‘Gauge patches’ are then mounted to the strain element
with specially formulated adhesives. The precise positioning of the gauge, the mounting procedure, and
the materials used all have a measurable effect on overall performance of the load cell.
Each gauge patch consists of one or more fine wires cemented to the surface of a beam, ring, or
column (the strain element) within a load cell. As the surface to which the gauge is attached becomes
strained, the wires stretch or compress changing their resistance proportional to the applied load. One
or more strain gauges are used in the making of a load cell. Multiple strain gauges are connected to
create the four legs of a Wheatstone-bridge configuration.
When an input voltage is applied to the bridge, the output becomes a voltage proportional to the
force on the cell. This output can be amplified and processed by conventional electrical
instrumentation.
[Figure: Wheatstone-bridge wiring of a load cell — half-modulus and half-calibration resistors,
input shunt, and the +/− input and output terminals]
Types of Force/Load Cells
The load or force cell takes many forms to accommodate the variety of uses throughout research
and industrial applications. The majority of today’s designs use strain gauges as the sensing element.
[Figure: typical load-cell applications — engine dynamometry, batch weighing bins, solenoid
valves, spring testing, and checking connector insertion force]
(i) Foil Gauges offer the largest choice of different types and in consequence tend to be the
most used in load cell designs. Strain-gauge patterns offer measurement of tension, compression
and shear forces.
(ii) Semiconductor Strain Gauges come in a smaller range of patterns but offer the advantages of
being extremely small and having large gauge factors, resulting in much larger outputs for the same given
stress. Due to these properties, they tend to be used for miniature load-cell designs.
(iii) Proving Rings are used for load measurement using a calibrated metal ring, the movement of
which is measured with a precision displacement transducer.
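A proving ring reduces to a linear calibration between measured deflection and applied force. A sketch with a hypothetical calibration constant:

```python
def proving_ring_force(deflection_mm, calibration_N_per_mm):
    """Proving ring: force inferred from the measured change in ring
    diameter via a calibration constant. The constant below is a made-up
    illustrative value, not data from the text."""
    return deflection_mm * calibration_N_per_mm

# A 0.25 mm deflection on a ring calibrated at 40 000 N/mm:
print(proving_ring_force(0.25, 40000.0))  # 10000.0 N
```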
A vast number of load-cell types have been developed over the years, the first designs simply using a strain
gauge to measure the direct stress which is introduced
into a metal element when it is subjected to a tensile
or compressive force. A bending-beam-type design uses
strain gauges to monitor the stress in the sensing ele-
ment when subjected to a bending force. More recently,
the measurement of shear stress has been adopted as a
more efficient method of load determination as it is less
dependent on the way and direction in which the force
is applied to the load cell.
Fig. 17.12 Shear beam load cell
Fig. 17.13 Sealed weighing load sensor
The following information presents some of the design and operating characteristics of PCB force
sensors to help you better understand how they function and, in turn, make better dynamic
measurements.
When a force is applied to this sensor, the quartz crystals generate an electrostatic charge propor-
tional to the input force. This output is collected on the electrodes sandwiched between the crystals and
is then either routed directly to an external charge amplifier or converted to a low-impedance voltage
signal within the sensor. Both these modes of operation will be examined in the following sections.
mode, the instrument provides the constant current–voltage excitation to force sensors and has a
zero-based clamping circuit that electronically resets each pulse to zero. Special circuitry
prevents the output from drifting negatively, providing a continuous positive-polarity signal.
Developing load cells and loading assemblies that address all of these factors is challenging, both
technically and commercially. Nevertheless, we are now seeing the introduction of a new generation
of load cells, capable of achieving new and higher levels of accuracy, stability and reliability in many
process applications.
Torque sensors and torque instruments are used to measure torque in a variety of applications. Torque
sensors are categorized into two main types, reaction and rotary. Reaction torque sensors measure static and
dynamic torque with a stationary or non-rotating transducer. Rotary torque sensors use rotary transducers
to measure torque.
Important specifications to consider when searching for torque sensors include maximum torque,
accuracy, and temperature compensation. Torque is defined as the moment of a force, a measure of its
tendency to produce torsion and rotation about an axis. Temperature compensation prevents measure-
ment error due to temperature increases or decreases.
The technology of torque sensors can be magneto-elastic, piezoelectric, and strain gauge. A magneto-
elastic torque sensor detects changes in permeability by measuring changes in its own magnetic field.
A piezoelectric material is compressed and generates a charge, which is measured by a charge amplifier.
To measure torque, strain-gauge elements usually are mounted in pairs on the shaft, one gauge measur-
ing the increase in length (in the direction in which the surface is under tension), the other measuring
the decrease in length in the other direction.
Torque sensors can be many different types of devices including sensor element or chip, sensor or
transducer, instrument or meter, gauge or indicator, and recorder and totalizers. A sensor element or
chip denotes a ‘raw’ device such as a strain gauge, or one with no integral signal conditioning or packag-
ing. A sensor or transducer is a more complex device with packaging and/or signal conditioning that
is powered and provides an output such as a dc voltage, a 4 – 20mA current loop, etc. An instrument
or meter is a self-contained unit that provides an output such as a display locally at or near the device.
Typically, it also includes signal processing and/or conditioning. A gauge or indicator is a device that
has a (usually analog) display and no electronic output such as a tension gauge. A recorder or totalizer is
an instrument that records, totalizes, or tracks force measurement over time. It includes simple datalog-
ging capability or advanced features such as mathematical functions, graphing, etc.
Common outputs for torque sensors include analog voltage, analog current, analog or modulated
frequency, switch or alarm, serial, and parallel. Other parameters to consider include operating tempera-
ture, maximum shock, and maximum vibration.
Static and Dynamic Torque In a discussion of static vs dynamic torque, it is often easiest to
start with an understanding of the difference between a static and dynamic force. To put it simply, a
dynamic force involves acceleration, whereas a static force does not. The relationship between dynamic
force and acceleration is described by Newton’s second law
F = ma (force equals mass times acceleration). The force required to stop your car with its substantial
mass would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in
order to stop that car would be a static force because there is no acceleration of the brake pads involved.
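The car example works out numerically as follows (hypothetical mass and stopping time):

```python
# Newton's second law, F = ma: the dynamic force needed to decelerate a car.
# Numbers are hypothetical: 1500 kg slowing from 27 m/s to rest in 6 s.
mass = 1500.0               # kg
deceleration = 27.0 / 6.0   # m/s^2
force = mass * deceleration
print(force)  # 6750.0 N — a dynamic force, since acceleration is involved
```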
Torque is just a rotational force, or a force through a distance. From the previous discussion, it is
considered static if it has no angular acceleration. The torque exerted by a clock spring would be a static
torque, since there is no rotation and hence no angular acceleration.
The torque transmitted through a car’s drive axle as it cruises down the highway (at a constant
speed) would be an example of a rotating static torque, because even though there is rotation, at a
constant speed there is no acceleration. The torque produced by the car’s engine will be both static
and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there
will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft.
If the torque is measured in the drive shaft, it will be nearly static because the rotational inertia of
the flywheel and transmission will dampen the dynamic torque produced by the engine.
The torque required to crank up the windows in a car (remember those?) would be an example of a static
torque, even though there is a rotational acceleration involved, because both the acceleration and rotational
inertia of the crank are very small and the resulting dynamic torque (torque = rotational inertia × rotational
acceleration) will be negligible when compared to the frictional forces involved in the window movement.
This last example illustrates the fact that for most measurement applications, both static and dynamic torques
will be involved to some degree. If dynamic torque is a major component of the overall torque or is the
torque of interest, special considerations must be made when determining how best to measure it.
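The window-crank argument can be checked with rough numbers (all values hypothetical):

```python
# torque = rotational inertia × angular acceleration; hypothetical
# window-crank numbers showing why the dynamic component is negligible
inertia = 2e-4          # kg·m^2, a small hand crank
alpha = 5.0             # rad/s^2, a gentle spin-up
dynamic_torque = inertia * alpha         # 0.001 N·m
friction_torque = 1.5                    # N·m, window mechanism (assumed)
print(dynamic_torque / friction_torque)  # ratio ≈ 0.0007 — negligible
```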
Reaction vs Inline Inline torque measurements are made by inserting a torque sensor between
torque-carrying components, much like inserting an extension between a socket and a socket wrench (Fig. 17.18).
The torque required to turn the socket will be carried directly by the socket extension. This
method allows the torque sensor to be placed as close as possible to the torque of interest and
avoids possible errors in the measurement such as parasitic torques (bearings, etc.), extraneous
loads, and components that have large rotational inertias that would dampen any dynamic torques.
Fig. 17.18 Inline torque measurement
From the previous example, the dynamic torque produced by an engine would be measured by placing
an inline torque sensor between the crankshaft and the flywheel, avoiding the rotational inertia
of the flywheel and any losses from the transmission. To measure the nearly static, steady-state
torque that drives the wheels, an inline torque sensor could be placed between the rim and the hub
of the vehicle, or in the drive shaft. Because of the rotational inertia of a typical driveline
and other related components, inline measurements are often the only way to properly measure
dynamic torque.
A reaction torque sensor takes advantage of Newton’s third law: for every action there is an
equal and opposite reaction. To measure the torque produced by a motor, we could measure it
inline as described above, or we could measure how much torque is required to prevent the motor from
turning, commonly called the reaction torque (Fig. 17.19).
Fig. 17.19 Reaction torque measurement (motor on a non-rotating adapter, rotating shaft,
torque sensor)
Measuring the reaction torque avoids the obvious problem of making the electrical connection to
the sensor in a rotating application (discussed below), but does come with its own set of
drawbacks. A reaction torque sensor is often required to carry significant extraneous loads, such
as the weight of a motor, or at least some of the drive line. These loads can lead to crosstalk
errors (a sensor’s response to loads other than those that are intended to be measured), and
sometimes reduced sensitivity, as the sensor has to be oversized to carry the extraneous loads.
Both of these methods, inline and reaction, will yield identical results for static torque
measurements. Making inline measurements in a rotating application will nearly always present the user
with the challenge of connecting the sensor from the rotating world to the stationary world. There are
a number of options available to accomplish this, each with its own advantages and disadvantages.
Slip Ring The most commonly used method to make this connection between rotating sensors and
stationary electronics is the slip ring. It consists of a set of conductive rings that rotate with the sensor,
and a series of brushes that contact the rings and transmit the sensors’ signals (Fig. 17.20).
Slip rings are an economical solution that performs well in a wide variety of applications. They
are a relatively straightforward, time-proven solution with only minor drawbacks in most
applications. The brushes, and to a lesser extent the rings, are wear items with limited lives
that don’t lend themselves to long-term tests, or to applications that are not easy to service on
a regular basis. At low to moderate speeds, the electrical connection between the rings and
brushes is relatively noise-free; however, at higher speeds, noise will severely degrade their
performance. The maximum rotational speed (rpm) for a slip ring is determined by the surface speed
at the brush/ring interface. As a result, the maximum operating speed will be lower for larger,
typically higher torque-capacity sensors by virtue of the fact that the slip rings will have to be
larger in diameter and will, therefore, have a higher surface speed at a given rpm. Typical
maximum speeds will be in the 5,000-rpm range for a medium-capacity torque sensor.
Fig. 17.20 Slip rings and brushes
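The rpm limit follows directly from the surface-speed argument. A sketch with a hypothetical allowable surface speed chosen to land near the quoted 5,000-rpm range:

```python
import math

def max_rpm(ring_diameter_m, max_surface_speed_m_per_s):
    """Maximum slip-ring speed for a given allowable surface speed at the
    brush/ring interface. The speed limit used below is a hypothetical
    illustrative value."""
    circumference = math.pi * ring_diameter_m
    return max_surface_speed_m_per_s / circumference * 60.0

# Doubling the ring diameter halves the permissible rpm:
print(max_rpm(0.05, 13.0))   # ≈ 4966 rpm for a 50 mm ring
print(max_rpm(0.10, 13.0))   # ≈ 2483 rpm for a 100 mm ring
```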
Finally, the brush/ring interface is a source of drag torque that can be a problem, especially for
very low-capacity measurements or applications where the driving torque will have trouble
overcoming the brush drag.
Rotary Transformer In an effort to overcome some of the shortcomings of the slip ring, the
rotary transformer system was devised. It uses a rotary transformer coupling to transmit power to the
rotating sensor. An external instrument provides an ac excitation voltage to the strain-gauge bridge via
the excitation transformer. The sensor’s strain-gauge bridge then drives a second rotary transformer
coil in order to get the torque signal off the rotating sensor (Fig. 17.21).
By eliminating the brushes and rings of the slip ring, the issue of wear is gone, making the
rotary transformer system suitable for long-term testing applications. The parasitic drag torque
caused by the brushes in a slip-ring assembly is also eliminated. However, the need for bearings
and the fragility of the transformer cores still limit the maximum rpm to levels only slightly
better than the slip ring.
The system is also susceptible to noise and errors induced by the alignment of the transformer
primary-to-secondary coils. Because of the special requirements imposed by the rotary
transformers, specialized signal conditioning is also required in order to produce a signal
acceptable for most data-acquisition systems, further adding to the system’s cost, which is
already higher than that of a typical slip-ring assembly.
Fig. 17.21 Rotary transformer (signal and power transmitted through rotating coils)
Infrared (IR) Like the rotary transformer, the infrared (IR) torque sensor utilizes a contactless
method of getting the torque signal from a rotating sensor back to the stationary world. Similarly, using
a rotary transformer coupling, power is transmitted to the rotating sensor. However, instead of being
used to directly excite the strain-gauge bridge, it is used to power a circuit on the rotating sensor. The
circuit provides excitation voltage to the sensor’s strain-gauge bridge,
and digitizes the sensor’s output signal. This digital output signal
is then transmitted, via infrared light, to stationary receiver diodes,
where another circuit checks the digital signal for errors and converts
it back to an analog voltage (Fig. 17.22).
Fig. 17.22 Infrared (IR) torque sensor
Since the sensor’s output signal is digital, it is much less susceptible to noise from such
sources as electric motors and magnetic fields. Unlike the rotary transformer system, an infrared
transducer can be configured either with or without bearings for a true maintenance-free, no-wear,
no-drag sensor. While more expensive than a simple slip ring, it offers several benefits. When
configured without bearings, as a true non-contact measurement system, the wear items are
eliminated, making it ideally suited for long-term testing rigs. Most importantly, with the
elimination of the bearings, operating speeds go up dramatically, to 25,000 rpm
and higher, even for high capacity units. For high-speed applications, this is often the best solution for a
rotating torque transmission method.
FM Transmitter Another approach to making the connection between a rotating sensor and
the stationary world utilizes an FM transmitter. These transmitters are used to remotely connect any
sensor, whether force or torque, to its remote data-acquisition system by converting the sensor’s signal
to a digital form and transmitting it to an FM receiver where it is converted back to an analog voltage.
For torque-measurement applications they are typically used for specialty, one-of-a-kind sensors,
such as when strain gauges are applied directly to a component in a driveline. This could be a
drive shaft or half-shaft from a vehicle, for example. The transmitter, shown in Fig. 17.23,
offers the benefits of being easy to install on the component, as it is typically just clamped to
the gauged shaft, and of being re-usable for multiple custom sensors. It does have the drawback of
needing a source of power on the rotating sensor, typically a 9 V battery, which makes it
impractical for long-term testing.
Understanding the nature of the torque to be measured, as well as what factors can alter that torque
in the effort to measure it, will have a profound impact on the reliability of the data collected. In
applications that require the measurement of dynamic torque, special care must be taken to measure
the torque in the proper location, and not to affect the torque by damping it with the measurement
system. Knowing the options available to make the connection to the rotating torque sensor can greatly
affect the price of the sensor package.
Slip rings are an economical solution, but have their limitations. More technically advanced solu-
tions are available for more demanding applications, but will generally be more expensive. By thinking
through the requirements and conditions of a particular application, the proper torque measurement
system can be chosen the first time.
The rotational speed of the shaft is measured with a tachometer, while the turning force or torque of the shaft is measured with a scale
or by another method. Power may be read from the instrumentation or calculated from shaft speed and
torque.
The two types are the transmission dynamometer and the absorption dynamometer. The transmission dyna-
mometer transmits the force while measuring the elastic twist of the output shaft. An absorption
dynamometer absorbs the power and dissipates it as heat by restraining the output shaft mechanically
with a friction brake, hydraulically with a water brake, or electrically with an electromagnetic force.
Since the restraining element tends to rotate with the output shaft, the force of the shaft can be
determined by measuring the force required to arrest the rotation of the restraining element. Torque
is then calculated by multiplying the force times the length of the lever arm, or the distance through
which the force acts.
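The torque-from-lever-arm relation and the standard power–speed–torque identity described above can be sketched numerically (a minimal illustration; the function names and sample values are assumptions, not from the text):

```python
import math

def torque_from_lever(force_n: float, arm_m: float) -> float:
    """Torque (N.m) = restraining force times lever-arm length."""
    return force_n * arm_m

def shaft_power_w(torque_nm: float, speed_rpm: float) -> float:
    """Power (W) = torque times angular speed in rad/s."""
    omega_rad_s = 2.0 * math.pi * speed_rpm / 60.0
    return torque_nm * omega_rad_s

# A 50 N force on a 0.5 m radial arm arrests the stator: torque = 25 N.m.
# At 1500 rpm the absorbed power is about 3.9 kW.
torque = torque_from_lever(50.0, 0.5)
power = shaft_power_w(torque, 1500.0)
```

The same two-step calculation (force and arm length to torque, torque and speed to power) applies to any absorption dynamometer reading.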
One type of electric dynamometer consists of a direct-current (dc) machine with the stator cradle-
mounted in antifriction bearings. The rotor is connected to the shaft of the machine under test. The
field current is introduced through flexible leads. The stator is constrained from rotating by a radial
arm of known length to which is attached a scale for measuring the force required to prevent rotation.
The torque of the connected machine is found from the product of the lever arm length and the scale
reading, after correcting the scale reading by the amount of the zero torque reading.
Common applications for dynamometers include general purpose, automotive, aircraft or aerospace,
chain or belt drives, gearboxes, fluid-power systems, gas or diesel engines, industrial, marine, transmis-
sions, and turbines. All dynamometers will typically have speed and power feedback for performance
testing and monitoring. Typical features include encoders or other speed / position sensors, torque arms,
and reaction sensors. Common dynamometer interfaces include integral control console, separate con-
sole, computer, or modem or remote control. Features common to dynamometers include PID control,
flow control or throttling, data acquisition or logging, alarms, motor power analysis, and engine exhaust
analysis.
Motor and engine-testing dynamometers apply braking or drag resistance to motor rotation and mea-
sure torque at various speeds and power input levels. These devices measure the output torque of
motors, engines, gearboxes, transmissions, and other rotary machines. They can include features such as
fuel and exhaust monitoring for internal combustion engines, input power analysis for electric motors,
and temperature and vibration sensing. Air dynamometers use an impeller to assess the power produced
by a jet engine or gas turbine. AC dynamometers are essentially ac motors mounted and configured to
provide drag against the motor being tested and output the resultant torque and power. DC dynamom-
eters are essentially dc motors mounted and configured to provide drag against the motor being tested
and output the resultant torque and power. Eddy-current dynamometers provide restraining torque
that increases with shaft speed. In a hydraulic or water-brake dynamometer, braking drag is applied to
the dynamometer rotor vanes via water circulating between the rotor and the stator housing. Hysteresis
dynamometers use non-contact magnetic braking to apply resistance to motor rotation. A magnetic
powder dynamometer has a friction-braking system using a magnetic-powder medium between the
rotor and the stator. With a prony or friction brake dynamometer, the braking mechanism uses friction
pads or brake shoes to engage the rotating disk or drum coupled to the motor. A combination of two
or more technologies is a tandem or combination dynamometer.
Important performance specifications to consider when searching for dynamometers include maxi-
mum power absorption, torque capacity, maximum rotary speed, and maximum linear speed on chassis
style. Maximum power absorption is the maximum rotational power the dynamometer can be subjected
to and still operate within specifications. This is typically limited by absorption or braking technology
and configuration. The torque capacity is the maximum continuous torque transmission for which
the shaft is designed. Maximum rotary speed is the maximum-rated rotational speed under load. For
chassis-style dynamometers, the maximum linear speed of the vehicle being tested is typically given in
vehicular speed units such as miles per hour.
Mounting types for dynamometers include chassis, stand or pedestal, adjustable or trunnion mount,
flange or shaft mount, and portable. In a chassis-type unit, rollers on the dynamometer support the
wheels of one or more axles. One of the rollers transmits the power from the vehicle to the dyna-
mometer for measurement of horsepower and speed. Vehicles typically drive onto the rollers and/or
the rollers lift up from a pit or recess. Environmental regulations often require a dynamometer during
exhaust emission testing. A stand or pedestal mount is a stationary mount or stand for positioning; and
may be permanent or moveable between tests. With an adjustable or trunnion mount, the dynamom-
eter can be adjusted for horizontal, vertical, or intermediate testing. This is typically achieved through
trunnion mounting so the dynamometer can pivot to the desired angle. A flange- or shaft-mount dyna-
mometer has a flange that couples with a flange on the motor or engine for direct, inline mounting. Portable
dynamometer units can be relocated and include wheeled units.
Scales and weigh modules measure static or dynamic loads for a wide range of industrial applications.
They are used to weigh small packages, the contents of hoppers, and extremely heavy loads that are
hauled by trucks or trains. Performance specifications include measurement type, rated load, and accu-
racy. There are three basic measurement types for scales and weigh modules: compression, shear, and
tension. Compression squeezes contents along the same axis. Shear is compression along the offset axes.
Tension weigh modules are used to convert a suspended tank or hopper into a scale. To provide reliable
measurements, mounting hardware is used to ensure that only the vertical load is measured. Rated load
is the maximum load that scales can handle without sustaining permanent damage. Accuracy is the limit
tolerance or average deviation between the actual output and the theoretical output.
Scales and weigh modules provide analog outputs and differ in terms of display type and user inter-
face. Many devices can output a voltage signal or current signal in proportion to the strain on the
sensor. Common voltage ranges include 0–5 VDC and 1–5 VDC. The most common analog current
loop is 4 –20 mA. Devices with a switch or relay that operates at set point are also available. Scales and
weigh modules display values with analog meters, digital readouts, or video display terminals. Analog
meters include a needle or light emitting diode (LED). Digital readouts are numerical or application-
specific. Video display terminals (VDT) include cathode ray tubes (CRT) and flat panel displays (FPD).
Some scales and weigh modules include an analog front panel with potentiometers, dials, and switches.
Force and Torque Measurement 455
Others have a digital front panel. Larger, more complex systems can often be controlled remotely with
a computer interface and include application software.
Scales and weigh modules differ in terms of applications and features. Benchtop devices are relatively
small and measure a limited range of loads. Conveyor scales weigh items as they pass along an assembly
line. Truck, rail, and axle scales are placed under a vehicle’s tire. Floor scales align the measuring platform
with the main floor and are suitable for shipping heavy freight and animals. Dynamometers measure
the amount of power applied. Counting systems, crane scales, hopper or tank scales, weigh checks, and
general-purpose industrial scales are also available. In terms of features, some scales and weigh modules
have a built-in audible or visual alarm. Others are waterproof, washdown-capable, or ruggedized for
harsh environments.
Piezoelectric devices generate electrical signals in response to vibrations and produce mechanical
energy in response to electrical signals. There are several basic types of piezoelectric devices. Piezo-
electric actuators produce a small displacement with a high force capability when voltage is applied.
They are used mainly in ultra-precise positioning and in the generation and handling of high forces or
pressures. Piezoelectric motors use a piezoelectric ceramic element to produce ultrasonic vibrations in a
stator structure. The elliptical movements of the stator are converted into the movement of a slider that
is pressed into frictional contact with the stator. Depending on the stator’s design, the resulting move-
ment can be either rotational or linear. Piezoelectric transducers convert electrical pulses to mechanical
vibrations and then convert the returned mechanical energy into electrical energy. Piezoelectric sensors
measure the electrical potential caused by applying mechanical force to a piezoelectric material. They
are used in a variety of pressure-sensing applications. Piezoelectric drivers and piezoelectric amplifiers
are power sources used to provide the high-voltage levels needed to drive other piezoelectric devices.
Selecting piezoelectric devices requires an analysis of physical and performance specifications. Typi-
cally, manufacturers specify length, diameter or height, thickness and mass as physical specifications.
Performance specifications differ by device type. For example, specifications for piezoelectric actuators
include maximum displacement, blocked force, maximum operating voltage, stiffness, resonance fre-
quency, and capacitance. For piezoelectric motors, important considerations include motor type, oper-
ating frequency, displacement, no-load speed, and capacitance. For piezoelectric sensors, performance
specifications include pressure range, accuracy, and operating temperature.
Piezoelectric devices use several types of electrical connectors. Bayonet Neil–Concelman (BNC)
connectors were designed for military applications, but are used widely in video and RF applications
to 2 GHz. They have a slotted outer conductor and a plastic dielectric that causes increasing losses at
higher frequencies. Both 50 Ω and 75 Ω BNC connectors are commonly available. American wire gauge
(AWG) connectors include connection points that accept two wires. A US standard for non-ferrous
wire conductor sizes, AWG uses the term ‘gauge’ to refer to a wire’s diameter. The higher the gauge
number, the smaller the diameter and the thinner the wire. For example, AWG 26 connectors accom-
modate wires that are 15.9 mils in diameter, while AWG 30 connectors accept wires that are 10.0 mils in
diameter. Some piezoelectric devices use LEMO® connectors, push-pull devices that lock in place for
demanding applications. LEMO is a trademark of LEMO SA. Typically, these connectors are marked
with the LEMO name and the first five characters of the part number, which represent the model, size,
and series.
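The gauge-to-diameter relationship quoted above follows the published AWG definition; a short sketch (the formula is the standard AWG one, not something stated in this text):

```python
def awg_diameter_mils(gauge: int) -> float:
    """Nominal AWG conductor diameter in mils: 5 * 92**((36 - n)/39)."""
    return 5.0 * 92.0 ** ((36 - gauge) / 39.0)

# Higher gauge number -> thinner wire, matching the figures in the text:
# AWG 26 is about 15.9 mils, AWG 30 about 10.0 mils.
```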
Six-axis force and torque sensors measure the full six components of force and torque: vertical, lateral,
and longitudinal forces as well as camber, steer, and torque moments. Six-axis force and torque
sensors provide electrical outputs as analog current loops, analog voltage levels, frequencies, pulses,
switches and relays. Typically, analog current loops are 0–20 mA or 4 –20 mA. Most analog voltage out-
puts are 0–10 V or ± 5 V. Frequency and pulse signals include amplitude modulation (AM), frequency
modulation (FM), and pulse width modulation (PWM). With switch or relay outputs, contacts are open
or closed depending on the state of the variable being monitored. Typically, six-axis force and torque
sensors are used in strain gauges, piezoelectric devices and optical instruments. They are also used to
monitor robotic hand movements and the performance of car and truck tires.
General specifications for six-axis force and torque sensors include sensor height, sensor weight,
and sensing technology. Typically, sensor height is measured in inches and sensor weight is measured
in pounds. There are three basic types of sensing technologies: strain gauge, piezoelectric, and optic.
With strain-gauge devices, strain-sensitive variable resistors are bonded to part of the structure, which
deforms when measurements are taken. Typically, strain gauges are used as measurement elements in
Wheatstone bridge circuits. With piezoelectric devices, compressing a piezoelectric material generates
a charge that is measured by a charge amplifier. Optical devices use photodiodes or other fiber optic
technologies to detect optical power and convert it to electrical power.
Selecting six-axis force and torque sensors requires an analysis of force and torque requirements.
There are three measurement ranges for force. X-axis force is a longitudinal measurement range, Y-
axis force is a vertical measurement range, and Z-axis force is a lateral measurement range. There are
also three measurement ranges for torque. X-axis torque is measured around the longitudinal axis,
Y-axis torque is measured around the vertical axis, and Z-axis torque is measured around the lateral axis.
Additional considerations include force-measurement accuracy, torque-measurement accuracy, oper-
ating temperature, shock rating, and vibration rating. Typically, force and torque accuracy measure-
ments are expressed as a percentage. Shock and vibration ratings are usually maximum amounts.
Using 4-arm, 350-ohm bonded foil or 500-ohm bonded semiconductor bridges, these tough stain-
less-steel load cells yield high accuracy and linearity in any number of industrial and research appli-
cations, with exceptional structural resistance to off-axis loading, side-loading, and other extraneous
forces (see load-cell side and bending forces), and with safe overload protection for up to 50% over
capacity.
The resistance strain gauge is an electrical sensing device that varies its resistance as a linear function
of the strain experienced by the structural surface to which it is bonded. ‘Strain’ is the deformation of
a solid material as the result of applied forces (internal or external), and is normally expressed in units
of microinches per inch (or ‘microstrain’ ).
A typical strain gauge consists of a conductive grid pattern of etched metallic foil, mounted on a
thin base of epoxy or fiberglass. It can then be bonded to a surface in such a way that any subsequent
deformation of the surface produces a like deformation of the gauges.
When the gauge is deformed, its electrical resistance changes. This fact is explained partly by simple
geometry. That is, when a conductor is stretched lengthwise, its cross-sectional area decreases, with a
consequent increase in resistance. It is also partly explained by changes in the actual resistivity of the
gauge material when subjected to strain.
For a given amount of unit strain (ΔL/L), the gauge will undergo a corresponding change in resis-
tance (ΔR/R). The ratio of the unit change in resistance to the unit change in length is known as the
gauge factor (Fg) of the gauge:
Fg = (ΔR/R) / (ΔL/L)
Conventional foil gauges have standardized nominal resistance values of 120 and 350 ohms, and typi-
cally exhibit gauge factors between 1.5 and 3.5. In typical transducer applications, they are subjected to
full-scale design strain levels ranging from 500 to 2000 microstrain.
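The gauge-factor definition translates directly into a resistance-change calculation (a minimal sketch; the example values are typical ones from the ranges quoted above):

```python
def resistance_change_ohm(r_nominal_ohm: float, gauge_factor: float,
                          microstrain: float) -> float:
    """From Fg = (dR/R)/(dL/L): dR = R * Fg * strain."""
    strain = microstrain * 1e-6  # microstrain -> dimensionless strain
    return r_nominal_ohm * gauge_factor * strain

# A 350-ohm foil gauge with Fg = 2.0 at 1000 microstrain (within the
# 500-2000 microstrain design range) changes by 0.7 ohm.
delta_r = resistance_change_ohm(350.0, 2.0, 1000.0)
```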
If the gauges within a load cell are connected in a balanced Wheatstone bridge circuit, and are excited
by a source of ac or dc voltage, the transducer will produce an electrical output which is a direct linear
function of the excitation voltage and the magnitude of the applied mechanical input:

Eout (mV) = Ein (V) • K • F/100

where K is the transducer sensitivity constant and F is the applied mechanical input expressed as a
percentage of rated capacity.

Fig. 17.25 Wheatstone bridge circuit (bridge arms R1–R4, span-adjust resistor Rs, modulus-correction
resistor Rm, excitation Ein, output Eout)

Transducer sensitivity is expressed in terms of millivolts per volt (mV/V). The exact value of K for
each instrument is determined by measurement at the time of manufacture and is furnished as part of
that instrument’s calibration data. For conventional transducers, this value usually falls between 0.5
and 3.0.
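The bridge-output relation Eout(mV) = Ein(V) • K • F/100 is easy to evaluate (a minimal sketch; reading F as percent of rated capacity is an assumption drawn from the F/100 term):

```python
def bridge_output_mv(excitation_v: float, sensitivity_mv_per_v: float,
                     pct_of_capacity: float) -> float:
    """Eout (mV) = Ein (V) * K * F/100."""
    return excitation_v * sensitivity_mv_per_v * pct_of_capacity / 100.0

# A K = 2 mV/V transducer excited at 10 V and loaded to 50% of rated
# capacity outputs 10 mV.
e_out = bridge_output_mv(10.0, 2.0, 50.0)
```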
Excitation voltage can be either ac or dc, and is usually limited by heating considerations to a maxi-
mum of 10 volts for 120-ohm bridges and 20 volts for 350-ohm bridges (although good practice dic-
tates somewhat lower values).
Load-cell loading axis: T (torque), S (side force), M (bending or “shear” moment)
Review Questions
‘Vibrations are measured to minimize, eliminate or control the vibration and thus the
resultant noise …’
VIBRATION AND DEGREES OF FREEDOM

There are two general classes of vibrations—free and forced. Free vibration takes place when a system
oscillates under the action of forces inherent in the system itself, and when external impressed forces
are absent. The system under free vibration will vibrate at one or more of its natural frequencies, which
are properties of the dynamic system established by its mass and stiffness distribution.

Vibration that takes place under the excitation of external forces is called forced vibration. When the
excitation is oscillatory, the system is forced to vibrate at the excitation frequency. If the frequency of
excitation coincides with one of the natural frequencies of the system, a condition of resonance is
encountered, and dangerously large oscillations may result. The failure of major structures such as
bridges, buildings, or airplane wings is an awesome possibility under resonance. Thus, in the study of
vibrations, the calculation of the natural frequencies is of major importance.

Vibrating systems are all subject to damping to some degree because friction and other resistances
dissipate energy. If the damping is small, it has very little influence on the natural frequencies of the
system, and hence the calculations for the natural frequencies are generally made on the basis of no
damping. On the other hand, damping is of great importance in limiting the amplitude of oscillation
at resonance.

The number of independent coordinates required to describe the motion of a system is called the
degrees of freedom of the system. Thus, a free particle undergoing general motion in space will have
three degrees of freedom, and a rigid body will have six degrees of freedom, i.e., three components of
position and three angles defining its orientation. Furthermore, a continuous elastic body will require
an infinite number of coordinates (three for each point on the body) to describe its motion; hence, its
degrees of freedom must be infinite. However, in many cases, parts of such bodies may be assumed to
be rigid, and the system may be considered to be dynamically equivalent to one having finite degrees
of freedom. In fact, a surprisingly large number of vibration problems can be treated with sufficient
accuracy by reducing the system to one having a few degrees of freedom.
Vibration Measurements 461
Measurements should be made to produce the data needed to draw meaningful conclusions from the
system under test. These data can be used to minimize or eliminate the vibration and thus the resultant
noise. There are also examples where the noise is not the controlling parameter, but rather the quality
of the product produced by the system. For example, in process control equipment, excessive vibration
can damage the product, limit processing speeds, or even cause catastrophic machine failure. The basic
measurement system used for diagnostic analyses of vibrations consists of the three system components
shown in Fig. 18.1.

Fig. 18.1 Basic vibration measurement system: vibration pickups → pre-amplifiers → processing and
display equipment
The basic vibration model of a simple oscillatory system consists of a mass, a massless spring, and a
damper as shown in Fig. 18.2. The spring supporting the mass is assumed to be of negligible mass. Its
force–deflection relationship is considered to be linear, following Hooke’s law,
F = kx (1)

where the stiffness k is measured in newtons per metre (N/m).

The viscous damping, generally represented by a dashpot, is described by a force proportional to
the velocity, or

F = cẋ (2)

The damping coefficient c is measured in newtons per metre per second (N·s/m).
Fig. 18.3 Spring–mass system and free-body diagram
Newton’s second law is the first basis for examining the motion of the system. As shown in Fig. 18.3,
the deformation of the spring in the static equilibrium position is Δ, and the spring force kΔ is equal to
the gravitational force w acting on mass m:

kΔ = w = mg (3)
By measuring the displacement x from the static equilibrium position, the forces acting on m are
k(Δ + x) and w. With x chosen to be positive in the downward direction, all quantities (force, velocity,
and acceleration) are also positive in the downward direction.
We now apply Newton’s second law of motion to the mass m:

mẍ = ΣF = w − k(Δ + x) (4)

and because kΔ = w, we obtain

mẍ = −kx (5)
It is evident that the choice of the static equilibrium position as reference for x has eliminated w, the
force due to gravity, and the static spring force kD from the equation of motion. The resultant force
on m is simply the spring force due to the displacement x.
We define the circular frequency ωn by the equation

ωn² = k/m (6)

Equation (5) can be written as

ẍ + ωn²x = 0 (7)
and we conclude that the motion is harmonic. Equation (7), a homogeneous second order linear
differential equation, has the following general solution:
x = A sin ωn t + B cos ωn t (8)
where A and B are the two necessary constants. These constants are evaluated from the initial
conditions x(0) and ẋ(0), and Eq. (8) can be shown to reduce to

x = (ẋ(0)/ωn) sin ωn t + x(0) cos ωn t (9)
The natural period of the oscillation is established from ωnτ = 2π, or

τ = 2π √(m/k) (10)
and the natural frequency is

fn = 1/τ = (1/2π) √(k/m) (11)
These quantities can be expressed in terms of the static deflection Δ by observing Equation (3),
kΔ = mg. Thus, Equation (11) can be expressed in terms of the static deflection Δ as

fn = (1/2π) √(g/Δ) (12)
Note that τ, fn and ωn depend only on the mass and stiffness of the system, which are properties
of the system.
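Equations (11) and (12) can be checked numerically (a minimal sketch with assumed spring and mass values):

```python
import math

def fn_from_k_m(k_n_per_m: float, m_kg: float) -> float:
    """Eq. (11): fn = (1/2pi) * sqrt(k/m)."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

def fn_from_deflection(delta_m: float, g: float = 9.81) -> float:
    """Eq. (12): fn = (1/2pi) * sqrt(g/delta)."""
    return math.sqrt(g / delta_m) / (2.0 * math.pi)

# A 2 kg mass on an 8000 N/m spring: fn ~ 10.07 Hz. Its static
# deflection delta = mg/k ~ 2.45 mm gives the same frequency, as
# Eq. (12) requires.
m, k = 2.0, 8000.0
f1 = fn_from_k_m(k, m)
f2 = fn_from_deflection(m * 9.81 / k)
```

The agreement of the two routes is a useful sanity check: measuring only the static deflection of a mounted machine is often the quickest way to estimate its natural frequency.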
general has a smaller usable frequency range than the stud-mounted pickup. In addition, it is important
to note that the magnetic mount, which has both mass and springlike properties, is located between
the velocity pickup and the vibrating surface and, thus, will affect the measurements. This mounting
technique is viable, but caution must be employed when it is used.
The velocity pickup is a useful transducer because it is sensitive and yet rugged enough to with-
stand extreme industrial environments. In addition, velocity is perhaps the most frequently employed
measure of vibration severity. However, the device is relatively large and bulky, is adversely affected by
magnetic fields generated by large ac machines or ac current-carrying cables, and has somewhat limited
amplitude and frequency characteristic.
Fig. 18.4 Two transducer mounting techniques [(a) Stud-mounted pickup; (b) Magnetically held
velocity pickup, with silicone grease applied at the mounting surface]
18.4.3 Accelerometers
The accelerometer generates an output signal that is proportional to the acceleration of the vibrating
mechanism. This device is, perhaps, preferred over the velocity pickup, for a number of reasons. For
example, accelerometers have good sensitivity characteristics and a wide useful frequency range. They
are small in size and light in weight and, thus, are capable of measuring the vibration at a specific point
without, in general, loading the vibrating structure. In addition, the devices can be used easily with elec-
tronic integrating networks to obtain a voltage proportional to velocity or displacement. However, the
accelerometer mounting, the interconnection cable, and the instrumentation connections are critical
factors in measurements employing an accelerometer. The general comments made earlier concerning
the mounting of a velocity pickup also apply to accelerometers.
Some additional suggestions for eliminating measurement errors when employing accelerometers for
vibration measurements are shown in Fig. 18.5(a). Note that the accelerometer mounting employs an isola-
tion stud and an isolation washer. This is done so that the measurement system can be grounded at only one
point, preferably at the analyzer. An additional ground at the accelerometer will provide a closed (ground)
loop, which may induce a noise signal that affects the accelerometer output. The sealing compound applied
at the cable entry into the accelerometer protects the system from errors caused by moisture.
The cable itself should be glued or strapped to the vibrating mechanism immediately upon leaving the
accelerometer, and the other end of the cable, which is connected to the preamplifier, should leave the
mechanism under test at a point of minimum vibration. This procedure will eliminate or at least minimize
cable noise caused by dynamic bending, compression, or tension in the cable. Accelerometers for the mea-
surement of acceleration, shock or vibration come in many types using different principles of operation.
Fig. 18.5(a) Mounting technique for eliminating selected measurement errors
For a piezo disk, the generated charge and open-circuit voltage follow from the piezoelectric
constants:

q = d33 • F

u = (d33/e33) • (d/A) • F

where A = electrode area, d = disk thickness, F = applied force, q = generated charge, u = generated
voltage, and d33, e33 = piezo constants.

Fig. 18.6 Piezoelectric effect, basic calculations
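The Fig. 18.6 relations can be evaluated with typical PZT-like constants (the numeric values below are illustrative assumptions, not data from the text):

```python
def piezo_charge_c(d33_c_per_n: float, force_n: float) -> float:
    """q = d33 * F."""
    return d33_c_per_n * force_n

def piezo_voltage_v(d33_c_per_n: float, e33_f_per_m: float,
                    thickness_m: float, area_m2: float,
                    force_n: float) -> float:
    """u = (d33/e33) * (d/A) * F."""
    return (d33_c_per_n / e33_f_per_m) * (thickness_m / area_m2) * force_n

# Assumed disk: d33 = 400 pC/N, e33 = 1.5e-8 F/m, 1 mm thick, 1 cm^2 area.
# A 10 N force then yields q = 4 nC and u ~ 2.7 V.
q = piezo_charge_c(400e-12, 10.0)
u = piezo_voltage_v(400e-12, 1.5e-8, 1e-3, 1e-4, 10.0)
```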
In the accelerometer, a seismic mass m rests on the piezoceramic element; under acceleration a, the
element is loaded by the inertia force F = m • a. The resulting charge q and voltage u define the two
sensor sensitivities:

Charge sensitivity: Bqa = q/a

Voltage sensitivity: Bua = u/a
Over a wide frequency range, both sensor base and seismic mass have the same acceleration magni-
tude. Hence, the sensor measures the acceleration of the test object.
The piezoelectric element is connected to the sensor socket via a pair of electrodes. Some acceler-
ometers feature an integrated electronic circuit, which converts the high-impedance charge output into
a low-impedance voltage signal. Within the useful operating frequency range, the sensitivity is indepen-
dent of frequency, apart from the later-mentioned limitations.
A piezoelectric accelerometer can be regarded as a mechanical low-pass with resonance peak. The
seismic mass and the piezoceramics (plus other ‘flexible’ components) form a spring–mass system. It
shows the typical resonance behavior and defines the upper frequency limit of an accelerometer. In
order to achieve a wider operating frequency range, the resonance frequency should be increased. This is
usually done by reducing the seismic mass. However, the lower the seismic mass, the lower the sensitivity.
Therefore, an accelerometer with high resonance frequency, for example, a shock accelerometer, will be
less sensitive whereas a seismic accelerometer with high sensitivity has a low resonance frequency.
Figure 18.8 shows a typical frequency response curve of an accelerometer when it is excited by a
constant acceleration.
Fig. 18.8 Typical frequency response of an accelerometer under constant-acceleration excitation
(relative sensitivity versus frequency, flat at 1.00 over the useful range and rising toward the
resonance peak), where fL = lower frequency limit, f0 = calibration frequency, and fr = resonance
frequency
The lower frequency limit mainly depends on the chosen preamplifier. Often it can be adjusted. With
voltage amplifiers, the low frequency limit is a function of the RC time constant formed by the acceler-
ometer, cable, and amplifier input capacitance together with the amplifier input resistance.
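With a voltage amplifier, that RC time constant sets a first-order high-pass cutoff; a minimal sketch with assumed capacitance and resistance values:

```python
import math

def lower_frequency_limit_hz(r_in_ohm: float, c_total_f: float) -> float:
    """-3 dB corner of the RC high-pass: fL = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_in_ohm * c_total_f)

# Assumed values: 1 nF accelerometer, 200 pF cable, 100 pF amplifier
# input capacitance, 100 Mohm amplifier input resistance -> fL ~ 1.2 Hz.
c_total = 1e-9 + 200e-12 + 100e-12
f_l = lower_frequency_limit_hz(100e6, c_total)
```

The calculation also shows why voltage amplifiers are sensitive to cable length: a longer cable adds capacitance, changing the corner frequency.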
along with two fixed capacitors, and alter the peak voltage generated by an oscillator when the structure
undergoes acceleration. Detection circuits capture the peak voltage, which is then fed to a summing
amplifier that processes the final output signal.
Capacitive accelerometers sense a change in electrical capacitance, with respect to acceleration, to
vary the output of an energized circuit. When subject to a fixed or constant acceleration, the capaci-
tance value is also a constant, resulting in a measurement signal proportional to uniform acceleration,
also referred as dc or static acceleration.
PCB’s capacitive accelerometers are structured with a diaphragm, which acts as a mass that under-
goes flexure in the presence of acceleration. Two fixed plates sandwich the diaphragm, creating two
capacitors, each with an individual fixed plate and each sharing the diaphragm as a movable plate. The
flexure causes a capacitance shift by altering the distance between two parallel plates, the diaphragm
itself being one of the plates. The two-capacitance values are utilized in a bridge circuit, the electrical
output of which varies with input acceleration.
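The parallel-plate relation behind this capacitance shift is straightforward to sketch (the geometry values are assumptions for illustration; the sensor's actual dimensions are not given in the text):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance_f(area_m2: float, gap_m: float,
                        eps_r: float = 1.0) -> float:
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Assumed geometry: a 4 mm^2 diaphragm centred 10 um from each fixed
# plate. A 1 um flexure toward the upper plate raises one capacitance
# and lowers the other, unbalancing the bridge circuit.
area, gap, flex = 4e-6, 10e-6, 1e-6
c_upper = plate_capacitance_f(area, gap - flex)
c_lower = plate_capacitance_f(area, gap + flex)
```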
18.4.4 Pre-amplifiers
The second element in the vibration measurement system is the pre-amplifier. This device, which may
consist of one or more stages, serves two very useful purposes––it amplifies the vibration pickup signal,
which is in general very weak, and it acts as an impedance transformer or isolation device between the
vibration pickup and the processing and display equipment.
Recall that the manufacturer provides both charge and voltage sensitivities for accelerometers.
Likewise, the pre-amplifier may be designed as a voltage amplifier in which the output voltage is pro-
portional to the input voltage, or a charge amplifier in which the output voltage is proportional to the
input charge. The difference between these two types of pre-amplifiers is important for a number
of reasons. For example, changes in cable length (i.e., cable capacitance) between the accelerometer
and preamplifier are negligible when a charge amplifier is employed. When a voltage amplifier is used,
however, the system is very sensitive to changes in cable capacitance. In addition, because the input
resistance of a voltage amplifier cannot in general be neglected, the very low frequency response of
the system may be affected. Voltage amplifiers, on the other hand, are often less expensive and more
reliable because they contain fewer components and thus are easier to construct.
Shakers can operate under a number of different principles. Mechanical shakers use a motor with
an eccentric on the shaft to generate vibration. Electrodynamic models use an electromagnet to
create force and vibration. Hydraulic systems are useful when large force amplitudes are required,
such as in testing large aerospace or marine structures or when the magnetic fields of electrodynamic
generators cannot be tolerated. Pneumatic systems, known as ‘air hammer tables,’ use pressurized
air to drive a table. Piezoelectric shakers work by applying an electrical charge and voltage to
a sensitive piezoelectric crystal or ceramic element to generate deformation and motion.
Common features of shakers are an integral slip table and active suspension. An integral slip table allows
horizontal, or both horizontal and vertical, testing of samples. The slip table is a large flat plate that rests
on an oil film placed on a granite slab or other stable base. An active suspension system compensates
for environmental or floating platform variations.
The most important specifications for shakers are peak sinusoidal force, frequency range, displace-
ment, peak acceleration and peak velocity. Some of these specifications may be ratings without a load,
as the manufacturers cannot always predict how the shakers will be used.
The three main test modes shakers can have are random vibration, sine-wave vibration and shock
or pulse mode. In a random-vibration test mode, the force and velocity of the table and test sample
will vary randomly over time. A sine-wave test mode varies the force and velocity of the table and
test sample sinusoidally over time. In a shock-test mode, the test sample is exposed to high-amplitude
pulses of force.
Review Questions
1. Discuss the general classes of vibrations.
2. Justify the statement ‘In the study of vibrations, the calculation of the natural frequencies is of
major importance’.
3. Describe a basic vibration measurement system.
4. Explain basic vibration model with an example.
5. Explain the construction, working and applications of velocity pickups.
6. Explain the construction, working and applications of accelerometers.
7. Explain the different principles of operation of accelerometers with the help of a neat sketch.
8. Explain how the stroboscope can be used as an instrument for vibration measurement.
9. Discuss the vibration processing and display equipment.
10. Explain in brief shakers, and vibration-and-shock-testing equipment.
11. Write short notes on
a. Piezoelectric principle of operation of accelerometers
b. Stroboscope
c. Shakers, and vibration-and-shock-testing equipment
19 Pressure Measurement
‘With the steam age came the demand for pressure measuring instruments which can
be expressed relative to various zero references…’
PRESSURE-MEASURING INSTRUMENTS
With the steam age came the demand for pressure-measuring instruments. Pressure gauges are used
for a variety of industrial and application-specific pressure-monitoring applications. Their uses include
visual monitoring of air and gas pressure for compressors, vacuum equipment, process lines and
specialty tank applications such as medical gas cylinders and fire extinguishers. In addition to visual
indication, some pressure gauges are configured to provide electrical output of indicated pressure and
monitoring of other variables such as temperature. Bourdon tubes or bellows, in which mechanical
displacements were transferred to an indicating pointer, were the first pressure instruments, and are
still in use today.
Pressure metrology is the technology of transducing pressure into an electrical quantity. Normally,
a diaphragm construction is used with strain gauges, either bonded to, or diffused into it, acting as
resistive elements. Under the pressure-induced strain, the resistive values change. In capacitive
technology, the pressure diaphragm is one plate of a capacitor that changes its value under
pressure-induced displacement.
It is important to select a pressure range that accommodates all anticipated pressure swings, and
which prevents excessive needle movement. It is recommended to confine normal operating pressure
to 25% to 75% of the scale. With fluctuating pressure (e.g., pulsation by a pump or compressor), the
maximum operating pressure should be lower (50% of the full range). Choices for pressure-gauge
measurement ranges include positive pressure, vacuum measurement, compound measurement,
differential pressure, absolute pressure, and sealed pressure. A positive pressure gauge measures a
pressure range from zero pressure to a higher, positive pressure. Vacuum measurement switches
measure vacuum pressure (negative pressure). A compound pressure gauge measures a pressure range
from negative pressure (vacuum) to positive pressure. Differential pressure gauges give the relative
pressure between two points. If both operating pressures are the same, the measuring element cannot
move and no pressure will be indicated. A differential pressure is indicated when one pressure is
higher or lower. Low differential pressures can be measured directly in cases of high static pressures.
Absolute gauges are used where pressure…
Pressure measurements may be expressed relative to various zero references. Absolute pressure of a
fluid is referenced against a perfect vacuum. Gauge pressure is referenced against ambient air pressure,
so it is equal to absolute pressure minus atmospheric pressure. Atmospheric pressure is typically about
100 kPa, but is variable with altitude and weather. If the absolute pressure of a fluid stays constant,
the gauge pressure of the same fluid will vary as atmospheric pressure changes. For gauge pressures
several times larger than atmospheric pressure, this variation is small as a percentage of reading and
may be ignored. Differential pressure is the difference in pressure between two points.
Examples of absolute pressure measurements include barometric pressure, altimeters, and the Manifold
Absolute Pressure (MAP) sensor used in the engine control systems of modern fuel-injected automobiles.
Examples of gauge pressure measurements include the tyre-pressure gauge and sphygmomanometer. Dif-
ferential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is
to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through
mechanical means, obviating the need for an operator or control system to watch two separate gauges and
determine the difference in readings.
Gauge pressure of vacuum is usually indicated and expressed without a negative sign, so it is equal
to the atmospheric pressure minus the absolute pressure.
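The zero-reference relations described above (absolute, gauge, vacuum, and differential pressure) can be sketched in a few lines of Python; the atmospheric value used is the standard 101 325 Pa, though, as the text notes, the real value varies with altitude and weather.

```python
# Sketch of the zero-reference relations for pressure (SI units, Pa).

P_ATM = 101_325.0  # standard atmosphere, Pa (real values vary with altitude/weather)

def gauge_from_absolute(p_abs, p_atm=P_ATM):
    """Gauge pressure = absolute pressure minus atmospheric pressure."""
    return p_abs - p_atm

def absolute_from_gauge(p_gauge, p_atm=P_ATM):
    """Absolute pressure = gauge pressure plus atmospheric pressure."""
    return p_gauge + p_atm

def vacuum_gauge(p_abs, p_atm=P_ATM):
    """Vacuum readings are conventionally quoted without the negative sign."""
    return p_atm - p_abs

def differential(p1, p2):
    """Differential pressure is simply the difference between two points."""
    return p1 - p2

# A tyre inflated to 220 kPa gauge:
print(absolute_from_gauge(220_000))   # 321325.0 Pa absolute
print(vacuum_gauge(80_000))           # 21325.0 Pa of vacuum
```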
1594 Galileo Galilei, born in Pisa ( Italy), obtains the patent for a machine to pump water from a river for
the irrigation of land. The heart of the pump was a syringe. Galileo Galilei found that 10 metres
was the limit to which the water would rise in the suction pump, but had no explanation for this
phenomenon. Scientists then devoted themselves to finding the cause of this.
1644 Evangelista Torricelli, the Italian physicist filled a 1-metre long tube, hermetically closed at one end,
with mercury and set it vertically with the open end in a basin of mercury. The column of mercury
invariably fell to about 760 mm, leaving an empty space above its level. Torricelli attributed the cause
of the phenomenon to a force on the surface of the earth, without knowing where it came from.
He also concluded that the space on the top of the tube was empty, that nothing was in there, and
called it a ‘vacuum’.
Pressure Measurement 475
1648 Blaise Pascal, French philosopher, physicist and mathematician, heard about the experiments of Torricelli
and was searching for the reasons of Galileo’s and Torricelli’s findings. He came to the conviction that
the force, which keeps the column at 760 mm, is the weight of the air above. Thus, on a mountain, the
force must be reduced by the weight of the air between the valley and the mountain. He predicted that
the height of the column would decrease which he proved with his experiments at the mountain Puy de
Dome in central France. From the decrease he could calculate the weight of the air. Pascal also formu-
lated that this force, which he called ‘pressure’, acts uniformly in all directions.
1656 Otto von Guericke was born in Magdeburg, Germany. Torricelli’s conclusion of an empty space
or ‘nothingness’ was contrary to the doctrine of an omnipresent God and was thus attacked by
the church. Guericke developed new air pumps to evacuate larger volumes and staged a dramatic
experiment in Magdeburg by pumping the air out of two metal hemispheres which had been fitted
together with nothing more than grease. Even eight horses pulling at each hemisphere were not
strong enough to separate them.
1661 Robert Boyle, an Anglo-Irish chemist, used J-shaped tubes closed at one end to study the relation-
ship between the pressure and volume of trapped gas, and stated the law P × V = K (P: Pressure,
V: Volume, K: Constant), which means that if the volume of a gas at a given pressure is known, the
pressure can be calculated if the volume is changed, provided that neither the temperature nor the
amount of gas is changed.
1820 Almost 200 years later, Joseph Louis Gay-Lussac, French physicist and chemist, detected that the
pressure increase of a trapped gas at constant volume is proportional to the temperature. Twenty
years later, William Thomson (Lord Kelvin) defined the absolute temperature.
1930 The first pressure transducers were transduction mechanisms in which the movements of diaphragms,
springs or Bourdon tubes changed an electrical quantity: a pressure diaphragm forms one plate of a
capacitance, or the indicator movement drives the tap of a potentiometer.
1938 The bonded strain gauges were independently developed by E E Simmons of the California Insti-
tute of Technology and A C Ruge of Massachusetts Institute of Technology. Simmons was faster
to apply for a patent.
1955 The first foil-strain gauges came up with an integrated full resistor bridge, which, if bonded on a
diaphragm, induce opposite stress in the centre and at the edge.
1965 The bonding connection of the gauges to the diaphragm was always a cause of hysteresis and
instability. In the 1960s, Statham introduced the first thin-film transducers with good stability and
low hysteresis. Today, the technology is a major player in the market for high pressures.
1967 At the Honeywell Research Center, Minneapolis, USA, Art R Zias and John Egan applied for a patent for
the edge-constrained silicon diaphragm. In 1969, Hans W Keller applied for a patent for the batch-
fabricated silicon sensor. The technology has profited from the enormous progress of IC technology.
A modern sensor typically weighs 0.01 grams. While all non-crystalline diaphragms have inherent
hysteresis, the hysteresis of these silicon sensors is not detectable by today’s means.
1973 William R Poyle applied for a patent for capacitive transducers on a glass or quartz basis, and Bob Bell
of Kavlico did the same on a ceramic basis a few years later, in 1979. This technology filled the gap for
lower pressure ranges (for which thin film was not suited) and is today, also with resistors on ceramic
diaphragms, the most widespread technology for non-benign media.
2000 The piezoresistive technology is the most universal one. It applies for pressure ranges from 100 mbar
to 1500 bar in the absolute, gauge and differential pressure mode. The slow spread of the technology in
high-volume applications for non-benign media resulted from the inability of US companies to develop
a decent housing. In 30 years, KELLER has perfected it at costs comparable to any other technology.
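Boyle's law from the 1661 entry (P × V = K at constant temperature and amount of gas) lends itself to a one-line worked example; the pressure and volume values below are illustrative.

```python
# Boyle's law sketch: P1 * V1 = P2 * V2 at constant temperature
# and constant amount of trapped gas.

def pressure_after_compression(p1, v1, v2):
    """New pressure when a trapped gas is taken from volume v1 to v2."""
    return p1 * v1 / v2

# Halving the volume doubles the pressure:
print(pressure_after_compression(100_000.0, 2.0, 1.0))  # 200000.0 Pa
```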
There are several common types of mechanical analog pressure gauges including bellows, Bourdon
tubes, capsule elements and diaphragm element gauges. Analog pressure gauges should be selected
considering the media and ambient operating conditions. Gauge selection should take into consider-
ation the corrosive environment in which it is to operate. The media being measured must be compat-
ible with the wetted parts of the pressure instrument. Improper application can damage the analog
pressure gauge, causing failure or personal injury and property damage. Diaphragm seals (also called
gauge isolators) can be added to the system to protect the gauge from corrosive attack, and prevent
viscous or dirty media from clogging Bourdon tube analog pressure gauges.
Fig. 19.1 Typical Bourdon-tube pressure gauges: (a) original state and (b) deformed state of the
Bourdon tube under pressure P, with indicating needle
The pressure of the media acts on the inside of this tube resulting in the oval cross section becoming
almost round. Because of the curvature of the tube ring, the Bourdon tube bends when tension occurs.
The end of the tube (which is not fixed) moves, thus being a measurement of the pressure. Bourdon tubes
with a number of superimposed coils of the same diameter (helical coils) are used for measuring high
pressures. In 1849, the Bourdon tube pressure gauge was patented in France by Eugene Bourdon.
In a Bourdon tube, internal linkages are simplified. The external pressure is guided into the tube
and causes it to flex, resulting in a change in curvature of the tube. These curvature changes are linked
to the dial indicator for a number readout. Alternatively, a strain-gauge circuit can be attached on the
tube to convert the pressure-induced deflections into electric voltage signals. These signals can then
be output electronically, rather than mechanically, with the dial indicator. A mercury barometer can be
used to calibrate and check Bourdon tubes.
Limitations Limited to static or quasi-static measurements; accuracy may be insufficient for many
applications.
Fig. 19.3 Diaphragm element: deflection W under the pressure difference PExt − PRef
The plate deflection depends upon its material properties, geometric properties, and boundary con-
ditions, and on the magnitude of the loading. Some particular results can be found in most textbooks
or handbooks on theory of plates, such as Roark’s Formulas for Stress and Strain by Young and Roark and
Formulas for Stress, Strain, and Structural Matrices by Pilkey.
A diaphragm pressure gauge uses the elastic deformation of a diaphragm (i.e., membrane), instead of a
liquid level, to measure the difference between an unknown pressure and a reference pressure. A typical
diaphragm pressure gauge contains a capsule divided by a diaphragm, as shown in Fig. 19.4. One side of
the diaphragm is open to the external targeted pressure, PExt, and the other side is connected to a known
pressure, PRef.
The pressure difference, PExt − PRef , mechanically deflects the diaphragm.
Fig. 19.4 Diaphragm pressure gauge: capsule divided by a diaphragm, deflecting by W under PExt − PRef
The membrane deflection can be measured in any number of ways. For example, it can be detected
via a mechanically coupled indicating needle, an attached strain gauge, a linear variable differential
transformer (LVDT; see Fig. 19.5 ), or with many other displacement/velocity sensors. Once known,
the deflection can be converted to a pressure loading using plate theory.
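Plate theory, as mentioned above, gives closed-form deflections for simple diaphragm geometries. A minimal sketch for one standard case — a clamped circular plate under uniform pressure, whose centre deflection is w0 = p·a⁴/(64D) with flexural rigidity D = Et³/(12(1 − ν²)), a result tabulated in Roark — follows; the steel properties and dimensions are assumed for illustration.

```python
# Hedged sketch of small-deflection plate theory for a clamped circular
# diaphragm under uniform pressure (standard result from handbooks such as
# Roark's Formulas for Stress and Strain):
#   w0 = p * a**4 / (64 * D),   D = E * t**3 / (12 * (1 - nu**2))
# Material and geometry values below are illustrative assumptions.

def flexural_rigidity(E, t, nu):
    """Flexural rigidity D of a plate of modulus E, thickness t, Poisson ratio nu."""
    return E * t**3 / (12.0 * (1.0 - nu**2))

def centre_deflection(p, a, E, t, nu):
    """Centre deflection of a clamped circular plate of radius a, uniform pressure p."""
    D = flexural_rigidity(E, t, nu)
    return p * a**4 / (64.0 * D)

# Assumed steel diaphragm: E = 200 GPa, nu = 0.3, radius 10 mm, thickness 0.5 mm
w0 = centre_deflection(p=50_000.0, a=0.010, E=200e9, t=0.0005, nu=0.3)
print(w0)  # a few micrometres -- well inside the "deflection < thickness" linear range
```

Since the computed deflection is far smaller than the diaphragm thickness, the linearity condition quoted under "Advantages" below holds for these assumed values.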
Fig. 19.5 Diaphragm deflection W measured with an LVDT
Advantages Much faster frequency response than U-tubes, accuracy up to ±0.5% of full scale,
good linearity when the deflection is no larger than the order of the diaphragm thickness
p + ρA g (h + Δh ) = ρB g Δh + ρC gh + pRef
⇒ p = pRef + (ρB − ρA ) g Δh + (ρC − ρA ) gh
If Fluid C is the atmosphere, Fluid B is the liquid in the U-tube (e.g., water or mercury), and Fluid
A is a gas, then we can assume that ρB ≫ ρA, ρC. The pressure contributed by the weight of gas within the
U-tube can, therefore, be neglected.
[Figure: U-tube manometer — Fluid A (gas in most cases) at the unknown pressure; Fluid B, the
manometer liquid (e.g., water or mercury), displaced by Δh; Fluid C at the reference pressure PRef
(atmospheric pressure in most cases)]
The unknown pressure can then be approximated by
p ≈ pRef + ρB g Δh
⇒ pgauge = p − pRef = ρB g Δh
To automate the pressure measurement in a U-tube manometer, resistance wires can be placed in the
two limbs so that the liquid level sets their resistances to Rw + ΔR/2 and Rw − ΔR/2; the wires form
two arms of a Wheatstone bridge whose output is Vout. The pressure difference is then
Δp = c ΔR = k (ΔR/RW) ≈ ρB g Δh
where c and k = cRw are factors that can be obtained during calibration.
For an initially balanced Wheatstone Bridge, the voltage output is given by
Vout = [r / (1 + r)²] (ΔR / RW) Vin
where r = Rw / RRef is the efficiency of the bridge circuit.
Thus, the unknown gas pressure (with respect to the reference pressure) is proportional to the output
voltage:
Δp = k (ΔR / RW) = k [(1 + r)² / r] (Vout / Vin)
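A numerical check of these bridge relations, with assumed values for the excitation voltage, bridge ratio r, wire resistance and calibration factor k:

```python
# Numerical sketch of the Wheatstone-bridge manometer relations
# (all numeric values are illustrative assumptions):
#   Vout = Vin * r * dR / ((1 + r)**2 * Rw)
#   dp   = k * (1 + r)**2 / r * (Vout / Vin)

def bridge_output(v_in, r, dR, Rw):
    """Output voltage of an initially balanced bridge with arm change dR."""
    return v_in * r * dR / ((1.0 + r)**2 * Rw)

def pressure_from_output(v_out, v_in, r, k):
    """Invert the bridge relation to recover the pressure difference."""
    return k * (1.0 + r)**2 / r * (v_out / v_in)

v_in, r, Rw, k = 10.0, 1.0, 100.0, 5_000.0   # assumed excitation/calibration values
dR = 0.4                                     # resistance change, ohms (assumed)
v_out = bridge_output(v_in, r, dR, Rw)
print(v_out)                                    # 0.01 V
print(pressure_from_output(v_out, v_in, r, k))  # recovers k*dR/Rw = 20 Pa (to rounding)
```

Round-tripping through both functions recovers k·ΔR/RW, confirming the forward and inverse relations are consistent.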
Advantages Low cost, simple and reliable
Limitations Low dynamic response rate, requires time to damp out oscillations, measurement
accuracy dependent on precise leveling of U-tube, and cannot be used in weightless (0 g) environments.
The liquid in the U-tube must NOT interact with a measured fluid (be it gas or liquid). Mercury or
water-vapour contamination can occur, especially in low-pressure measurements.
Pressure = Force / Area
Gravity varies significantly with geographical location and this variation has a direct effect on the force of
the weights and the accuracy of the deadweight tester. Each instrument can be calibrated to local gravity. If
unspecified, instruments will be supplied and calibrated to a standard gravity of 980.665 cm/s². Instruments
are generally supplied with an integral carrying case, making them neat, compact and easily portable. Com-
ponents are stored in the detachable lid, which also provides excellent protection from dirt and damage
when the tester is in transit or storage. Unique test station connections allow quick hand-tight sealing. A
spirit level and adjustable feet are provided to enable the operator to level the instrument. A floatation
indicator is mounted on the top plate, eliminating guesswork when floating the piston. Weights are stored
in a separate box.
In the case of hydraulic deadweight testers, with an accuracy better than 0.015% of reading, dual-piston
systems allow calibration over a wide range: the piston is automatically selected without valving or
piston exchange. The overhanging weight carrier protects the carbide piston and improves rotational
spin, sensitivity and stability; water systems eliminate oil contamination; and pressure is generated by
a ram screw.
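The deadweight principle — pressure equals the force of the calibrated weights divided by the effective piston area, with the force corrected for local gravity — can be sketched as follows; the mass and area values are illustrative assumptions.

```python
# Deadweight tester sketch: pressure = force / area, with the force of the
# weights corrected for local gravity (numeric values are assumptions).

G_STANDARD = 9.80665  # m/s^2, standard gravity

def deadweight_pressure(mass_kg, piston_area_m2, g_local=G_STANDARD):
    """Pressure generated by a mass resting on a piston of known effective area."""
    return mass_kg * g_local / piston_area_m2

area = 8.0e-6   # effective piston area, m^2 (assumed)
mass = 4.0      # kg of calibrated weights (assumed)

print(deadweight_pressure(mass, area))                 # at standard gravity
print(deadweight_pressure(mass, area, g_local=9.781))  # e.g., near the equator
```

The two printed values differ by roughly 0.26%, which is why the text stresses calibrating each instrument to local gravity.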
Suppose that initial pressure and volume in a McLeod gauge are given by
P1 = Pi
V1 = V + A·h0
where, V is the reservoir volume and A is the cross-sectional
area of the sealed tube, as shown in Fig. 19.10.
Suppose that the final compressed pressure and volume are given by
P2 = Pgauge
V2 = A·h
According to Boyle’s law, we have
Pi ⋅(V + A ⋅ h0 ) = Pgauge ⋅ A ⋅ h
For a typical manometer reading, the compressed gas supports the liquid column of height h, so
Pgauge = Pi + ρgh. The unknown pressure Pi can then be reduced to a function of the height
difference h:
Pi = ρ g A h² / (V + A (h0 − h))
Fig. 19.10 McLeod gauge
Furthermore, the volume of the reservoir is usually much larger than the tube:
V ≫ A (h0 − h)
This allows us to drop the A (h0 − h) term in the denominator, resulting in a simple quadratic function
for the pressure:
Pi ≈ ρ g A h² / V
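The McLeod relations can be checked numerically. The sketch below keeps the tube cross-section A in the numerator, as the Boyle's-law derivation requires; the dimensions are illustrative assumptions.

```python
# Numerical sketch of the McLeod-gauge relations (illustrative dimensions).

RHO_HG = 13_560.0   # mercury density, kg/m^3 (approximate)
G = 9.81            # m/s^2

def mcleod_exact(h, h0, A, V, rho=RHO_HG, g=G):
    """Exact form: Pi = rho*g*A*h^2 / (V + A*(h0 - h))."""
    return rho * g * A * h**2 / (V + A * (h0 - h))

def mcleod_approx(h, A, V, rho=RHO_HG, g=G):
    """Approximation valid when the reservoir volume dominates: V >> A*(h0 - h)."""
    return rho * g * A * h**2 / V

V = 100e-6       # reservoir volume, m^3 (assumed)
A = 1e-6         # tube cross-sectional area, m^2 (assumed)
h0, h = 0.10, 0.005   # initial and final mercury heights, m (assumed)

print(mcleod_exact(h, h0, A, V))
print(mcleod_approx(h, A, V))   # nearly identical, since V >> A*(h0 - h)
```

For these dimensions both forms agree to well under one per cent, illustrating why the reservoir term may be dropped.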
It is considered the standard for low-pressure (vacuum) measurements, where the pressure is below
10−4 torr (10−4 mmHg, 1.33×10−2 Pa, 1.93×10−6 psi). A McLeod gauge compresses a sample
of low-pressure gas to a sufficiently high pressure, obtains the compressed pressure from a standard
manometer, and then calculates the original low pressure through Boyle’s law. The compression is
performed using a dense, nearly incompressible, low-vapour-pressure fluid, such as mercury. A sche-
matic of the McLeod gauge is shown in Fig. 19.11.
The error in a typical McLeod gauge measurement is usually larger than 1% and may be much larger,
due to the possibility of gas-to-liquid (or solid) phase change during compression, and to the contami-
nation by mercury vapours.
Limitations Limited to static measurements; accuracy may not be high enough for some applica-
tions; cannot be used in weightless (0 g) environments; the liquid in the McLeod gauge must NOT
interact with the targeted gas; condensation of low-pressure gas to the liquid/solid phase may occur
during the compression stage; and contamination by mercury vapours may occur.
Fig. 19.11 Schematic of a McLeod gauge: low-pressure gas at unknown pressure Pi, plunger, sealed
tube of cross-sectional area A with heights h and h0, reservoir of volume V, and fluid (e.g., mercury)
of density ρ
[Figure: thermocouple vacuum gauge — a hot surface heated by an adjustable current, a thermocouple
read on a milliammeter, and the glass tube as the cold surface; gas at pressure Pi]
2. Resistance Thermometer (Pirani Gauge) In this case, the functions of heating and
temperature measurement are combined in a single element. A construction is shown in Fig. 19.13.
The resistance element is in the form of four coiled tungsten wires connected in parallel and supported
inside a glass tube to which the gas is admitted. Again, the cold surface is the glass tube. Two identical
tubes generally are connected in a bridge circuit, as shown in the second figure of 19.13. One of the
tubes is evacuated to a very low pressure and then sealed off while the other has the gas admitted to it.
The evacuated tube acts as a compensator to reduce the effect of bridge-excitation voltage changes and
the temperature changes on the output reading. Current flowing through the measuring element heats it
to a temperature depending upon the gas pressure. The electrical resistance of the element changes with
the temperature, and this resistance change causes a bridge unbalance. Generally, the bridge is used as a
deflection rather than a null device. To balance the bridge initially, the pressure in the measuring element is
made very small and the balance pot is set for zero output. Any changes in the pressure will cause a bridge
unbalance. This gauge covers the range of measurement from 10−5 to 1 torr.
Thermistor vacuum gauges operate on the same principle as the Pirani gauge, except that the
resistance elements are temperature-sensitive semiconductors called thermistors, rather than metals
such as tungsten or platinum.
Fig. 19.13 Pirani gauge: measuring element (at gas pressure Pi) and evacuated compensating element
of tungsten wire, connected with resistances R in a Wheatstone bridge A–B–C–D with excitation
voltage V and output Vo
Digital pressure gauges are devices that convert applied pressure into signals. Readouts are then displayed
numerically. Many pressure-gauging technologies are available. Devices that use mechanical deflection
include an elastic or flexible element such as a diaphragm that responds to changes in pressure. Digital
pressure gauges that include a bridge circuit also use a diaphragm, but only to detect changes in capaci-
tance. Typically, strain gauges or strain-sensitive variable resistors are used as elements in Wheatstone
bridge circuits that perform measurements. Other digital pressure gauges use pistons, vibrating ele-
ments, MicroElectroMechanical Systems (MEMS), or thin films to sense changes in pressure. Some
devices use piezoelectric sensors to measure dynamic and quasi-static pressure. Generally, these sensors
have two modes: charge and voltage. Charge mode generates a high-impedance charge, and voltage
mode uses an amplifier to convert the high-impedance charge into a low-impedance output voltage.
Digital pressure gauges are capable of performing various pressure measurements and displaying
amounts in different units. Absolute pressure is a pressure measurement that is relative to a perfect
vacuum. Typically, vacuum pressures are lower than the atmospheric pressure. Gauge pressure, the
most common type of pressure measurement, is relative to the local atmospheric pressure. By contrast,
sealed gauge pressure is relative to one atmosphere of pressure at sea level. Differential pressure
reflects the difference between two input pressures. In terms of units, some digital pressure gauges dis-
play measurements in pounds per square inch (PSI), kilopascals, bars or millibars, inches or centimetres
of mercury, or inches or feet of water. Other devices display measurements in ounces per square inch
or kilograms per square centimetre.
19.8 IMPACT
After years of manufacturing melt-pressure sensors, GEFRAN engineers thought they could greatly
improve the design, and have spent several years creating a new instrument called ‘Impact.’ Impact is
radically different from the fluid transmission type of sensors. The new design requires, in the manu-
facturing process, extensive use of lasers and special alloys and the coupling of different materials like
steel and ceramics. In creating the new design, Gefran generated four patents. A major design com-
mitment was a new and highly sensitive monolithic piezoresistive sensor, made with MEMS technol-
ogy. The square silicon chip contains both the membrane and sensitive element. It is shown in Fig.
19.17, and mounted in its carrier on the front of the cylinder in Fig. 19.18. The new sensor is so sensi-
tive that its maximum deflection is of the order of one ten-thousandth of a millimetre.
[Figure: process contact membrane and push rod]
System Overview To do real-time sensing of the exact pressure inside the tyre, the sensing
device must be located in the tyre. This pressure-measurement information must then be carried to
the driver and displayed in the cabin of the car. The remote-sensing module comprises a pressure
sensor, a signal processor, and an RF transmitter. The system must compensate for pressure variations
due to temperature. Hence, a temperature sensor is also required.
The power supply is provided by a long-life battery that the embedded intelligence helps to manage
as effectively as possible. The receiver could be either dedicated to TPM use, or shared with the other
functions in the car.
Remote Sensing Module (RSM) Once mounted in the tyre, the RSM is a stand-alone device.
Its embedded intelligence has to independently manage the sensing functions, the measurement pro-
cessing, the RF transmission, and the power management.
To address each of these functions, Motorola offers two new components as a solution. The TPMS
Sensor is an integrated monolithic chip device comprising both a temperature and a pressure
sensor with on-board circuitry.
The second component is a microcontroller and an RF transmitter, with both chips housed in the
same package.
TPM Sensor The Motorola TPM pressure sensor uses less than 0.5 μA in standby mode. The
pressure-sensing cell is capacitive and requires a C to V (capacitance to voltage) conversion stage. The
sensor’s built-in non-volatile memory can store calibration data while the ADC allows a direct digital
serial connection to the controller. In standby mode, all analog and digital blocks are switched off,
except an internal low frequency oscillator that sends a wake-up pulse over an output pin to the control-
ler periodically.
A pressure-measurement mode allows the pressure cell, and the C to V converter to be activated.
The temperature measurement mode activates the temperature cell (a PTC resistor) and its condition-
ing block.
Finally, the read mode enables the measurements to be stored in a sampling capacitor. The read
mode activates the A to D converter and enables the controller to serially read the measurement. These
four modes are coded through two input pins controlled by the microcontroller. The coding is chosen
so as to make the standby mode coded with logic zero on both pins.
Microcontroller The 68HC08RF2 device was chosen for its combination of an HC08 micro
together with an RF transmitter in a single 32-pin LQFP package. The dual-chip HC08RF2 has no
internal connections between the controller die and the RF die, but the pinout is optimised to shorten
the necessary external connections. The 2 Kbytes of user Flash memory with an embedded charge
pump allow designers to implement the necessary software routines to address the TPMS application’s
functional requirements.
The RF transmitter is PLL-based, addressing both ASK (amplitude-shift keying) and FSK (frequency-
shift keying) modulation, and its transmission rate is configurable up to 9600 baud. With a reference
quartz oscillator of 13.56 MHz, the PLL is able to generate 315, 433 and 868 MHz carriers.
System Architecture The HC08RF2 controls the sensor state by setting the different operat-
ing modes. When the sensor is set in standby mode, its internal low-frequency oscillator periodically
wakes up the controller. After each wake-up, the controller may run different and configurable tasks
according to the software program. Between two wake-up pulses, the microcontroller is in the stop
mode, all functions are disabled to minimise the power consumption, and only an external stimulus can
wake it up again.
To improve the battery management, an inertial switch can be employed to detect the parking mode.
In parking conditions, the RF transmissions can be stopped or reduced, improving power management
and reducing the data collision risk between RKE and TPM transmissions. The RSM must be as small
and lightweight as possible since it is mounted inside the tyre. An oversized RSM could result in wheel
imbalance.
Single Receiver A single receiver can be shared between both the RKE and TPM systems since
the same transmitting format is used in both. The TPM function must use as little CPU time as pos-
sible and to achieve this, a highly integrated RF receiver such as the MC33591, also called Romeo 2, is
required.
This RF receiver was developed in order to provide a comprehensive RF link that is integrable in
RKE and TPM systems with Romeo 2 at one end, and the HC08RF2 at the other end. Thanks to its
embedded RF decoding and data registers, the chip minimizes the communication with the receiver
microcontroller. The MCU is not called until a valid data frame is received, validated, and stored by the
Romeo 2 device.
Tyre Identification The simplest way to perform tyre identification is manual initialization, per-
formed in the factory or in the garage each time a tyre is replaced or moved (rotated). The second
method is by automatic identification. Using this method, the system locates each tyre automatically
by a learning procedure that is activated regularly, or upon request. Combining different information
sources could be the path taken to meet these needs. TPM is in fact, destined to become more inte-
grated into the vehicle architecture.
Review Questions
5. Describe the bellows pressure gauge with the help of neat figures.
6. Discuss the construction, working and applications of a U-Tube manometer.
7. Justify the statement, ‘Deadweight tester is the basic primary standard used worldwide for the
accurate measurement of pressure’.
8. Discuss the construction, working and applications of a McLeod gauge.
9. Compare different pressure gauges.
10. Write short notes on
a. Bourdon tube
b. Error in typical McLeod gauge measurements
c. Digital pressure gauges
d. Pressure transmitters
e. Pressure measurement at high temperatures
20 Temperature Measurement
Several temperature scales have been developed to provide a standard for indicating the temperatures
of substances. The most commonly used scales include the Fahrenheit, Celsius, Kelvin, and Rankine
temperature scales. The Fahrenheit (°F) and Celsius (°C) scales are based on the freezing point and
boiling point of water. The freezing point of a substance is the temperature at which it changes its
physical state from a liquid to a solid. The boiling point is the temperature at which a substance changes
from a liquid state to a gaseous state. To relate readings on the Celsius, Réaumur, Fahrenheit, Kelvin,
and Rankine scales, the fixed points of water shown in Table 20.1 are used.
Table 20.1 Fixed points of water on common temperature scales

                                                 °C      °Réaumur   °F      K        °Rankine
Boiling point of water (at 1 atm = 101325 Pa)    100     80         212     373.15   671.67
Freezing point of water (at 1 atm = 101325 Pa)   0       0          32      273.15   491.67
Interval, freezing point to boiling point
of water (at 1 atm = 101325 Pa)                  100     80         180     100      180
Triple point of water
(solid–liquid–gas equilibrium)                   0.01    0.008      32.02   273.16   491.69
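The scale relations behind the fixed points above can be expressed as simple conversion functions, using Celsius as the hub; a minimal sketch:

```python
# Temperature-scale conversion sketch, with Celsius as the hub scale.

def c_to_f(c):
    """Celsius to Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

def c_to_k(c):
    """Celsius to Kelvin."""
    return c + 273.15

def c_to_rankine(c):
    """Celsius to Rankine (Kelvin scaled by 9/5)."""
    return (c + 273.15) * 9.0 / 5.0

def c_to_reaumur(c):
    """Celsius to Reaumur."""
    return c * 4.0 / 5.0

for c in (0.0, 100.0):   # freezing and boiling points of water
    print(c_to_f(c), c_to_k(c), c_to_rankine(c), c_to_reaumur(c))
# 0 degC   -> 32 degF, 273.15 K, 491.67 degR, 0 degRe
# 100 degC -> 212 degF, 373.15 K, 671.67 degR, 80 degRe
```

Evaluating the functions at 0 °C and 100 °C reproduces the freezing- and boiling-point rows of Table 20.1.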
The Kelvin (K) and Rankine (°R) scales given in Table 20.1 are typically used in engineering calcu-
lations and scientific research. They are based on a temperature called absolute zero. Absolute zero is a
theoretical temperature where there is no thermal energy or molecular activity. Using absolute zero as
a reference point, temperature values are assigned to the points at which various physical phenomena
occur, such as the freezing and boiling points of water.
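The fixed points in Table 20.1 reduce to simple linear conversions between the scales, which can be sketched as follows; the function names are illustrative, not from the text.

```python
# Temperature-scale conversions built from the fixed points in Table 20.1.

def c_to_f(c):
    """Celsius to Fahrenheit: 100 degC maps to 212 degF."""
    return c * 9.0 / 5.0 + 32.0

def c_to_k(c):
    """Celsius to Kelvin: absolute zero is -273.15 degC."""
    return c + 273.15

def c_to_reaumur(c):
    """Celsius to Reaumur: the 0-100 degC interval maps to 0-80 degRe."""
    return c * 4.0 / 5.0

def f_to_rankine(f):
    """Fahrenheit to Rankine: absolute zero is -459.67 degF."""
    return f + 459.67

boiling_c = 100.0
print(c_to_f(boiling_c))                        # 212.0
print(round(c_to_k(boiling_c), 2))              # 373.15
print(round(f_to_rankine(c_to_f(boiling_c)), 2))  # 671.67
```

Running the conversions against the other rows of Table 20.1 (freezing point, triple point) is a quick self-check of both the table and the formulas.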
Temperature-measuring devices are classified into two major groups, temperature sensors and absolute
thermometers. Sensors are classified according to their construction. Three of the most common types of
temperature sensors are thermocouples, resistance temperature devices (RTDs), and filled systems. Typi-
cally, temperature indications are based on material properties such as the coefficient of expansion, tem-
perature dependence of electrical resistance, thermoelectric power, and velocity of sound. Calibrations for
temperature sensors are specific to their material of construction. Temperature sensors that rely on material
properties never have a linear relationship between the measurable property and temperature. The accuracy
of absolute thermometers does not depend on the properties of the materials used in their construction.
The temperature of an object or substance can be calculated directly from measurements taken
with an absolute thermometer. Types of absolute thermometers include the gas-bulb thermometer,
radiation pyrometer, noise thermometer, and acoustic interferometer. The gas-bulb thermometer is
the most commonly used. Temperature measuring devices can also be categorized according to the
manner in which they respond to produce a temperature measurement. In general, the response will be
either mechanical or electrical. Mechanical temperature devices respond to temperature by producing
mechanical action or movement. Electrical temperature devices respond to temperature by producing
or changing an electrical signal.
For example, titanium melts at 1941 K (1668°C, 1334°Ré, 3034°F, 3494°R), and the surface of the sun
is at about 5800 K (5526°C, 4421°Ré, 9980°F, 10440°R).
The most common temperature-measuring devices, discussed in the sections that follow, are
i. Thermometer
ii. Thermocouple
iii. Resistance Temperature Detectors
iv. Thermistor
v. Pyrometers
20.3 THERMOMETER
One of the most common devices for measuring temperature is the glass thermometer. This consists
of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature
increases cause the fluid to expand, so the temperature can be determined by measuring the volume
of the fluid. Such thermometers are usually calibrated, so that one can read the temperature simply by
observing the level of the fluid in the thermometer. Another type of thermometer that is not really
used much in practice, but is important from a theoretical standpoint, is the gas thermometer.
The theoretical basis for thermometers is the zeroth law of thermodynamics which postulates that
if you have three bodies, A, B and C, and if A and B are at the same temperature, and B and C are at
the same temperature then A and C are at the same temperature. B, of course, is the thermometer.
The practical basis of thermometry is the existence of triple-point cells. Triple points are conditions
of pressure, volume and temperature such that the three phases of matter are simultaneously present.
The temperature of the air near the surface of the earth is usually determined by a thermometer in a
Stevenson screen. The thermometers should be between 1.25 m (4 ft 1 in) and 2 m (6 ft 7 in) above the
ground as defined by the World Meteorological Organization (WMO). The true daily mean, obtained
from a thermograph, is approximated by the mean of 24 hourly readings and may differ by up to 1.0°C
from the average based on minimum and maximum readings.
20.3.1 A Mercury-In-Glass Thermometer
A mercury-in-glass thermometer is a thermometer consisting of mercury in a glass tube shown in Fig. 20.1.
Calibrated marks on the tube allow the temperature to be read by the length of the mercury within the tube,
which varies according to the temperature. To increase the sensitivity, there is usually a bulb of mercury
at the end of the thermometer which contains most of the mercury; expansion and contraction of this
volume of mercury is then amplified in the much narrower bore of the tube. The space above the mercury
may be filled with nitrogen or it may be a vacuum. The break in the column of mercury is visible.
A special kind of mercury thermometer, called a maximum
thermometer, works by having a constriction in the neck close to the
bulb. As the temperature rises, the mercury is pushed up through the
constriction by the force of expansion. When the temperature falls,
the column of mercury breaks at the constriction and cannot return to
the bulb, thus remaining stationary in the tube. The observer can then
read the maximum temperature over a set period of time. To reset the
thermometer, it must be swung sharply. This is similar to the design of
a medical thermometer.
Fig. 20.1 Mercury-in-glass thermometer
Mercury will solidify (freeze) at –38.83°C (–37.89°F) and so may only be used at higher temperatures.
Mercury, unlike water, does not expand upon solidification and will not break the glass tube, making
it difficult to notice when frozen. If the thermometer contains nitrogen, the gas may flow down into
the column and be trapped there when the temperature rises. If this happens, the thermometer will be
unusable until returned to the factory for reconditioning. To avoid this, some weather services require that
all mercury thermometers be brought indoors when the temperature falls to −37°C (−34.6°F). In areas
where the maximum temperature is not expected to rise above −38.83°C (−37.89°F), a thermometer
containing a mercury–thallium alloy may be used. This has a solidification (freezing) point of −61.1°C
(−78°F ). The thermometer was used by the originators of the Fahrenheit and Celsius temperature scales.
Today, mercury thermometers are still widely used in meteorology; in other applications, however, they
are becoming increasingly rare, as mercury is highly and permanently toxic to the nervous system and
many countries have banned them outright from medical use. Some manufacturers use a liquid alloy of
gallium, indium, and tin, galinstan, as a mercury replacement.
Fig. 20.2 Bimetallic strip (rivet and contact)
Two metals make up the bimetallic strip (hence the name). In Fig. 20.2, Metal B would be chosen to
expand faster than Metal A if the device were being used in an oven. In a refrigerator, the opposite
set-up could be used, so that as the temperature rises, Metal A expands faster than Metal B. This causes
the strip to bend upward, making contact so that current can flow. By adjusting the size of the gap between
the strip and the contact, one controls the switching temperature. We will often find long bimetallic strips coiled
into spirals. This is the typical layout of a backyard dial thermometer. By coiling a very long strip, it
becomes much more sensitive to small temperature changes. In a furnace thermostat, the same tech-
nique is used and a mercury switch is attached to the coil. The switch turns the furnace on and off.
20.4 THERMOCOUPLE
The thermocouple is a thermoelectric temperature sensor, which consists of two dissimilar metallic
wires, e.g., one chromel and one constantan, coupled at the probe tip (measurement junction) and
extended to the reference (known temperature) junction. The temperature difference between the probe
tip and the reference junction is detected by measuring the change in voltage (electromotive force, EMF)
at the reference junction. The absolute temperature reading can then be obtained by combining the
information of the known reference temperature and the difference of temperature between probe tip
and the reference. Thomas Seebeck made this discovery in 1821; this effect is called the Seebeck effect.
All dissimilar metals exhibit this effect. The most common combinations of two metals are listed
in Table 20.4 along with their important characteristics. For small changes in temperature, the Seebeck
voltage is linearly proportional to temperature:
eAB = α ΔT
where, α = the Seebeck coefficient, the constant of proportionality.
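The small-signal relation above can be tried numerically. This is a minimal sketch using the Type E sensitivity of 60.9 μV/°C quoted in Table 20.4; real thermocouples need their full calibration tables for anything beyond small temperature changes.

```python
# Linear Seebeck approximation: e_AB = alpha * dT.
# alpha for a Type E (chromel-constantan) couple near 25 degC is about
# 60.9 uV/degC (Table 20.4); valid only for small temperature differences.

ALPHA_E = 60.9e-6  # V per degC, small-signal Seebeck coefficient

def seebeck_emf(delta_t_degc, alpha=ALPHA_E):
    """EMF (volts) for a small temperature difference between junctions."""
    return alpha * delta_t_degc

def delta_t_from_emf(emf_volts, alpha=ALPHA_E):
    """Invert the linear relation to recover the temperature difference."""
    return emf_volts / alpha

dT = 10.0  # degC between measuring and reference junctions
e = seebeck_emf(dT)
print(round(e * 1e6, 1))                 # EMF in microvolts: 609.0
print(round(delta_t_from_emf(e), 6))     # 10.0
```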
Equivalent circuits for a voltmeter connected across a copper–constantan thermocouple (junctions J1,
J2 and J3; voltages V1, V2 and V3)
We would like the voltmeter to read only V1, but by connecting the voltmeter in an attempt to
measure the output of Junction J1, we have created two more metallic junctions: J2 and J3. Since J3 is a
copper-to-copper junction, it creates no thermal EMF (V3 = 0), but J2 is a copper-to-constantan junc-
tion which will add an emf (V2) in opposition to V1. The resultant voltmeter reading V will be propor-
tional to the temperature difference between J1 and J2. This says that we can’t find the temperature at J1
unless we first find the temperature of J2.
One way to determine the temperature of J2 is to physically put the junction into an ice bath, forcing its
temperature to be 0°C and establishing J2 as the reference junction. Since both voltmeter terminal junctions
are now copper–copper, they create no thermal emf and the reading V on the voltmeter is proportional to
the temperature difference between J1 and J2. Now the voltmeter reading is (see Fig. 20.5):
V = (V1 − V2) ≈ α (tJ1 − tJ2)
If we specify TJ1 in degrees Celsius:
TJ1 (°C) + 273.15 = tJ1
then V becomes
V = V1 − V2 = α [(TJ1 + 273.15) − (TJ2 + 273.15)]
  = α (TJ1 − TJ2) = α (TJ1 − 0)
V = α TJ1
Fig. 20.5 External reference junction (ice bath holding J2 at 0°C)
We use this protracted derivation to emphasize that the ice-bath-junction output, V2, is not zero volts.
It is a function of absolute temperature. By adding the voltage of the ice-point reference junction, we
have now referenced the reading V to 0°C. This method is very accurate because the ice-point
temperature can be precisely controlled. The ice point is used by
the National Bureau of Standards (NBS) as the fundamental reference point for their thermocouple
tables, so we can now refer to the NBS tables and directly convert from voltage V to temperature TJ1. The
copper–constantan thermocouple shown in Fig. 20.5 is a unique example because the copper wire is the
same metal as the voltmeter terminals. While using thermocouples for temperature measurement, we
should take into account the following two considerations:
b. Compensating Cables Consider that we are using platinum and a 90% platinum/10% rhodium
(refer Table 20.3) thermocouple. Its output is in mV. If ordinary copper cables are used to convey the signal
to the mV/mA transducer then the copper wires will form two more thermocouples with the two metals
of the thermocouple itself. This would give an enormous error in the measurement of temperature. To
overcome this problem, a compensating cable is used to convey the signal from the thermocouple to the
mV/mA transducer. A compensating cable has two wires whose thermal properties are exactly identical
to those of the platinum and platinum/rhodium wires of the thermocouple. It must, however, be
remembered that each thermocouple has a distinct compensating cable, which can only be used for that
thermocouple. Some examples of thermocouple wires and lead wires are given in Table 20.3.
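The ice-point referencing derived earlier (V = αTJ1 once the reference junction is held at 0°C) can be sketched as follows; the Seebeck coefficient and the meter reading are illustrative values, not calibration data.

```python
# Ice-point referencing: with the reference junction held at 0 degC,
# V = V1 - V2 = alpha * (T_J1 - T_J2) = alpha * T_J1, so the
# measuring-junction temperature follows directly from the meter voltage.

ALPHA = 60.9e-6  # V/degC, illustrative small-signal Seebeck coefficient

def junction_temperature(v_meter, t_ref=0.0, alpha=ALPHA):
    """Measuring-junction temperature (degC) from the meter voltage.
    t_ref is the reference-junction temperature; 0 degC for an ice bath."""
    return v_meter / alpha + t_ref

# An illustrative meter reading of 3.045 mV with an ice-bath reference:
print(round(junction_temperature(3.045e-3), 2))  # 50.0
```

In practice the linear α is replaced by the standard thermocouple tables, but the referencing step is the same.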
In a grounded probe, the thermocouple junction is welded to the inside of the probe wall, giving good heat
transfer from the outside, through the probe wall to the thermocouple junction. In an ungrounded
probe, the thermocouple junction is detached from the probe wall. Response time is slower than that of
the grounded style, but the ungrounded junction offers electrical isolation of 1.5 MΩ at 500 Vdc in all
diameters. The thermocouple in the exposed junction style protrudes out of the tip of the sheath and
is exposed to the surrounding environment. This type offers the best response time, but is limited in
use to non-corrosive and non-pressurized applications.
The grounded junction is recommended for the measurement of static or flowing corrosive gas and
liquid temperatures and for high-pressure applications. The junction of a grounded thermocouple is
welded to the protective sheath giving faster response than the ungrounded junction type.
An ungrounded junction is recommended for measurements in corrosive environments where it is
desirable to have the thermocouple electronically isolated from and shielded by the sheath. The welded
wire thermocouple is physically insulated from the thermocouple sheath by MgO powder (soft).
Thermocouple junction styles: insulated (ungrounded) junction, grounded junction, and bare-wire
(exposed) junction
An exposed junction is recommended for the measurement of static or flowing non-corrosive gas
temperatures where fast response time is required. The junction extends beyond the protective metallic
sheath to give accurate fast response. The sheath insulation is sealed where the junction extends to
prevent penetration of moisture or gas, which could cause errors.
Advantages
i. Small units that can be mounted conveniently
ii. Rugged and inexpensive construction; hence, low cost
iii. No moving parts, less likely to be broken
iv. Wide temperature range from −270°C to 2800°C
v. Reasonably short response time
vi. Reasonable repeatability and accuracy
vii. The output is in electrical form, which is suitable for indicating and controlling devices. More-
over, these electrical signals can be transmitted over distance, and hence sensing and indicating
elements can be away from each other.
Limitations
i. Sensitivity is low, usually 50 μV/°C (28 μV/°F) or less. Its low-voltage output may be masked
by noise. This problem can be improved, but not eliminated, by better signal filtering, shielding,
and analog-to-digital (A/D) conversion.
ii. Accuracy, usually no better than 0.5°C (0.9°F), may not be high enough for some applications.
iii. Requires a known temperature reference, usually 0°C (32°F) ice water. Modern thermocouples,
on the other hand, rely on an electrically generated reference.
iv. Non-linearity could be bothersome. Fortunately, detailed calibration curves for each wire material
can usually be obtained from vendors.
v. They can’t be used bare in a conducting fluid.
Table 20.4 Common thermocouple types

ISA    Material (+ and −)        Sensitivity @ 25°C (77°F)   Temperature range    Error∗                 Applications
                                 μV/°C (μV/°F)               °C (°F)
E      Chromel and Constantan    60.9 (38.3)                 −270 ~ 1000          LT: ±1.67°C (±3°F)     I, O
       (Ni–Cr and Cu–Ni)                                     (−450 ~ 1800)        HT: ±0.5%
The increase in the electrical resistance of electrical conductors with rise in temperature was first
described by Sir William Siemens in the Bakerian Lecture of 1871 before the Royal Society of Great
Britain. Callendar, Griffiths, Holborn and Wien established the necessary methods of construction
from 1885 to 1900.
A change in the temperature of a metal can be measured in terms of a change in its electrical resistance.
The electrical conductivity of a metal depends on the movement of electrons through its crystal lattice.
Due to thermal excitation, the electrical resistance of a conductor varies according to its temperature
and this forms the basic principle of resistance thermometry.
The effect is most commonly exhibited as an increase in resistance with increasing temperature, a
positive temperature coefficient of resistance. When utilizing this effect for temperature measurement,
a large value of temperature coefficient (the greatest possible change of resistance with temperature)
is ideal; however, stability of the characteristic over the short and long term is vital if practical use is
to be made of the conductor in question. The relationship between the temperature and the electrical
resistance is usually non-linear and described by a higher order polynomial:
R(t) = R0 (1 + At + Bt² + Ct³ + …)
Fig. 20.7 Resistance/temperature characteristics of Pt100 (resistance in ohms, 100–400, versus
temperature, 0–900°C; the marked value of 138 Ω corresponds to 100°C)
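The polynomial above can be evaluated for a Pt100 element. The A and B coefficients below are the standard IEC/BS EN 60751 values (an assumption here — verify against the sensor datasheet); they reproduce the roughly 138.5 Ω point at 100°C and the 0.385 Ω/°C European Fundamental Interval mentioned in the text.

```python
# Callendar-Van Dusen form R(t) = R0 * (1 + A*t + B*t**2) for t >= 0 degC.
# A and B are the IEC/BS EN 60751 values for standard doped platinum;
# below 0 degC a cubic C-term is added (omitted here for brevity).

R0 = 100.0        # ohms at 0 degC (Pt100)
A = 3.9083e-3     # 1/degC
B = -5.775e-7     # 1/degC^2

def pt100_resistance(t_degc):
    """Resistance (ohms) of a Pt100 element at t_degc >= 0 degC."""
    return R0 * (1.0 + A * t_degc + B * t_degc ** 2)

print(round(pt100_resistance(0.0), 2))    # 100.0
print(round(pt100_resistance(100.0), 2))  # 138.51
# Mean slope over 0-100 degC, i.e. the European Fundamental Interval:
print(round((pt100_resistance(100.0) - pt100_resistance(0.0)) / 100.0, 3))  # 0.385
```

The negative B term is why the curve in Fig. 20.7 flattens slightly at high temperatures.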
The very high accuracy demanded of primary standard resistance thermometers requires the use
of a more pure form of platinum for the sensing resistor. This results in different R0 and alpha values.
Conversely, the platinum used for Pt100 versions is ‘doped’ to achieve the required R0 and alpha
values. Platinum is usually used due to its stability with temperature. The platinum-detecting wire needs
to be kept free of contamination to remain stable. A platinum wire or film is created and supported on
a former in such a way that it gets minimal differential expansion or other strains from its former, yet
is reasonably resistant to vibration.
Commercial platinum grades are produced which exhibit a change of resistance of 0.385 ohms/°C
(European Fundamental Interval). The sensor is usually made to have 100 ohms at 0°C. This is defined
in BS EN 60751:1996. The American Fundamental Interval is 0.392 ohms/°C. Resistance thermometers
require a small current to be passed through in order to determine the resistance. This can cause self-heating
and manufacturer’s limits should always be followed along with heat-path considerations in design. Care
should also be taken to avoid any strains on the resistance temperature thermometer in its application.
Lead-wire resistance should be considered, and adopting three- or four-wire connection strategies
can eliminate connection lead-resistance effects from measurements.
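The benefit of three- and four-wire connections can be seen by estimating the error a plain two-wire hookup suffers; the cable resistance figure below is an illustrative assumption.

```python
# Two-wire RTD hookup: both lead resistances add in series with the sensor,
# so the instrument sees R_measured = R_rtd + 2 * R_lead and reads high.
# Using the European Fundamental Interval of 0.385 ohm/degC for Pt100.

FUNDAMENTAL_INTERVAL = 0.385  # ohm per degC for Pt100

def two_wire_error_degc(r_lead_ohms):
    """Apparent temperature error caused by two equal lead resistances."""
    return 2.0 * r_lead_ohms / FUNDAMENTAL_INTERVAL

# Illustrative: a run of thin copper cable at ~0.2 ohm per conductor
print(round(two_wire_error_degc(0.2), 2))  # 1.04 degC of apparent offset
```

Even a modest lead resistance therefore produces an error of the order of a degree, which is why long cable runs use three- or four-wire circuits.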
Resistance temperature thermometer elements are available in a number of forms. The most common
are wire wound in a ceramic insulator––high temperatures to 850°C; wires encapsulated in glass––resists
the highest vibration and offers most protection to the platinum; and thin film with a platinum film on
a ceramic substrate, which is inexpensive and hence mass production is possible. Constructional details
are shown in Fig. 20.8.
Connection
to leads Sheath
Resistance thermometer Connection leads Insulator
Fig. 20.8 RTD construction
These elements will nearly always require insulated leads attached. At low temperatures, PVC, silicone
rubber or PTFE insulators are common up to 250°C. Above this, glass fiber or ceramic is used. The
measuring point and usually most of the leads require a housing or protection sleeve. This is often a metal
alloy, which is inert to a particular process. Often more consideration goes into selecting and designing
protection sheaths than sensors as this is the layer that must withstand chemical or physical attack along
with offering convenient process-attachment features.
Resistance temperature thermometer elements can be supplied which function up to 850°C. Sensor
tolerances are calculated as follows:
Table 20.6 Calculation of sensor tolerances
Here, |t| is the absolute value of the temperature in °C. If elements have a resistance of n × 100 ohms,
then the basic values and tolerances also have to be multiplied by n.
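Since the body of Table 20.6 did not survive extraction, the sketch below uses the widely published BS EN/IEC 60751 Class A and Class B tolerance formulas as an assumption; verify the coefficients against the standard before relying on them.

```python
# Sensor tolerance as a function of |t|, in the form the text describes.
# Coefficients are the common BS EN/IEC 60751 class values -- an assumption,
# since Table 20.6 is missing here; check the standard or the datasheet.

def tolerance_class_a(t_degc):
    """Class A tolerance in degC: +/-(0.15 + 0.002 * |t|)."""
    return 0.15 + 0.002 * abs(t_degc)

def tolerance_class_b(t_degc):
    """Class B tolerance in degC: +/-(0.30 + 0.005 * |t|)."""
    return 0.30 + 0.005 * abs(t_degc)

print(round(tolerance_class_a(100.0), 2))   # 0.35
print(round(tolerance_class_b(-100.0), 2))  # 0.8
```

Note the |t| form: the tolerance band widens symmetrically on either side of 0°C.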
Fig. 20.9 Two-wire configuration (resistance element RT in a bridge with R1, R2 and R3, a power
supply and bridge output V0)
configuration because an assumption is made that the two lead resistances are the same. This configura-
tion allows for up to 600 metres of cable.
Fig. 20.11 Four-wire configuration (resistance element RT with its lead resistances in a bridge with
R1, R2 and R3, a power supply and bridge output V0)
RTD Accuracy Accuracy problems can occur when RTDs from different manufacturers are used
in the same system, or when an RTD from one manufacturer is replaced with an RTD from another
manufacturer. Self-heating can also affect accuracy.
RTDs for Specialized Applications Designs include averaging RTDs, annular element RTDs,
and combination RTD-thermocouples. An averaging RTD has long resistance elements. In annular
element RTDs, the sensors are made with annular elements that provide a tight fit against the inner
wall of a thermo-well. Combination RTD-thermocouple designs are available with both an RTD and a
thermocouple enclosed in the same sheath.
Advantages
i. No drift over long period
ii. Fast response
iii. High accuracy and good reproducibility
iv. Doesn’t require any ambient temperature compensation
Limitations
i. High cost
ii. Requires external electrical supply
iii. Bulb size is larger than that of a thermocouple and filled thermometer
20.6 THERMISTOR
Thermistors are made of solid semiconductor materials having a high temperature coefficient of
resistivity. The relationship between resistance and temperature and the linear current–voltage
characteristics are of primary importance. Typical thermistors are suitable for temperature measurements
in the range of −100°C to 300°C. However, some thermistors measure as high as 600°C. Thermistors
are semiconductors formed from complex metal oxides, such as oxides of cobalt, magnesium, manganese, or
nickel. They are available with positive temperature coefficients of resistance (PTC thermistors) and
with negative temperature coefficients of resistance (NTC thermistors). NTC thermistors are used
almost exclusively for temperature measurement. Thus, any change in temperature around the therm-
istor can be measured in terms of change in its electrical resistance. Despite the non-linear nature of
thermistors, readout instrument circuits have also been developed to provide a nearly linear output
voltage versus temperature or resistance versus temperature. Their resistance temperature relation is
generally given by
R = R0 · e^{β (1/T − 1/T0)}
where,
R is the resistance at the measured temperature, T
R0 is the resistance at the reference temperature, T0
β is the experimentally determined constant for a given thermistor material, generally of the order
of 4000 K.
T0 is the reference temperature generally taken as 298 K (25°C).
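The β relation above can be evaluated and inverted as follows; R0 = 10 kΩ is an illustrative part value, with β = 4000 K and T0 = 298.15 K as given in the text.

```python
# NTC thermistor relation R = R0 * exp(beta * (1/T - 1/T0)), T in kelvin.
import math

R0 = 10000.0   # ohms at T0 (illustrative part value)
BETA = 4000.0  # K, material constant (order of magnitude from the text)
T0 = 298.15    # K, reference temperature (25 degC)

def thermistor_resistance(t_kelvin):
    """Resistance (ohms) at absolute temperature t_kelvin."""
    return R0 * math.exp(BETA * (1.0 / t_kelvin - 1.0 / T0))

def thermistor_temperature(r_ohms):
    """Invert the beta relation to recover absolute temperature (K)."""
    return 1.0 / (1.0 / T0 + math.log(r_ohms / R0) / BETA)

r_at_50c = thermistor_resistance(323.15)
print(round(r_at_50c))  # resistance falls well below the 10 kohm reference
print(round(thermistor_temperature(r_at_50c) - 273.15, 2))  # 50.0
```

The steep exponential fall in resistance is what gives the thermistor its high sensitivity compared with an RTD.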
Thermistors can convert changes in ambient or contact temperature into a corresponding change in
voltage or current. The standard Wheatstone bridge circuit is used for the measurement of change
in resistance with change in temperature.
20.6.1 Construction
The bead thermistor is made of a small bead of thermistor material to which a pair of leads is attached.
The bead is usually enclosed in glass. A disc thermistor consists of a disc of thermistor material and
a pair of leads. The leads may be attached radially or axially to the top and/or bottom of the disc.
Some disc thermistors have no leads, and are fabricated with metal-plated faces that can be clipped
or soldered in the circuit. A washer thermistor resembles a disc thermistor but has a centre hole and
metal-plated faces for contact. The centre hole enables the thermistor to be held by a mounting bolt or
stacked with other washer thermistors and electrical components. A rod thermistor is basically a stick
of thermistor material to which a pair of leads is attached. The leads may be attached axially or radially
to each end of the rod. The most common problem related to thermistor accuracy is interchangeabil-
ity. Thermistor accuracy can also be affected by several mechanical or chemical actions that change its
electrical resistance.
Thermistor construction: rod type and disc type (showing lead, spring washers, fiber bushing, fiber
washer, lead washers and terminal)
20.6.2 Applications
Thermistors are used for dynamic temperature measurements, and their operating range is −100°C
to 300°C with an accuracy of ±0.01°C. Thermistors are used for protecting equipment; e.g., positive-TCR
thermistors are used to protect transformers from heavy currents. When the current exceeds the safe limit,
the heat generated raises the temperature of the thermistor, which increases its resistance. This acts as a
feedback signal to reduce the current through the circuit to a safe value. Moreover, thermistors can also be
used for temperature compensation in complex electronic equipment, magnetic amplifiers, warning
devices, etc.
Advantages
i. Low thermal capacity and high resistance value, with the ability to withstand electrical and
mechanical stress
ii. Available in small size, low cost and increased stability with age
iii. Narrow span can be obtained
iv. High sensitivity and fast response
Limitations
i. Non-linear response
ii. Unstable at high temperature
iii. A wide temperature span can’t be obtained
iv. Interchangeability of individual elements often creates a problem
20.7 PYROMETERS
Like the human eye, an optical pyrometer is designed to respond to very narrow bands of wavelengths that fall
within the visible light portion of the electromagnetic spectrum.
Another type of pyrometer that is commonly used for industrial temperature measurement is the
total radiation pyrometer. A total radiation pyrometer responds to wavelengths in both the visible and
infrared portions of the spectrum. Ideally, it would measure all wavelengths within this range. How-
ever, the glass window filters out some wavelengths. Any gases or vapours between the target and the
pyrometer will also attenuate certain wavelengths. Total radiation pyrometers are based on the Stefan–
Boltzmann law, which states that total radiation is proportional to the fourth power of temperature.
These pyrometers are calibrated using a black body and, therefore, measure the temperature based on
the total radiation a black body would emit.
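The Stefan–Boltzmann law underlying total radiation pyrometers can be sketched numerically; the emissivity-correction step and the numeric values are illustrative.

```python
# Stefan-Boltzmann law: total radiated power per unit area of a black body
# is E = sigma * T**4, the basis of total-radiation pyrometers.

SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def blackbody_emittance(t_kelvin):
    """Total radiant emittance (W/m^2) of a black body at t_kelvin."""
    return SIGMA * t_kelvin ** 4

def apparent_temperature(e_measured, emissivity=1.0):
    """Temperature inferred from total radiation. A grey target with
    emissivity < 1 radiates less, so a black-body-calibrated pyrometer
    reads low unless the emissivity correction is applied."""
    return (e_measured / (emissivity * SIGMA)) ** 0.25

t = 1000.0  # K, illustrative target temperature
e = blackbody_emittance(t)
print(round(e))                                  # 56700 W/m^2
print(round(apparent_temperature(e * 0.8)))      # grey body read as black: 946
print(round(apparent_temperature(e * 0.8, 0.8)))  # emissivity-corrected: 1000
```

The fourth-power dependence also explains the pyrometer's high sensitivity at elevated temperatures: a small temperature change produces a large change in radiated power.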
Fig. 20.13 Disappearing filament principle (optical path with entrance stop, field stop and exit stop)
Figure 20.13 shows a schematic diagram of an optical pyrometer, which is similar to a telescope
having the objective at one end and the eyepiece at the other. A red filter placed between the
eyepiece and the source of energy cuts out the shorter wavelengths and passes the red radiation. The
filament lamp acts as the standard source, which is placed exactly at the focus of the objective; so that
the image of the hot target is on the plane of the filament. Due to this, the target image and filament
lamp appear superimposed on one another when viewed through the eyepiece. A two-volt battery along
with a milliammeter and a rheostat is connected in series with the lamp. The intensity of the filament
lamp can be varied by varying the current with the help of the rheostat.
Fig. 20.14 Images seen through the eyepiece: filament hotter than the target, hot target, and filament
cooler than the target
The procedure to match the intensity of
the filament lamp is discussed as follows:
i. An operator sights a hot target, and adjusts the range until its image is seen in red. The lamp
filament is initially cooler than the target and its image appears as a darker red or black spot
superimposed on the target’s image (see Fig. 20.14). What the operator sees when looking into
the eyepiece is the target in red, its surroundings in black (cooler) or red (hot) and superimposed
on the target, the filament. The view is circular because the optical system is made up of circular
lenses, apertures, etc.
ii. The lamp current is raised until the filament becomes hotter than the target, so that its image
appears brighter red than the target (refer Fig. 20.15).
Other filters reduce the intensity so that one instrument can have a relatively wide temperature range
capability. Needless to say, by restricting the wavelength response of the device to the red region of
the visible, it can only be used to measure objects that are hot enough to be incandescent, or glowing.
This limits the lower end of the temperature measurement range of these devices to about 700°C. Some
experimental devices have been built using light amplifiers to extend the range downwards, but the
devices become quite cumbersome, fragile and expensive.
Modern radiation thermometers provide the capability to measure within and below the range of the
optical pyrometer with equal or better measurement precision plus faster time response, precise emissiv-
ity correction capability, better calibration stability, enhanced ruggedness and relatively modest cost.
c. Infrared Pyrometer Infrared pyrometers offer a great method for accurately and quickly
measuring temperature of objects at a distance and/or in motion. They offer the ability to measure
temperature of objects precisely without needing to touch the item being measured, and without needing
to be placed within what can be an extremely hot and dangerous environment (where most traditional
close-proximity thermometers would be destroyed). A typical application is a specialized industrial infrared
thermometer used to monitor the temperature of molten material (such as metal or glass) at a distance,
for quality-control purposes within a manufacturing process. Portable, battery-operated devices using
similar technology are also available in the market.
Infrared pyrometers measure temperature using electromagnetic radiation (i.e., infrared) emitted
from an object. They are sometimes called laser thermometers if a laser is used to help aim the
thermometer, or non-contact thermometers to describe the device’s ability to measure temperature
from a distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the
object’s temperature can be determined.
The most basic design consists of a lens to focus the infrared energy on to a detector, which converts
the energy to an electrical signal that can be displayed in units of temperature after being compensated
for ambient temperature variation. This configuration facilitates temperature measurement from a dis-
tance without contact with the object to be measured. As such, the infrared thermometer is useful for
measuring temperature under circumstances where thermocouples or other probe-type sensors cannot
be used or do not produce accurate data for a variety of reasons.
Some typical circumstances are where the object to be measured is moving; where the object is
surrounded by an electromagnetic field, as in induction heating; where the object is contained in a
vacuum or other controlled atmosphere; or in applications where a fast response is required. Infrared
pyrometers can be used to serve a wide variety of temperature-monitoring functions.
There are many varieties of infrared temperature-sensing devices available today, including configura-
tions designed for flexible and portable handheld use, as well as many designed for mounting in a fixed
position to serve a dedicated purpose for long periods. Typical sensor varieties include the following:
a. Spot Infrared Thermometers Also known as infrared pyrometers, designed for monitor-
ing a finite area or “spot” of space.
b. Infrared Line Scanning Systems Typically incorporating what is essentially a spot ther-
mometer pointed at a rotating mirror, for continuously scanning a wide area of space. These devices
are widely used in manufacturing involving conveyors or ‘web’ processes, such as large sheets of glass
or metal exiting an oven, fabric and paper, or continuous piles of material along a conveyor belt.
c. Infrared Cameras These are essentially infrared thermometers designed as a camera, moni-
toring a thousand points at once, output as a two-dimensional image, and with each pixel representing
a temperature. This technology is typically more processor- and software-intensive than the items above,
and is used for monitoring large areas of space. Typical applications include perimeter monitoring used
by military or security personnel, inspection/process quality monitoring of manufacturing processes,
and equipment or enclosed-space hot or cold-spot monitoring for safety and efficiency maintenance
purposes.
Pyrometer Accuracy One technique for ensuring that emitted radiation rather than reflected
radiation is being observed is to drill a hole in the target object and aim the pyrometer into the hole.
It is recommended that the depth of the hole is about five times its diameter. Measurement accuracy
can also be affected by the presence of gases or vapours between the target and pyrometer. Gases and
vapours can filter out some radiation wavelengths. One technique for resolving this problem is to use
fans to disperse any gases or fumes. A film of dirt on the viewing window or lens will also affect mea-
surement accuracy. In some applications, it may be necessary to use a purge to prevent soot or other
particles from being deposited on the viewing window or lens.
Review Questions
1. Explain the factors, that can cause steady-state temperature measurement errors.
2. Explain construction and working of bimetallic strip thermometer.
3. State and explain Seeback effect, Peltier effect and Thomson effect for thermocouple.
4. Explain principle, construction and working of thermocouple temperature measurement.
5. Explain cold-junction compensation and compensating cables.
6. Discuss common thermocouple specifications.
7. Explain principle, construction and working of Resistance Temperature Detectors (RTD).
8. Discuss resistance temperature thermometer wiring configurations.
9. What is the function of lead wires?
10. Explain the principle and working of a thermistor.
11. Explain what is meant by the term ‘pyrometry’.
12. Discuss common types of pyrometers and explain one in detail.
13. Explain the selective radiation pyrometer/optical pyrometer.
14. Explain what is meant by a total radiation pyrometer.
15. Write short notes on
a. Temperature scales
b. International Practical Temperature Scale
c. Mercury-in-glass thermometer
d. Thermocouple junctions
e. Platinum-sensing resistors
f. Applications, advantages and limitations of thermistor
g. Infrared pyrometer
h. Pyrometer accuracy
21 Strain Measurement
‘Strain has formed a basic component of our life applications, which has to be sensed,
measured and analysed….’
INTRODUCTION TO STRAIN GAUGE

The strain gauge has been in use for many years and is the fundamental sensing element for many types of sensors, including pressure sensors, load cells, torque sensors, position sensors, etc. The accurate measurement of strain can be made using strain gauges. Given a measurement of strain, stress and load may also be calculated via the definition of Young’s modulus as stress divided by strain, and the definition of stress as force (or load) divided by area. The most common form of strain gauge is the electrical resistance strain gauge––originally invented by Lord Kelvin circa 1856. Kelvin observed that the resistance of a conductor varies deterministically when the conductor is stretched (or strained). Therefore, if a conductor is bonded to a structure such that the change in length of the structure is equal to the change in length of the conductor, the change of resistance of the conductor is directly proportional to strain.

The gauges are formed by either a length of wire arranged in an axial grid pattern, or by etching a thin metal foil into the desired shape. The majority of strain gauges are foil types, available in a wide choice of shapes and sizes to suit a variety of applications. In either case the conductor is bonded to a backing sheet. In turn, the backing is securely bonded to the structure to be measured such that a surface strain also strains the conductor. They operate on the principle that as the foil is subjected to stress, the resistance of the foil changes in a defined way.

In general, wire gauges are used for high-temperature applications, and foil gauges are used for routine applications. Foil gauges offer the following characteristics.

i. High stability
ii. Good proportionality
iii. Manufacturing process based on etching, which is cheap and allows complex conductor designs to be obtained
iv. Low price [25 p to £10 per gauge (installation and calibration costs are significantly higher)]
v. Low output voltage––requires amplification

A strain gauge’s conductors are very thin—if made of round wire, about 1/1000 inch in diameter. Alternatively, strain-gauge conductors may be thin strips of metallic film deposited on a non-conducting substrate material called the carrier.
Strain Measurement 519
The name ‘bonded gauge’ is given to strain gauges that are glued to a larger structure under stress
(called the test specimen). The task of bonding strain gauges to test specimens may appear to be very
simple, but it is not. ‘Gauging’ is a craft in its own right, absolutely essential for obtaining accurate,
stable strain measurements. It is also possible to use an unmounted gauge wire stretched between two
mechanical points to measure tension, but this technique has its limitations.
Unbonded strain-gauge elements are made of one or more filaments of resistance wire stretched
between supporting insulators. The supports can be attached directly to an elastic member used as a
sensing element or can be fastened independently using a rigid insulator to couple the elastic member to
the filaments of the resistance wire. The displacement (strain) of the sensing element causes a change
in the filament length. The change in length results in changes in resistance. Because they are fragile,
transducers that use unbonded gauges are becoming less popular.
Typical strain-gauge resistances range from 30 Ω to 3 kΩ (unstressed). This resistance may change
only a fraction of a per cent for the full force range of the gauge, given the limitations imposed by the
elastic limits of the gauge material and of the test specimen. Forces great enough to induce greater
resistance changes would permanently deform the test specimen and/or the gauge conductors them-
selves, thus ruining the gauge as a measurement device. Thus, in order to use the strain gauge as a
practical instrument, we must measure extremely small changes in resistance with high accuracy. Such
demanding precision calls for a bridge measurement circuit.
The resistance, R, of a conductor is defined in terms of its resistivity ρ (Ω m), length L (m), and cross-sectional area A (m²) by

R = ρL/A
If we consider an elongation of the wire, L →L + ΔL , by Poisson’s effect there will also be reduc-
tion in cross-sectional area, A → A − Δ A. From the expression for the resistance, it can be seen that
both effects contribute to an increase in the resistance.
The gauge shown in Fig. 21.1 here is primarily sensitive to strain in the X direction, as the majority
of the wire length is parallel to the X axis. There will be a small amount of cross-sensitivity, i.e., the
resistance will change slightly for a strain in the Y direction. This cross sensitivity is typically < 2% of
the primary axis sensitivity.
Fig. 21.1 Strain gauge: wire/foil grid (approx. 0.025 mm thick) on an insulated backing, with solder tags for attachment of wires; the grid axis defines the X direction
ΔR/R = Δρ/ρ − ΔA/A + ΔL/L    (1)
The change in area can be related to the change in length via Poisson’s effect. If the cross section is circular of initial diameter D, then as the area goes from A to A + ΔA, D → D + ΔD. If we define the axial strain as εa = ΔL/L and the transverse strain as εt = ΔD/D, then

εt = ΔD/D = −ν εa = −ν ΔL/L    (2)

where ν is Poisson’s ratio.
We can also expand the term for the area change in Eq. 1 as

ΔA/A = [(π/4)((D + ΔD)² − D²)] / [(π/4)D²] = (D² + 2DΔD + ΔD² − D²)/D² ≈ 2ΔD/D    (3)

by neglecting terms with the square of small quantities.
Combining Eqs 2 and 3, we get

ΔA/A ≈ −2ν ΔL/L

This can be substituted into Eq. 1 to give
522 Metrology and Measurement
ΔR/R = Δρ/ρ + 2ν ΔL/L + ΔL/L

ΔR/R = [1 + 2ν + (Δρ/ρ)/(ΔL/L)] ΔL/L

The bracketed term is defined as the gauge factor, GF, giving

ΔR/R = GF ΔL/L = GF εa
This expression means that the change in resistance is directly proportional to the axial strain of
the sample. The gauge factor is approximately constant and for most types of strain gauges has a value
slightly more than 2.
The measured values of strain vary between applications. The maximum measurable strain is typi-
cally 0.001, or 0.1%. Strain is most often expressed in microstrain, μstrain (10−6). Therefore, a strain of
0.001 is normally written as 1000 μstrain. (Note: Strain is dimensionless.)
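The gauge-factor relation ΔR/R = GF × εa can be checked numerically. The sketch below uses illustrative values only (a 120 Ω gauge with GF = 2.0 at 1000 μstrain); none of these numbers refer to a specific product.

```python
# Resistance change of a strain gauge from the gauge-factor relation
# dR/R = GF * strain. All numeric values are illustrative.

def delta_r(r_nominal, gauge_factor, strain):
    """Resistance change (ohms) of a gauge for a given axial strain."""
    return r_nominal * gauge_factor * strain

# A typical 120-ohm gauge with GF = 2.0 at 1000 microstrain (0.001 strain):
dr = delta_r(120.0, 2.0, 1000e-6)
print(round(dr, 2))  # 0.24 ohm -- a fraction of a per cent of 120 ohms
```

This confirms the earlier remark that the resistance changes by only a fraction of a per cent over the full working strain range, which is why a bridge circuit is needed.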
Wheatstone developed a bridge circuit containing four identical resistances, one of which is the strain gauge (Rgauge = R1 in Fig. 21.3).
The excitation can be either dc or ac. Dc voltages are normally used for sensitive measurements. Ac
voltages are used in electrically noisy environments with an excitation frequency about a factor of 10
higher than the maximum strain variation frequency to be measured (typically excitations of > 8 kHz).
Nominal resistance values are between 120 Ω and 350 Ω.
Fig. 21.3 Wheatstone’s bridge circuit
Fig. 21.4 Output of a Wheatstone bridge
The other resistances of the bridge are set at a value equal to the strain-gauge resistance with no force applied. The two ratio arms of the bridge are set equal to each other. Thus, with no force applied to the strain gauge, the bridge will be symmetrically balanced and the voltmeter will indicate zero volts, representing zero force on the strain gauge.
As the strain gauge is either compressed or tensed, its resistance will decrease or increase, respectively,
thus unbalancing the bridge and producing an indication at the voltmeter. This arrangement, with a
single element of the bridge changing resistance in response to the measured variable (mechanical
force), is known as a quarter-bridge circuit. As the distance between the strain gauge and the three other
resistances in the bridge circuit may be substantial, the wire resistance has a significant impact on the
operation of the circuit.
We will consider a system with a constant dc excitation voltage, V, and where the input resistance
of the voltmeter is infinite, i.e., no current flows through CD. With ΔR = 0, the bridge is perfectly
balanced and hence the output voltage, Vo = 0.
The current flowing through the upper half of the bridge is given by

IABC = V/(2R + ΔR)

Vo = VAC − VAD = V(R + ΔR)/(2R + ΔR) − V/2 = V(2R + 2ΔR − 2R − ΔR)/(4R + 2ΔR) = V ΔR/(4R + 2ΔR)
Typically the change in resistance is low compared to the original resistance value, hence

Vo = (V/4) × (ΔR/R)

Noting that previously we related the change in resistance to the axial strain, ΔR/R = GF εa, we now obtain

εa = (4Vo/V) × (1/GF)
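The inversion εa = (4Vo/V)(1/GF) is a one-line calculation. A minimal sketch, with hypothetical readings (the 1.2 mV output, 5 V excitation and GF = 2.0 are assumed, not taken from the text):

```python
# Axial strain from a quarter-bridge reading, using the small-signal
# result Vo ~= (V/4) * GF * strain. Numeric values are hypothetical.

def quarter_bridge_strain(v_out, v_excitation, gauge_factor):
    """Invert Vo = (V/4) * GF * strain for the axial strain."""
    return 4.0 * v_out / (v_excitation * gauge_factor)

# 1.2 mV output with 5 V excitation and GF = 2.0:
strain = quarter_bridge_strain(1.2e-3, 5.0, 2.0)
print(round(strain * 1e6, 1), "microstrain")  # 480.0 microstrain
```

Note how small the output is: a few millivolts for hundreds of microstrain, which is why amplification is normally required.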
Unlike the Wheatstone bridge, using a null-balance detector and a human operator to maintain a
state of balance, a strain-gauge bridge circuit indicates measured strain by the degree of imbalance,
and uses a precision voltmeter in the centre of the bridge to provide an accurate measurement of that
imbalance:
Fig. 21.5 Bridge circuit with a stressed (active) strain gauge, an unstressed (‘dummy’) gauge, and resistors R1 and R3
An unfortunate characteristic of strain gauges is that of resistance change with changes in tempera-
ture. This is a property common to all conductors, some more than others. Thus, our quarter-bridge
circuit as shown (either with two or with three wires connecting the gauge to the bridge) works as a
thermometer just as well as it does a strain indicator. If all we want to do is measure strain, this is not
good. We can transcend this problem, however, by using a ‘dummy’ strain gauge in place of R2, so that
both elements of the rheostat arm will change resistance in the same proportion when temperature
changes, thus canceling the effects of temperature change:
Resistors R1 and R3 (refer Fig. 21.5) are of equal resistance value, and the strain gauges are identical
to one another. With no applied force, the bridge should be in a perfectly balanced condition and the
voltmeter should register 0 volts. Both gauges are bonded to the same test specimen, but only one is
placed in a position and orientation so as to be exposed to physical strain (the active gauge). The other
gauge is isolated from all mechanical stress, and acts merely as a temperature compensation device (the
‘dummy’ gauge). If the temperature changes, both gauge resistances will change by the same percent-
age, and the bridge’s state of balance will remain unaffected. Only a differential resistance (difference
of resistance between the two strain gauges) produced by physical force on the test specimen can alter
the balance of the bridge.
Wire resistance doesn’t impact the accuracy of the circuit as much as before, because the wires con-
necting both strain gauges to the bridge are approximately of equal length. Therefore, the upper and
lower sections of the bridge’s rheostat arm contain approximately the same amount of stray resistance,
and their effects tend to cancel.
Fig. 21.6 Bridge circuit with wire resistances Rwire1, Rwire2 and Rwire3 connecting the stressed and unstressed gauges to the bridge
Even though there are now two strain gauges in the bridge circuit shown in Fig. 21.6, only one is respon-
sive to mechanical strain, and thus we would still refer to this arrangement as a quarter-bridge. However, if
we were to take the upper strain gauge and position it so that it is exposed to the opposite force as the lower
gauge (i.e., when the upper gauge is compressed, the lower gauge will be stretched, and vice-versa), we
will have both gauges responding to strain, and the bridge will be more responsive to applied force. This
utilization is known as a half-bridge. Since both strain gauges will either increase or decrease resistance
by the same proportion in response to changes in temperature, the effects of temperature change remain
canceled and the circuit will suffer minimal temperature-induced measurement error.
An example of how a pair of strain gauges (shown in Fig. 21.7) may be bonded to a test specimen so as to yield this effect is illustrated here using Fig. 21.8 and Fig. 21.9.
With no force applied to the test specimen, both strain gauges have equal resistance and the bridge
circuit is balanced. However, when a downward force is applied to the free end of the specimen, it will
bend downward, stretching gauge #1 and compressing gauge #2 at the same time:
In applications where such complementary pairs of strain gauges can be bonded to the test speci-
men, it may be advantageous to make all four elements of the bridge ‘active’ for even greater sensitivity.
This is called a full-bridge circuit, which is shown in Fig. 21.10.
Both half-bridge and full-bridge configurations grant greater sensitivity over the quarter-bridge cir-
cuit, but often it is not possible to bond complementary pairs of strain gauges to the test specimen.
Thus, the quarter-bridge circuit is frequently used in strain-measurement systems.
Fig. 21.8 Half-bridge circuit with no force applied to the test specimen (bridge balanced)
When possible, the full-bridge configuration is the best to use. This is true not only because it is
more sensitive than the others, but also because it is linear while the others are not. Quarter-bridge
and half-bridge circuits provide an output (imbalance) signal that is only approximately proportional
to applied strain-gauge force. Linearity, or proportionality, of these bridge circuits is best when the
amount of resistance change due to applied force is very small compared to the nominal resistance of
the gauge(s). With a full-bridge, however, the output voltage is directly proportional to applied force,
with no approximation (provided that the change in resistance caused by the applied force is equal for
all four strain gauges!).
Unlike the Wheatstone and Kelvin bridges, which provide measurement at a condition of perfect
balance and, therefore, function irrespective of source voltage, the amount of source (or ‘excitation’)
Fig. 21.9 Half-bridge circuit with a downward force applied to the test specimen: gauge #1 is stretched and gauge #2 compressed (bridge unbalanced)
voltage matters in an unbalanced bridge like this. Therefore, strain-gauge bridges are rated in millivolts
of imbalance produced per volt of excitation, per unit measure of force. A typical example for a strain
gauge of the type used for measuring force in industrial environments is 15 mV/V at 1000 pounds.
That is, at exactly 1000 pounds applied force (either compressive or tensile), the bridge will be
unbalanced by 15 millivolts for every volt of excitation voltage. Again, such a figure is precise if the
bridge circuit is full-active (four active strain gauges, one in each arm of the bridge), but only approxi-
mate for half-bridge and quarter-bridge arrangements.
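A mV/V rating converts a bridge reading to force by simple proportion, strictly valid only for a full-active bridge. The 15 mV/V at 1000 lb figure is the text’s example; the measured imbalance and excitation below are hypothetical.

```python
# Force from a strain-gauge bridge imbalance, using a mV/V-per-rated-force
# rating. Assumes linearity (exact only for a full-active bridge).

def force_from_bridge(v_out_mv, v_excitation, rating_mv_per_v, rated_force):
    """Force corresponding to a measured bridge imbalance."""
    return (v_out_mv / v_excitation) / rating_mv_per_v * rated_force

# Hypothetical reading: 7.5 mV imbalance at 10 V excitation on a
# 15 mV/V, 1000-lb cell:
f = force_from_bridge(7.5, 10.0, 15.0, 1000.0)
print(f, "lb")  # 50.0 lb
```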
To correctly install a strain gauge, all surfaces must be clean and free from grease before assembly. Strain
gauges can be protected from the environment in a number of ways. Techniques offering increasing
protection are
• Polyurethane varnish
• Varnish + silicone rubber
• Varnish + rubber + steel cover and sealed cable conduits
In electrically noisy environments, it is important that the wires leading to a gauge are made as a twisted pair. Any ‘pick-up’ (by induction) is then common to both wires and the voltage difference is unaffected. Installing strain gauges is a skilled art; it is easy to install gauges badly––in tension (by stretching), in compression, or with poor adhesion.
Particular combinations of gauges can be utilized in certain applications, offering both increased sensitivity––with two sensing gauges in a half-bridge––and simultaneous temperature compensation. Further, a full bridge can be used, offering 2.6 times the sensitivity of a quarter bridge. Strain gauges
are frequently used in mechanical engineering research and development to measure the stresses
generated by machinery. Aircraft-component testing is one area of application, using tiny strain-
gauge strips glued to structural members, linkages, and any other critical component of an airframe
to measure stress.
Fig. 21.11 Bonded strain gauge (tension causes resistance increase; compression causes resistance decrease)
The output is given by

Vo = (V/2) × (ΔR/R)

i.e., the output is double that from a quarter-bridge circuit. Here the tension gauge T has resistance R + ΔR, the compression gauge C has resistance R − ΔR, and two fixed resistors R complete the bridge. Further, you can demonstrate that if the resistance of both gauges increases (due to temperature or axial strain) then the output voltage remains unaffected (try it by putting the resistance of gauge C as R + ΔR).
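Both claims, the doubled output Vo = V ΔR/(2R) and the cancellation of a common resistance change, can be verified from the exact bridge expression. A sketch with illustrative values (V = 5 V, R = 120 Ω, ΔR = 0.24 Ω are assumptions, not values from the text):

```python
# Exact Wheatstone-bridge output with the tension gauge (R + dR) and
# compression gauge (R - dR) in one branch and two fixed resistors R
# in the other. Numeric values are illustrative.

def bridge_out(v, r_top, r_bot, r_ref_top, r_ref_bot):
    """Voltage between the midpoints of the two bridge dividers."""
    return v * (r_ref_bot / (r_ref_top + r_ref_bot) - r_bot / (r_top + r_bot))

V, R, dR = 5.0, 120.0, 0.24
vo = bridge_out(V, R + dR, R - dR, R, R)         # strain: T up, C down
vo_temp = bridge_out(V, R + 1.0, R + 1.0, R, R)  # equal change in both gauges
print(round(vo, 6))       # 0.005 == V*dR/(2R), double a quarter-bridge
print(round(vo_temp, 6))  # 0.0 -- common (temperature) change cancels
```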
Fig. 21.14 Measurement of axial strains
Fig. 21.15 Measurement of strain in torsion
b. Strain Gauges for Concrete Embedment strain gauges measure strain in concrete.
Typical applications include the following:
• Measuring strains in reinforced concrete and mass concrete
• Measuring curing strains
• Monitoring for changes in load
• Measuring strain in tunnel linings and supports
c. Spot-Weldable Strain Gauge Designed to measure strain in steel, this vibrating-wire strain gauge is spot-welded to the surface of the steel. A sensor is then fixed atop the gauge. Readings are obtained with a data logger.
The examples discussed above demonstrate that many different styles of gauges are needed. Manufacturers’ data sheets provide selection criteria; a few points are listed below.
a. Physical size and form––the strain gauge may be small (∼6 mm active gauge length) but this size
sets the spatial resolution limit of the measurement
b. Gauge resistance
c. Sensitivity––or the gauge factor
d. Component environment, especially temperature
e. Strain limits to be measured
f. Flexibility of gauge backing––affects whether a gauge can be bonded, e.g., to a circular shaft
g. Requirements for protection
h. Cost
Review Questions

22 Flow Measurement

‘Industries such as oil, power, chemical, food, water and waste-treatment require the
determination of the flow quantity of a fluid––either gas, liquid or steam––for their
processing and control….’
NEED OF FLOW MEASUREMENT

Flow measurement is essential in many industries such as the oil, power, chemical, food, water, and waste-treatment industries. These industries require the determination of the quantity of a fluid, either gas, liquid, or steam, that passes through a check point, either a closed conduit or an open channel, in their daily processing or operating. The quantity to be determined may be volume-flow rate, mass-flow rate, flow velocity, or other quantities related to the previous three. Common flowmeter types include the following:
i. Coriolis
ii. Differential pressure––elbow
iii. Flow nozzle
iv. Orifice
v. Pitot tube
vi. Pitot tube (averaging)
vii. Venturi
viii. Wedge
ix. Magnetic
x. Positive displacement nutating disc
xi. Oscillating piston
xii. Oval gear
xiii. Roots
xiv. Target
xv. Thermal
xvi. Turbine
Flow Measurement 535
The instrument to conduct flow measurement is called a flowmeter. The development of a flow-
meter involves a wide variety of disciplines including the flow sensors, the sensor and fluid interactions
through the use of computation techniques, the transducers and their associated signal-processing
units, and the assessment of the overall system under ideal, disturbed, harsh, or potentially explosive
conditions in both the laboratory and the field.
To select a flowmeter that suits one’s application, many factors need to be considered. The most important ones are fluid phase (gas, liquid, steam, etc.) and flow condition (clean, dirty, viscous, abrasive, open channel, etc.). The matching of fluid phase and flowmeter technology can be found in the flowmeter selection page.
The second-most important factors are line size and flow rate (they are closely related). This infor-
mation will further eliminate most submodels in each flowmeter technology.
Other fluid properties that may affect the selection of flowmeters include density (specific gravity), pressure, temperature, viscosity, and electrical conductivity. On the flow part, one needs to pay attention to the state of the fluid (pure or mixed) and the status of the flow (constant, pulsating, or variable).
Moreover, the environment temperature, the arrangements (e.g., corrosive, explosive, indoor, out-
door), the installation method (insertion, clamped-on, or inline), and the location of the flowmeter also
need to be considered, along with other factors which include the maximum allowable pressure drop,
the required accuracy, repeatability, and cost (initial set-up, maintenance, and training).
Flowmeters need to be integrated into the existing or planned piping system to be useful. There are two types of flowmeter installation methods (as shown in Fig. 22.1)––inline and insertion. The inline models include connectors to the upstream and downstream pipes, while the insertion models insert the sensor probe into the pipes.
Most flowmeters need to be installed at a point where the pipes on both sides remain straight for a
certain distance. For inline models, the inner diameters of the pipes have to be the same as the flowmeter’s
line size. Between the flowmeter and the pipes, there are two types of mostly used connecting methods—
flanged and wafer.
Among different types of connection methods, the insertion design is more flexible and more economical in larger line sizes, while the inline design is more confined and usually easier to calibrate. The wafer connection is usually less expensive than the flanged connection. However, it may require extra parts to allow the threading with pipes at both ends.
Since flowmeters are integrated instruments that measure different flow quantities by different technologies, many characteristics can be used to categorize flowmeters. Some of these are listed below:
i. Technology employed
ii. Instrumentation configuration
iii. Physical quantity measured
iv. Flow quantity converted
1. Orifice Plate A flat plate with an opening, shown in Fig. 22.2, is inserted into the pipe and placed perpendicular to the flow stream. As the flowing fluid passes through the orifice plate, the
Fig. 22.2 Orifice plate
restricted cross-sectional area causes an increase in velocity and decrease in pressure. The pressure dif-
ference before and after the orifice plate is used to calculate the flow velocity.
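The pressure drop can be related to an ideal velocity at the orifice via Bernoulli’s equation plus continuity. The sketch below uses the frictionless result (no discharge coefficient, which a real meter would need) and illustrative values for water; β is the ratio of orifice diameter to pipe diameter.

```python
import math

# Ideal velocity at an orifice from the measured pressure drop, via
# Bernoulli plus continuity. No discharge coefficient is applied, so
# this overestimates the real flow. Numeric values are illustrative.

def orifice_velocity(dp, rho, beta):
    """Ideal velocity (m/s) through the orifice for pressure drop dp (Pa),
    fluid density rho (kg/m^3) and diameter ratio beta = d_orifice/d_pipe."""
    return math.sqrt(2.0 * dp / (rho * (1.0 - beta**4)))

# Water (1000 kg/m^3), 5 kPa measured drop, beta = 0.5:
v = orifice_velocity(5000.0, 1000.0, 0.5)
print(round(v, 2), "m/s")  # 3.27 m/s
```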
2. Venturi Tube A section of tube forms a relatively long passage with smooth entry and exit.
A Venturi tube shown in Fig. 22.3 is connected to the existing pipe, first narrowing down in diameter
then opening up back to the original pipe diameter. The changes in cross-sectional area cause changes
in velocity and pressure of the flow, which is used to calculate the flow velocity.
Fig. 22.3 Venturi tube
3. Nozzle A nozzle with a smooth guided entry and a sharp exit is placed in the pipe to change the
flow field and create a pressure drop that is used to calculate the flow velocity.
The nozzle shrinks down the cross-sectional area of the pipe and creates a pressure differential.
Fig. 22.4 Nozzle
5. V-Cone A cone-shaped obstructing element that serves as the cross-sectional modifier is placed
at the centre of the pipe for calculating flow velocities by measuring the pressure differential.
6. Pitot Tube A probe with an open tip (Pitot tube) is inserted into the flow field. The tip is the
stationary (zero velocity) point of the flow. Its pressure, compared to the static pressure, is used to cal-
culate the flow velocity. Pitot tubes can measure flow velocity at the point of measurement.
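The stagnation-to-static pressure difference converts to a point velocity by v = √(2Δp/ρ) for incompressible flow. A minimal sketch with assumed values (air density and dynamic pressure are illustrative):

```python
import math

# Pitot-tube point velocity from the dynamic pressure (stagnation minus
# static), v = sqrt(2*dp/rho), valid for incompressible flow.
# Numeric values are illustrative.

def pitot_velocity(dp, rho):
    """Point velocity (m/s) from dynamic pressure dp (Pa) and density rho."""
    return math.sqrt(2.0 * dp / rho)

# Air (rho ~ 1.2 kg/m^3) with a 60 Pa dynamic pressure:
v = pitot_velocity(60.0, 1.2)
print(round(v, 1), "m/s")  # 10.0 m/s
```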
7. Averaging Pitot Tube Similar to Pitot tubes but with multiple openings, averaging Pitot
tubes take the flow profile into consideration to provide better overall accuracy in pipe flows.
Fig. 22.6 V-cone
8. Elbow When a liquid flows through an elbow, the centrifugal forces cause a pressure difference
between the outer and inner sides of the elbow. This difference in pressure is used to calculate the flow
velocity. The pressure difference generated by an elbow flowmeter is smaller than that generated by other pressure-differential flowmeters, but the upside is that an elbow flowmeter presents less obstruction to the flow.
9. Dall Tube A combination of venturi tube and orifice plate, it features the same tapering intake
portion of a Venturi tube but has a ‘shoulder’ similar to the orifice plate’s exit part to create a sharp
pressure drop. It is usually used in applications with larger flow rates.
Differential pressure flowmeters, although simple in construction and widely used in industry, have
a common drawback: They always create a certain amount of pressure drop, which may or may not
be tolerated in a particular application. Common specifications for commercially available differential
pressure flowmeters are listed below. (These specifications are for differential pressure flowmeters in
general. Individual numbers may vary from product to product.)
Types of Fluid Phases Cryogenic; gas (clean, dirty); liquid (clean, dirty, viscous, corrosive); steam (saturated, superheated); slurry (abrasive)
Advantages Low to medium initial set-up cost, can be used in wide ranges of fluid phases and flow conditions, simple and sturdy structures
In a magnetic flowmeter, a magnetic field is applied across the pipe and a voltage gauge is connected across electrodes in contact with the conductive fluid. The conductive fluid moving along the flow direction is equivalent to a conductor cutting across the magnetic field. This induces changes in the voltage reading
between the electrodes. The higher the flow speed, the higher the voltage.
According to Faraday’s law of electromagnetic induction, any change in the magnetic field with time
induces an electric field perpendicular to the changing magnetic field:
E = −N d(BA)/dt = −N dΦ/dt
Flow Measurement 543
where E is the voltage of the induced current, B is the external magnetic field, A is the cross-sectional area of the coil, N is the number of turns of the coil, and Φ = BA is the magnetic flux; the negative sign indicates that the induced current will create another magnetic field opposing the build-up of magnetic flux in the coil, based on Lenz’s law.
When applying the above equation to magnetic flowmeters, the number of turns N and the strength
of the magnetic field B are fixed. Faraday’s law becomes
E = −NB dA/dt = −NB D dl/dt = −NBVD
where D is the distance between the two electrodes (the length of conductor), and V is the flow velocity.
If we combine all fixed parameters N, B, and D into a single factor K =−NBD, we have
V = E/K
It is clear that the voltage developed is proportional to the flow velocity.
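The inversion V = E/K with K = NBD is a one-liner (working with magnitudes, dropping the Lenz’s-law sign). All numeric values below are assumed for illustration, not taken from any particular meter.

```python
# Magnetic-flowmeter velocity from the electrode voltage, V = E/(N*B*D),
# using magnitudes. All numeric values are illustrative.

def magmeter_velocity(e_volts, n_turns, b_field, electrode_spacing):
    """Flow velocity (m/s) from induced voltage E (V), turns N,
    field B (T) and electrode spacing D (m)."""
    return e_volts / (n_turns * b_field * electrode_spacing)

# 1 turn, 0.05 T field, 0.1 m electrode spacing, 10 mV reading:
v = magmeter_velocity(10e-3, 1, 0.05, 0.1)
print(v, "m/s")  # 2.0 m/s
```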
A prerequisite of using magnetic flowmeters is that the fluid must be conductive. The electrical con-
ductivity of the fluid must be higher than 3 μS/cm in most cases. A lining of non-conductive material
is often used to prevent the voltage from dissipating into the pipe section when it is constructed from
conductive material. Common specifications for commercially available magnetic flowmeters are listed
below:
Types of Fluid Phases Liquid (clean, corrosive, dirty, viscous), slurry (abrasive, fibrous), liquid (non-Newtonian, open channel)
Advantages Minimum obstruction in the flow path yields minimum pressure drop, low maintenance
cost because of no moving parts, high linearity, two and multibeam models have higher accuracy than
other comparably priced flowmeters, can be used in hazardous environments or measure corrosive or
slurry fluid flow
Limitations Requires electrical conductivity of fluid higher than 3 μS/cm in most cases, zero
drifting at no/low flow (may be avoided by low flow cut-off; new designs improve on this issue)
a. Rotating Vane
b. Nutating Disc (the disc nutates, rotates and oscillates as fluid passes)
c. Oscillating Piston (a piston and piston hub, guided by a control roller, within a measuring chamber between inlet and outlet)
d. Oval Gear (a pair of oval gears rotating with the flow)
g. Rotating Impeller
The accuracy of positive displacement flowmeters relies on the integrity of the capillary seal that sepa-
rates incoming fluid into discrete parcels. To achieve the designed accuracy and ensure that the positive
displacement flowmeter functions properly, a filtration system is required to remove particles larger
than 100 μm as well as gas (bubbles) from the liquid flow.
Positive displacement flowmeters, although simple in principle of operation and widely used in the
industry, all cause a considerable pressure drop which has to be considered for any potential application.
Types of Fluid Phases Liquid (clean, viscous, corrosive, dirty); recommended with limited applicability for some of these. Line size is 6–300 mm (1/4–12 inch); turndown ratio is 5–15 : 1, and might go as high as 100 : 1.
Advantages Low to medium initial set-up cost, can be used in viscous liquid flow
Limitations Higher maintenance cost than other non-obstructive flowmeters, high pressure drop due to its total obstruction of the flow path, not suitable for low flow rates, very low tolerance to suspension in the flow (particles larger than 100 μm need to be filtered out before the liquid enters the flowmeter); gas (bubbles) in the liquid could significantly decrease the accuracy.
Fig. 22.19 Coriolis flowmeter: flow tube, driving unit (e.g., driving coils) and displacement sensors (e.g., pickoff coils)
Advantages Higher accuracy than most flowmeters, can be used in a wide range of liquid flow
conditions, capable of measuring hot (e.g., molten sulphur, liquid toffee) and cold (e.g., cryogenic
helium, liquid nitrogen) fluid flow, low pressure drop, suitable for bi-directional flow
Flow Measurement 549
Limitations High initial set-up cost, clogging may occur and is difficult to clean, larger in overall size compared to other flowmeters, limited line-size availability
1. Doppler Ultrasonic Flowmeters These rely on the Doppler effect to relate the frequency shifts of acoustic waves to the flow velocity. They usually require some particles in the flow to reflect the signals. The rule of thumb is 25 ppm of suspended solids or bubbles with diameters of 30 μm or larger for transducers of 1 MHz or higher. Lower-frequency transducers may require ‘dirtier’ fluid conditions.
Fig. 22.20 Doppler ultrasonic flowmeter: the transducer emits waves that are reflected back by dispersed particles in the flow
The Doppler formula for a sound or light source moving toward the observer at a velocity V is

fref = f / (1 − V/c)
Since the input signal from the transducer forms an angle θ with the flow direction, the velocity V should be replaced by the projected velocity V cos θ. The acoustic waves traveling upstream and downstream will have the observed frequencies

fu = f / (1 − V cos θ / c)

fd = f / (1 + V cos θ / c)
The observed frequency shift is

$$\Delta f = f_u - f_d \approx \frac{2\,f\,V\cos\theta}{c}$$

since the flow velocity V is much smaller than the speed of sound c in the fluid. By rearranging the above equation, the flow velocity can be written as

$$V = \frac{c\,\Delta f}{2\,f\cos\theta}$$
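As a quick numerical check of this relation, the sketch below recovers a known flow velocity from the Doppler shift it would produce. All numbers (water line, 1 MHz transducer, 45° beam angle) are illustrative assumptions, not values from the text.

```python
import math

# Hypothetical operating point; none of these numbers come from the text.
c = 1480.0                   # speed of sound in water, m/s
f = 1.0e6                    # transducer frequency, Hz
theta = math.radians(45.0)   # angle between beam and flow direction

def doppler_velocity(delta_f):
    # V = c * delta_f / (2 f cos(theta)), the rearranged Doppler relation
    return c * delta_f / (2.0 * f * math.cos(theta))

# Forward check: a 2 m/s flow shifts the frequency by 2 f V cos(theta) / c
V_true = 2.0
delta_f = 2.0 * f * V_true * math.cos(theta) / c
print(round(doppler_velocity(delta_f), 9))  # prints 2.0, velocity recovered
```

Note that the shift here is only about 1.9 kHz against a 1 MHz carrier, which is why clean frequency detection (and hence sufficient reflectors in the flow) matters.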
Types of Fluid Phases Liquid––corrosive, dirty, clean; dirty gas and viscous liquid recommended for limited applicability
Line Size: Inline model: 10 ∼ 1200 mm (0.4 ∼ 48 inch); Clamped-on model: 75 mm (3 in) and up
Turndown Ratio: 100 : 1
Advantages No obstruction in the flow path, no pressure drop, no moving parts, low mainte-
nance cost, can be used in corrosive or slurry fluid flow, portable models available for field analysis and
diagnosis
2. Transit-Time Ultrasonic Flowmeter A pair (or pairs) of transducers, each having its own transmitter and receiver, is placed on the pipe wall, one (set) upstream and the other (set) downstream. The time for acoustic waves to travel from the upstream transducer to the
Fig. 22.21 Transit-time ultrasonic flowmeter (upstream and downstream transducers separated by path length L across pipe diameter D, at angle θ to the flow direction; transit times tu and td)
downstream transducer td is shorter than the time it requires for the same waves to travel from the downstream to the upstream tu. The larger the difference, the higher the flow velocity, and the flow is measured in terms of this difference.
td and tu can be expressed in the following forms:
$$t_d = \frac{L}{c + V\cos\theta}, \qquad t_u = \frac{L}{c - V\cos\theta}$$
where c is the speed of sound in the fluid, V is the flow velocity, L is the distance between the transduc-
ers and θ is the angle between the flow direction and the line formed by the transducers.
The difference of td and tu is
$$\Delta t = t_u - t_d = \frac{L}{c - V\cos\theta} - \frac{L}{c + V\cos\theta}
= \frac{2LV\cos\theta}{c^2 - V^2\cos^2\theta}
= \frac{2VX}{c^2\left[1 - \left(\dfrac{V}{c}\right)^2\cos^2\theta\right]}$$
where X is the projected length of the path along the pipe direction (X = L cos θ).
To simplify, we assume that the flow velocity V is much smaller than the speed of sound c, that is,

$$V \ll c \;\Rightarrow\; \left(\frac{V}{c}\right)^2\cos^2\theta \approx 0$$

We then have

$$\Delta t \approx \frac{2VX}{c^2}$$

or,

$$V = \frac{c^2\,\Delta t}{2X}$$
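Because V scales with c squared, any relative error in the assumed speed of sound roughly doubles in the computed velocity. A small numeric check with illustrative values (not from the text):

```python
# Hypothetical numbers: true operating conditions for a transit-time
# meter evaluated with V = c^2 * dt / (2 X).
c_true = 1480.0                     # actual speed of sound, m/s
X = 0.05                            # projected path length, m
V_true = 3.0                        # actual flow velocity, m/s
dt = 2.0 * V_true * X / c_true**2   # time difference this flow produces

# Compute V with a 1% error in the assumed speed of sound:
c_assumed = 1.01 * c_true
V_computed = c_assumed**2 * dt / (2.0 * X)
print(round(V_computed / V_true - 1.0, 4))  # prints 0.0201, about a 2% error
```

This sensitivity, together with the dependence of c on temperature and density, is what motivates the calibration-free form derived next in the text.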
Note that the speed of sound c in the fluid is affected by many factors such as temperature and den-
sity. It is desirable to express c in terms of the transit times td and tu to avoid frequent calibrations:
$$c + V\cos\theta = \frac{L}{t_d}, \qquad c - V\cos\theta = \frac{L}{t_u}$$
The speed of sound c becomes
$$c = \frac{1}{2}\left[L\left(\frac{1}{t_d} + \frac{1}{t_u}\right)\right] = \frac{(t_d + t_u)\,L}{2\,t_d t_u}$$
The flow velocity is now only a function of the transducer layout (L, X) and the measured transit
times tu and td.
$$V = \frac{c^2\,\Delta t}{2X}
= \left[\frac{(t_u + t_d)\,L}{2\,t_u t_d}\right]^2 \frac{\Delta t}{2X}
= \frac{L^2}{8X}\left[\frac{t_u + t_d}{t_u t_d}\right]^2 \Delta t
= \frac{L^2}{8X}\left[\frac{(t_u + t_d)^2\,(t_u - t_d)}{t_u^2\,t_d^2}\right]$$
The above formula can be further simplified by utilizing the following approximation:
$$\begin{aligned}
(t_u + t_d)^2 &= 4\left(\frac{t_u + t_d}{2}\right)\left(\frac{t_u + t_d}{2}\right)
= 4\left(t_u - \frac{\Delta t}{2}\right)\left(t_d + \frac{\Delta t}{2}\right)\\
&= 4\left[t_u t_d + \frac{\Delta t}{2}\,(t_u - t_d) - \frac{\Delta t^2}{4}\right]
= 4\left[t_u t_d + \frac{\Delta t^2}{4}\right]
\approx 4\,t_u t_d
\end{aligned}$$
The flow velocity can therefore be written as
$$V = \frac{L^2}{8X}\left[\frac{(t_u + t_d)^2\,(t_u - t_d)}{t_u^2\,t_d^2}\right]
\approx \frac{L^2\,\Delta t}{2X\,t_u t_d}$$
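The final result can be checked numerically: simulate the exact transit times for a known flow velocity, then recover that velocity from the transducer layout and the measured times alone, with no value of c in the formula. The geometry below is an illustrative assumption, not from the text.

```python
import math

# Hypothetical geometry, not from the text.
c = 1480.0                   # speed of sound in the fluid, m/s
theta = math.radians(60.0)   # angle between beam and flow direction
D = 0.10                     # pipe diameter, m
L = D / math.sin(theta)      # path length between the transducers
X = L * math.cos(theta)      # projected length along the pipe axis

# Exact transit times for a known flow velocity
V_true = 3.0
t_d = L / (c + V_true * math.cos(theta))   # with the flow (shorter)
t_u = L / (c - V_true * math.cos(theta))   # against the flow (longer)

# Recover the velocity without knowing c
dt = t_u - t_d
V_est = (L**2 * dt) / (2.0 * X * t_u * t_d)
print(round(V_est, 6))  # prints 3.0: layout and transit times suffice
```

For this pair of exact transit times the recovery is essentially perfect; in practice the accuracy is limited by how precisely the sub-microsecond difference Δt can be timed.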
Types of Fluid Phases Clean gas; liquid––clean, corrosive, dirty; dirty gas and open-channel or viscous liquid recommended for limited applicability
Line Size: Inline model: 10 ∼ 1200 mm (0.4 ∼ 48 inch); Clamped-on model: 75 mm (3 in) and up
Turndown Ratio: 100 : 1
Advantages No obstruction in the flow path, no pressure drop, no moving parts, low mainte-
nance cost, multi-path models have higher accuracy for wider ranges of Reynolds number, can be used
in corrosive or slurry fluid flow, portable models available for field analysis and diagnosis
Limitations Higher initial set-up cost, single path (one-beam) models may not be suitable for flow
velocities that vary over a wide range of Reynolds number
Review Questions
1. Justify that flow measurement is essential in many industries.
2. Discuss the types of flowmeters and explain any one in detail.
3. Discuss the criteria for selection of a flowmeter.
4. Explain any one type of differential pressure flowmeters.
5. Explain the working and applications of magnetic flowmeters.
6. What do you mean by positive displacement flowmeters?
7. Explain any one positive displacement flowmeter in detail using a sketch.
8. Write short notes on
a. Rotating vane
b. Coriolis flowmeters
c. Doppler ultrasonic flowmeters
d. Orifice plate
e. Pitot tube
f. Magnetic flowmeters
9. Explain the working of ultrasonic flowmeters.
Index

A
Accelerometers, 464
  capacitive, 467
Accuracy, 4, 7, 392
Addendum, 304
Air Gauge, 259
Alignment tests, 109
Allowance, 133
Amplification, 412
Amplifier, 422
  Differential, 422
  Differentiating, 424
  Integrating, 424
  Inverting, 422
  Summing, 423
Analog-to-Digital Converter, 418
Angle Dekkor, 218
Angle Gauges, 203
Attenuation, 413
Autocollimator, 90, 116, 213
Average Roughness, 284

B
Base tangent, 341
Beam, 443
Beam comparator, 80
Bearing ratio, 294
Bench Micrometer, 307
Best Size Wire, 312
Bias, 4
Bonded gauge, 535
Bourdon Tube, 476

C
Calibration, 5, 35, 37, 38, 220, 414
  Equipments used for Calibration, 37
  Standard Procedure, 36
Calibration of Gauge Block, 41
Calipers, 49
  Centre-measuring calipers, 51
  Machine travel calipers, 51
  Rolling-mill calipers, 50
  Sliding calipers, 50
Capacitors, 404
Ceramic gauge-block, 65
CMM probes, 371
Coaxiality, 108
Cold-Junction Compensation, 516
Combination set, 209
Comparator, 236
Composite Error, 334
concave radius, 368
Concentricity, 109
Constant Chord, 339
Conversion, 461
  Logarithmic, 417
  Ratiometric, 416
Coordinate Measuring Machines, 367
Coriolis flowmeters, 547
Creep, 438
Crest, 313
CRO, 426
Cross Sensitivity, 401
Cylindrical Convex Radius, 370
Cylindricity, 105

D
Data Acquisition System, 409
Data Presentation, 391
Data Transmission, 390
Dead Zone, 394
Deadweight Tester, 481
Dedendum, 304
Definitions, 2
Dial Calibration Tester, 39
Dial gauges, 110
Dial Indicator, 239
Dial Thickness Gauges, 249
Diaphragm Pressure Gauge, 477
Digital Universal Caliper, 72
Digital-to-Analog Converter, 421
Displacement Measurement, 406
Dovetail, 356
Drift, 5, 393
  Span, 393
  Zero, 393
  Zonal, 393
Drilling Machine, 116
Dynamic, 396
Dynamometers, 452

E
Eccentricity, 104
Eddy-Current Transducer, 405
Electromagnetic flowmeters, 541
Electro-mechanical Gauges, 88
Electro-mechanical Transducer, 404
Electronic comparator, 260
End Standard, 27
  End Bar, 28
  Slip Gauges, 28
Errors, 5, 397
  Gross, 397
  In Gear, 343
  Loading, 398
  Misuse, 398
  Random, 398
  Systematic, 397
Errors in measurement, 10
  Absolute Error, 10
  Alignment Error, 12
  Avoidable Error, 14
  Calibration Error, 14
  Characteristic Error, 13
  Controllable Error, 14
  Dynamic Error, 14
  Environmental Error, 13
  Loading Error, 13
  Random Error, 14
  Reading Error, 12
  Relative Error, 11
  Static Error, 12
  Stylus Pressure Error, 14
Excitation, 413

F
Filtering, 413
Fits, 133
  Clearance, 138
  Interference, 138, 140
  Transition, 138, 142
Flank Error, 317
Flanks, 303
Flatness, 80
Floating Carriage Micrometer, 308
Flowmeters, 534
Foil Gauges, 443
Force-measurement, 436
Force-ring, 446
Form, 269
Form-tester, 109, 382
Fundamental Deviations, 132

G
Gauge Factor, 522
Gauge length, 231
Gauges, 167
  Air, 169
  Bore, 174
  Filler, 178
  Plug, 167
  Radius, 177
  Ring, 167
  Snap, 169
  Splined, 177
  Taper Limit, 175
  Thread, 175
gear inspection centres, 351

O
Operational Amplifier (op-amp), 421
Optical Flats, 224
Optical square, 88
Optical-Electrical Comparators, 256
Optical-Mechanical Comparators, 254
Orifice plate, 536
Oscillating Piston, 545

P
Parallelism, 84
Performance test, 110
Photoresistors, 403
Piezoelectric, 455, 465
Piezoelectric Devices, 404
Pirani Gauge, 405
Piston Diameter Tester, 99
Pitch Errors, 304, 333
Pitch Measurement, 334
Pitot tube, 539
Pneumatic Comparator, 256
Pocket Surf, 294
Pre-amplifier, 468
Precision, 5, 7, 392
Pressure gauges, 476
Primary Sensing, 389
Profile projector, 253, 335
Protractor, 199
Proving Rings, 443
Pyrometers, 511
  Optical, 512
  Total Radiation, 514
  Selective Radiation, 512

Q
Quartz force sensors, 445

R
Radian, 197
Radius gauges, 357
Rake angle, 304
Range, 6
Readability, 6
Repeatability, 393, 438
Reproducibility, 6, 393, 438
Resolution, 6
Resolution or Discrimination, 394
Response time, 6
Root, 303
Rotary transformer, 451
Rotating Vane, 544
Roughness, 268
Roundness, 97
RTD, 503

S
Screw Threads, 300
Selective assembly, 129
Sensitivity, 6, 393
Shaft-basis, 143
Shakers, 469
Shear-cell, 440
Shrinkage, 349
SI, 19
Sigma Mechanical Comparator, 248
Signal-conditioning, 407, 411
Sine Bar, 205
Sine Centre, 210
Skid, 277
Slip gauges, 27
  Wringing of slip gauges, 29
Slip Ring, 450
Solex Comparator, 257
Spherical Convex Radius, 358
Spirit level, 75, 110
Square Master, 90
Squareness, 87
Squares, 200
Static Calibration, 395
Static, 391
Steel rule, 48
Stiffness, 461
Straight Edges, 76, 110
Straightness, 75
Strain gauge, 442, 456, 519
Stroboscope, 468
Styli, 373
Surface Plate, 83
Surface Roughness, 267
‘S’, 443

T
Talysurf, 278
Taper, Measurement, 354
Taylor’s Principles, 162
Temperature Scales, 493
Terminologies, 4
Test Mandrels, 110
Thermistor, 403, 509
Thermocouple, 498
Thermometer, 496
Three-Wire Method, 313
Threshold, 395
Tolerance, 133
  Class, 133
  Grade, 133
  Zone, 133
Tolerances, 135
  Bilateral, 135
  Unilateral, 135
Tomlinson Surface Meter, 278
Torque Measurement, 448
  Inline torque, 449
  Reaction torque, 450
Two-Wire Method, 310
TPMS, 489
Traceability, 6
Transducers, 400
Transfer gauge, 112
Types, 2

U
Ultrasonic Flowmeters, 549
Uncertainty, 7
Units Of Measurement, 14
Units, 19
Universal Measuring Instrument, 71
Universal Measuring Machine, 363

V
Vacuum Gauge, 484
Vacuum Measurement, 482
Variable Conversion, 389
Variable Manipulation, 390
Velocity Pickups, 463
Venturi Tube, 537
Vernier bevel protractor, 201
Vernier Caliper, 51
Vernier Clinometer, 210
Vernier Depth Gauge, 58
Vernier Height Gauge, 56
Vibration, 463

W
Wavelength Standards, 31
Waviness, 268
Wear Allowance, 183
Wheatstone’s Bridge, 523

X
X–Y Plotter, 425

Z
Zeiss Ultra Optimeter, 255
Plate - 1
Fig. 2.2 (b) International Standard Prototype Metre (c) Historical standard platinum–iridium metre bars
Fig. 2.4 Set of slip gauges: (a) Set of ceramic slip gauges (b) Set of cast-steel slip gauges
Plate - 2
(Straightness-measurement optics: straightness beam-splitter and straightness reflector)
Fig. 9.19 (a) Principle of profile projector, (b) Magnified image of small dimension plastic threads
(c) Magnified image of small-sized gears of rack, (d) Enlarged view of profile projector screen
(Courtesy, Metrology lab, Sinhgad College of Engg., Pune University, India)
Plate - 9
Fig. 9.25 Velocity differential-type air gauge with bar graph and digital display
(Figure shows measurement with air bore plug gauge)
(Courtesy, Mahr GmbH, Esslingen)
Fig. 10.27 (a) Ultra powers form Talysurf series surface roughness and form-measuring systems (b) For measuring small parts with outside diameters up to 25 mm
(Courtesy, Mahr GmbH, Esslingen)
Fig. 10.28 (a) Universal stand—a heavy-duty, multipurpose stand equipped with an adjustable clamp and bracket for measuring a wide variety of parts, up to 231 mm/8.375 in tall, with or without fixturing (b) Height adjustment from 0 mm to 300 mm (0 in to 11.81 in) of PFM mounting device by means of a hand wheel; table surface 400 mm × 250 mm (15.75 in × 9.84 in), granite
(Courtesy, Mahr GmbH, Esslingen)
Plate - 11
(Figure labels: measuring wires; micrometer thimble graduated in 0.002 mm; fiducial indicator; centre; base; top slide; lower slide; screen with magnified profile of test gear; test gear)
(a) Inspection of dial gauges (b) Internal measurement on plain ring gauges with one pair of calipers (c) Measurement of
a limit plug gauge located on a self-centering support (d) Universal one-coordinate measuring instrument for accurate
internal and external measurements.
Fig. 14.9 New horizontal arm-type CMM inspecting the profile of the car body
(Courtesy, Mitutoyo Company)
Fig. 14.20 Inspection robot