
METROLOGY &
MEASUREMENT
About the Authors

Anand K Bewoor is presently working as Assistant Professor in the Department of


Mechanical Engineering, Vishwakarma Institute of Information Technology,
Pune. He holds a bachelor’s degree in Mechanical Engineering from Walchand
Institute of Technology, Solapur, and a master’s degree in Mechanical
Engineering, with specialization in Production Engineering, from Walchand
College of Engineering, Sangli. Currently, he is pursuing a PhD in Mechanical
Engineering. He has worked as Vendor Development and Quality Control
Engineer in the industry and also as a faculty member in engineering colleges of Pune and
Shivaji universities. He has published several books on Production Planning and
Control, Industrial Engineering and Management, Manufacturing Processes, and
Industrial Fluid Power. He has presented several technical/research papers at
national and international conferences and published papers in reputed national
and international journals. Apart from these, Prof. Bewoor has also filed two
patents. He is also a member of various professional bodies and has worked as
a resource person at IIPE, Pune.

Vinay A Kulkarni is presently working as Lecturer in the Production


Engineering Department, D Y Patil College of Engineering, Pune. He holds
a bachelor’s degree in Production Engineering from Walchand Institute
of Technology, Solapur, and a master’s degree in Production Engineering
(specializing in Production Management), from B V B College of Engineering
and Technology, Vishweshwariah Technological University, and is a gold-
medal recipient of the university. He has presented several technical papers
at the national level and has published papers in reputed national journals. He
is also a member of various professional bodies and has worked as a resource
person at IIPE, Pune.
METROLOGY &
MEASUREMENT

Anand K Bewoor
Assistant Professor
Department of Mechanical Engineering
VIIT, Pune

Vinay A Kulkarni
Lecturer
Department of Production Engineering
D Y Patil College of Engineering, Pune

Tata McGraw-Hill Education Private Limited


NEW DELHI
McGraw-Hill Offices
New Delhi New York St. Louis San Francisco Auckland Bogotá
Caracas Kuala Lumpur Lisbon London Madrid Mexico City
Milan Montreal San Juan Santiago Singapore Sydney Tokyo Toronto
Tata McGraw-Hill
Published by Tata McGraw-Hill Education Private Limited,
7 West Patel Nagar, New Delhi 110 008.

Copyright © 2009 by Tata McGraw-Hill Education Private Limited.


No part of this publication may be reproduced or distributed in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise or stored in a database or retrieval system without the prior written permission of
the publishers. The program listings (if any) may be entered, stored and executed in a computer system, but they may not
be reproduced for publication.

This edition can be exported from India only by the publishers,
Tata McGraw-Hill Education Private Limited.

ISBN (13): 978-0-07-014000-4


ISBN (10): 0-07-014000-6

Managing Director: Ajay Shukla

General Manager: Publishing—SEM & Tech Ed: Vibha Mahajan


Dy Manager—Sponsoring: Shukti Mukherjee
Asst. Sponsoring Editor: Suman Sen
Executive—Editorial Services: Sohini Mukherjee
Senior Production Manager: P L Pandita

General Manager: Marketing—Higher Education & School: Michael J Cruz


Product Manager: SEM & Tech Ed: Biju Ganesan
Controller—Production: Rajender P Ghansela
Asst. General Manager—Production: B L Dogra

Information contained in this work has been obtained by Tata McGraw-Hill, from sources believed to be reliable. However,
neither Tata McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein,
and neither Tata McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use
of this information. This work is published with the understanding that Tata McGraw-Hill and its authors are supplying
information but are not attempting to render engineering or other professional services. If such services are required, the
assistance of an appropriate professional should be sought.

Typeset at Mukesh Technologies Pvt. Ltd., #10, 100 Feet Road, Ellapillaichavadi, Pondicherry 605 005 and printed at
Avon Printers, Plot No. 16, Main Loni Road, Jawahar Nagar Industrial Area, Shahdara, Delhi 110 094
Cover: SDR
RCXCRRCFDXQAA

The McGraw-Hill Companies


Dedicated to
Lord LaxmiVenkatesh,
Our Families, Teachers and Students
Anand K Bewoor
Vinay A Kulkarni
Contents

Preface xii
List of Important Symbols xiv
List of Important Abbreviations xvi
Visual Walkthrough xvii
1. Introduction to Metrology 1
1.1 Definitions of Metrology 2
1.2 Types of Metrology 2
1.3 Need of Inspection 3
1.4 Metrological Terminologies 4
1.5 Principal Aspects of Measurement 7
1.6 Methods of Measurements 8
1.7 Measuring Instruments and their Selection 9
1.8 Errors in Measurement 10
1.9 Units of Measurement 14
1.10 Metric Units in Industry 19
Review Questions 21
2. Measurement Standards 22
2.1 Introduction 23
2.2 The New Era of Material Standards 24
2.3 Types of Standards 25
2.4 Subdivision of Standards 33
2.5 Calibration 34
Review Questions 45
3. Linear Metrology 46
3.1 Introduction 47
3.2 Steel Rule (Scale) 48
3.3 Calipers 49
3.4 Vernier Caliper 51
3.5 Vernier Height Gauge 56
3.6 Vernier Depth Gauge 58
3.7 Micrometers 59
3.8 Digital Measuring Instrument for External and Internal Dimensions 71
3.9 Digital Universal Caliper 72
Review Questions 73

4. Straightness, Flatness, Squareness, Parallelism, Roundness, and Cylindricity Measurements 74
4.1 Introduction 74
4.2 Straightness Measurement 75
4.3 Flatness Measurement 77
4.4 Parallelism 84
4.5 Squareness Measurement 87
4.6 Roundness Measurement 92
4.7 Cylindricity 100
4.8 Coaxiality 103
4.9 Eccentricity and Concentricity 104
4.10 Industrial Applications 104
Review Questions 106
5. Metrology of Machine Tools 108
5.1 Geometrical (Alignment Tests) 109
5.2 Performance Test (Practical Test) 110
5.3 Machine-Tool Testing 112
Review Questions 124
6. Limits, Fits and Tolerances 126
6.1 Introduction 127
6.2 Concept of Interchangeability 127
6.3 Selective Assembly 129
6.4 System’s Terminologies 130
6.5 Limits and Tolerances 133
6.6 Fits 137
6.7 System of Fit 142
6.8 Indian Standards Specifications and Application 144
6.9 Geometrical Tolerances 159
6.10 Limit Gauges and Design of Limit Gauges 162
Review Questions 194
7. Angular Metrology 196
7.1 Introduction 196
7.2 Radians and Arc Length 197
7.3 Angle-Measuring Devices 198
Review Questions 219
8. Interferometry 221
8.1 Introduction 221
8.2 Monochromatic Light as the Basis of Interferometry 222
8.3 The Principle of Interference 222
8.4 Interference Bands using Optical Flat 224
8.5 Examples of Interference Patterns 227
8.6 NPL Flatness Interferometer 230
8.7 Gauge Length Interferometer 231
Review Questions 234

9. Comparator 236
9.1 Introduction 236
9.2 Desirable Features of Comparators 238
9.3 Classification of Comparators 238
Review Questions 264
10. Metrology of Surface Finish 266
10.1 Introduction 267
10.2 Terms Used in Surface-Roughness Measurement 267
10.3 Factors Affecting Surface Finish in Machining 272
10.4 Surface-Roughness Measurement Methods 276
10.5 Precautions for Surface-Roughness Measurement 281
10.6 Surface Texture Parameters 282
10.7 Pocket Surf 295
10.8 Specifying the Surface Finish 296
Review Questions 298
11. Metrology of Screw Threads 300
11.1 Understanding Quality Specifications of Screw Threads 300
11.2 Screw Thread Terminology 302
11.3 Types of Threads 305
11.4 Measurement of Screw Threads 307
11.5 Measurement of Thread Form Angle 316
11.6 Measurement of Internal Threads 318
Review Questions 322
12. Metrology of Gears 324
12.1 Introduction 324
12.2 Types of Gears 326
12.3 Spur Gear Terminology 328
12.4 Forms of Gears 330
12.5 Quality of (Spur) Gear 331
12.6 Errors in Spur Gear 332
12.7 Measurement and Checking of Spur Gear 334
12.8 Inspection of Shrinkage and Plastic Gears 349
12.9 Measurement Over Rollers 349
12.10 Recent Development in Gear Metrology 349
Review Questions 352
13. Miscellaneous Measurements 354
13.1 Measurement of Taper on One Side 354
13.2 Measurement of Internal Taper 355
13.3 Measurement of Included Angle of Internal Dovetail 356
13.4 Measurement of Radius 357
Review Questions 360
14. Study of Advanced Measuring Machines 361
14.1 Concept of Instrument Overlapping 362
14.2 Metrology Integration 362
14.3 Universal Measuring Machine 363
14.4 Use of Numerical Control for Measurement 367
14.5 Optical 3D Measuring Instruments: Laser Vision 376
14.6 In-process Gauging 380
14.7 Form Testing: Case Study 382
14.8 Improvement Opportunities 384
Review Questions 384
15. Introduction to Measurement Systems 386
15.1 Definition of Measurement 386
15.2 Methods of Measurement 387
15.3 Classification of Measuring Instruments 388
15.4 Generalized Measurement System 389
15.5 Performance Characteristics of Measuring Devices 391
15.6 Types of Errors 397
Review Questions 398
16. Intermediate Modifying and Terminating Devices 400
16.1 Transducers 400
16.2 Use of Transducers for Displacement Measurement 406
16.3 Introduction to Intermediate Modifying Devices 408
16.4 Signal-Conditioning Systems 411
16.5 Introduction to Terminating Devices 425
Review Questions 431
17. Force and Torque Measurement 433
17.1 SI Units of Force and Torque 434
17.2 Force-measurement System 436
17.3 Force and Load Sensors 439
17.4 Dynamic Force Measurement 445
17.5 Torque Measurement 448
17.6 Motor and Engine-testing Dynamometers 453
17.7 Strain Gauges 456
Review Questions 459
18. Vibration Measurements 460
18.1 Vibration-Measurement System 461
18.2 Modeling Vibration System 461
18.3 Concept of Equation of Motion: Natural Frequency 461
18.4 Vibration-Measurement System Elements 463
Review Questions 471
19. Pressure Measurement 472
19.1 Zero Reference for Pressure Measurement 473
19.2 Interesting Development of Pressure Measurement 474
19.3 Mechanical Analog Pressure Gauges 476
19.4 Low Pressure (Vacuum) Measurement 482
19.5 Digital Pressure Gauges 486
19.6 Pressure Transmitters 487
19.7 Measuring Pressure at High Temperatures 487
19.8 Impact 488
19.9 Case Study of Pressure Measurement and Monitoring 489
Review Questions 491
20. Temperature Measurement 493
20.1 Temperature Scales 493
20.2 Temperature-Measuring Devices 495
20.3 Thermometer 496
20.4 Thermocouple 498
20.5 Resistance Temperature Detectors (RTD) 503
20.6 Thermistor 509
20.7 Pyrometers 511
Review Questions 517
21. Strain Measurement 518
21.1 Bonded Gauge 519
21.2 Unbonded Strain Gauge 519
21.3 Resistance of a Conductor 519
21.4 Wheatstone’s Bridge Circuit 523
21.5 Strain-Gauge Installation 529
21.6 Axial, Bending and Torsional Strain Measurement 529
21.7 Gauge-Selection Criteria 532
Review Questions 533
22. Flow Measurement 534
22.1 Types of Flowmeters 534
22.2 Selection of a Flowmeter 535
22.3 Installation of a Flowmeter 535
22.4 Classification of Flowmeters 536
Review Questions 553
Index 554
Preface

Worldwide trade is leading to a greater awareness of the role that dimensional and mechanical
measurement plays in underpinning activities in all areas of science and technology. Measurement
provides a fundamental basis not only for the physical sciences and engineering, but also for chemistry,
the biological sciences and related areas such as the environment, medicine, agriculture and food.
Laboratory programmes have been modernized, sophisticated electronic instrumentation has been
incorporated into them, and newer techniques have been developed. Keeping these developments in
mind, this book deals not only with the techniques of dimensional measurement but also with the
physical aspects of measurement techniques.
In today’s world of high-technology products, the requirements for dimensional and other
accuracy controls are becoming very stringent, since dimensional control is a key aspect of achieving
quality and reliability in the service of any product. Unless the manufactured parts are accurately
measured, quality cannot be assured. In this context, the first part of the book
deals with the basic principles of dimensional measuring instruments and precision measurement tech-
niques. This part of the book starts with discussing the basic concepts in metrology and measurement
standards in the first two introductory chapters. Then, linear, angular, machine tool and geometrical
shape metrology along with interferometry techniques and various types of comparators are explained
thoroughly in the subsequent chapters. Concepts of limits, fits and tolerances and measurement of
surface finish are illustrated in detail. Chapters 11 and 12 discuss the metrology of standard machine
parts like screw threads and gears respectively. Miscellaneous measurement and recent advancements in
the field of metrology are discussed in the last two chapters of the first part of the book.
The second part of this book begins with the explanation of measurement systems and transducers.
The methods of measuring mechanical quantities, viz., force, torque, vibration, pressure, temperature,
strain and flow measurement are discussed subsequently, covering both the basic and derived quantities.
Effort has been made to present the subject in SI units. Some of the recent developments such as use
of laser techniques in measurement have also been included.
The Online Learning Center of the book can be accessed at http://www.mhhe.com/bewoor.mm
and contains the following material:
For Instructors
• Solution Manual
• PowerPoint lecture slides
• Full-resolution figures and photos from the text
• Model syllabi
For Students
• Interactive quiz
• Objective-type questions

Our objective is to provide an integrated presentation of dimensional and mechanical measurement.


This book has been developed in recognition not only of the interdisciplinary nature of engineering
practice, but also of current trends in the engineering curriculum. The authors have consistently crafted a text such that
it gives the reader a methodical and well-thought-out presentation that covers fundamental issues common
to almost all areas of dimensional and mechanical measurement. Information on particular instruments
and concepts has been combined to improve the logical flow of the manuscript. The coverage is such that
the book will be useful for postgraduate, graduate, polytechnic and ITI engineering students, for other
graduation-level examinations (like AMIE), and for competitive and entrance examinations like
GATE. We believe that the concise presentation, the flexible approach readily tailored to individual instructional
needs and the carefully structured topics of the book allow the faculty a wide scope in choosing the coverage
plan for students and will prove to be a good resource material for teachers. It would also be equally helpful
to professionals and practicing engineers in the field of design, manufacturing and measurement.
We wish to express our special thanks to measurement-instrument manufacturers, viz.,
M/s Mahr Gmbh for permitting us to use the figures from their product catalogue in the present text.
We owe our gratitude to many of our colleagues and the management of Vishwakarma Institute of
Information Technology, Pune; Sinhgad College of Engineering, Pune; and D Y Patil College of Engi-
neering, Akurdi. We extend our sincere thanks to all experts for giving introductory comments in the
chapters, something which we feel will motivate the reader to study the topic. We also wish to thank the
following reviewers who took out time to review the book. Their names are given below.
Ajay G Chandak Jankibai Trust
‘Shamgiri’, Deopur
Dhule, Maharashtra
Manzoor Hussain Department of Mechanical Engineering
College of Engineering (JNTU)
Kukatpalli, Andhra Pradesh
C P Jesuthanam Noorul Islam College of Engineering
Nagercoil, Tamil Nadu
P Chandramohan Department of Mechatronics Engineering
Sri Krishna College of Engineering and Technology
Coimbatore, Tamil Nadu
P S Sreejith Cochin University of Science and Technology (CUSAT)
Cochin, Kerala
Shankar Chakraborty Department of Production Engineering
Jadavpur University,
Kolkata, West Bengal
We are very grateful to our family members for their patience, encouragement and understanding.
Thanks are also due to many individuals at Tata McGraw-Hill Education Private Limited who
have contributed their talents and efforts in bringing out this book.
Suggestions and feedback to improve the text will be highly appreciated. Please feel free to write to
us at anandbewoor@rediffmail.com and kulkarnivinay@rediffmail.com.

ANAND K BEWOOR
VINAY A KULKARNI
List of Important Symbols

H : Combination slip gauge set


ΔL : Change in conductor length
V : Excitation voltage
L : Length (m)
L : Fixed distance between two roller centers of Sine-Bar
n : Number of half wavelengths
Nf : Nominal fraction of the surface
R : Resistance of a conductor
ρ : Resistivity
β : Experimentally determined constant for a given thermistor material (generally of the order of 4000)
Δa : Average Absolute Slope
λa : Average Wavelength
Δq : RMS Average Slope
λq : RMS Average Wavelength
θ : Angular position
φ : Pressure angle
K : Kelvin
ºR : Rankine
C : Constant
D : Depth of the thread
Db : Constant pitch value
E : Effective diameter
F : Force
fb : Lead error
fn : Natural frequency
fp : Accumulated pitch error
fpb : Normal pitch error
fpt : Single pitch error
Fr : Run-out error of gear teeth
GF : Gauge factor
H : Chordal addendum of the gear at which the magnitude of ‘W’ is to be measured

K : Stiffness
Lo : Actual Profile Length/Profile Length Ratio
m : Mass of the body
m : Module = (pitch circle diameter)/(number of teeth) = 2R/z
P : Pitch of the thread
p : Constant pitch value
Pc : Peak count
r : Radius at the top and bottom of the threads
R : Resistance at the measured temperature, t
Ro : Resistance at the reference temperature, to
R1, R2, R3, R4 : Resistance
Ra : Average roughness value
Rku : Measure of the sharpness of the surface profile
Rmax : Maximum height of unevenness/maximum peak to valley height within a sample
length
Rp : Maximum peak height
Rq : Root mean square roughness
Rsk : Measurement of skewness
Rv : Maximum valley height
Rz(ISO) : Sum of the height of the highest peak plus the lowest valley depth within a sampling
length
Rz(JIS) : The ISO 10-point height parameter (as used in JIS)
S : Number of tooth space contained within space ‘W’
Sk : Skewness
Sm : Mean spacing
T : Dimension under the wires
T0 : Reference temperature generally taken as 298 K (25°C)
Vo : Output voltage
W : Chordal tooth thickness
x : Displacement
z : Number of teeth on gear
σ : Standard deviation
μ : Micron
δθ : Small angle (increment/change)
α, β, θ : Angles
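Two of the relations in this list can be exercised numerically: the gear module m = 2R/z, and the thermistor constant β, used here in the standard β-model R = R₀·exp(β(1/T − 1/T₀)). The model itself is an assumption on our part (only its symbols appear in the list above), and all values are illustrative:

```python
import math

def gear_module(pitch_circle_diameter_mm, teeth):
    """Gear module m = (pitch circle diameter)/(number of teeth) = 2R/z."""
    return pitch_circle_diameter_mm / teeth

def thermistor_resistance(R0, beta, T, T0=298.0):
    """Resistance of an NTC thermistor by the common beta-model (assumed):
    R = R0 * exp(beta * (1/T - 1/T0)), temperatures in kelvin,
    T0 = 298 K (25 deg C) as given in the symbols list."""
    return R0 * math.exp(beta * (1.0 / T - 1.0 / T0))

m = gear_module(80.0, 40)                            # 2.0 mm module for an 80-mm PCD, 40-tooth gear
R = thermistor_resistance(10_000.0, 4000.0, 323.0)   # resistance falls as temperature rises
```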
List of Important Abbreviations

AA : Arithmetic Average
ADC : Analog-to-Digital Converter
AF : Audio Frequency
AFD : Amplitude Distribution Function
AM : Amplitude Modulation
BIPM : International Bureau of Weights and Measures
BIS : Bureau of Indian Standards
BS : British Standards
CIM : Computer Integrated Manufacturing
CIPM : International Committee for Weights and Measures
CMM : Coordinate Measuring Machine
CNC : Computer Numerical Control
DAC : Digital-to-Analog Converter
DAQs : Data Acquisition Devices
DIP : Dual In-line Package
DNL : Differential Non-Linearity
DPM : Digital Panel Meter
EWL : Effective Working Length
FD : Fundamental Deviation
FM : Frequency Modulation
HSC : High Spot Count
I/O : Input/Output
IC : Integrated Circuit
ID : Internal Diameter
INL : Integral Non-Linearity
IPTS : International Practical Temperature Scale
IR : Infrared
ISO : International Organization for Standardization
LC : Least Count
LCD : Liquid Crystal Display
LVDT : Linear Variable Differential Transformer
MEMS : Microelectromechanical Systems
NBS : National Bureau of Standards
NTC : Negative Temperature Coefficient (thermistor)
OD : Outer Diameter
Op-amp : Operational Amplifier
PSI : Pounds per Square Inch
PTC : Positive Temperature Coefficient (thermistor)
QS : Quality System
RMS : Root Mean Square
RSM : Remote Sensing Module
RTDs : Resistance Temperature Detectors
SAR : Successive-Approximation Register
SI : International System of Units
SINAD : Signal-to-Noise and Distortion Ratio
SIP : Single In-line Package
SNR : Signal-to-Noise Ratio
SPC : Statistical Process Control
UUT : Unit Under Test
Visual Walkthrough

Introductory Quotation
Each chapter begins with an introductory quotation (by an eminent personality in the respective field) that is not only motivating but also conveys the importance of the subject matter of the chapter. A sample, from Chapter 2, Measurement Standards:

“Precision is religion and measurement standards make it happen!”
Arun Kudale, MD, Kudale Calibration Laboratory (P) Ltd., Pune

WHAT ARE MEASUREMENT STANDARDS?
Line and end standards are referred to as ‘measurement standards’ in industries and are used as references for calibration purposes. In the modern metrological era, digital instruments such as a periodically calibrated digital height gauge are commonly used. In India, light-wave standards (wavelength) are used for laboratory purposes only and are not used commercially. Owing to its cost, the laser is restricted to alignment testing and assessment of the movement of subassemblies.
In general, there are four levels of standards used as references all over the world, viz., primary, secondary, tertiary and working standards. The primary standard is the one kept in Paris, and the secondary is the one kept with NPL India; the tertiary standard is the one which we use in our industries as a reference for calibration. Working standards are used on the shop floor. Hence it could be said that there is an unbroken chain for tracing the standards. Every country has a custodian who looks after the secondary standards; the National Physical Laboratory (NPL) holds the secondary standard for India. My company holds tertiary standards and is accredited by the National Accreditation Board for Testing and Calibration Laboratories. The type of standard being calibrated governs the use of primary/secondary standards as a reference; e.g., slip gauges are calibrated once in three years. Determination and confirmation of length and calibration must be made under specified conditions. The National Accreditation Board for Testing and Calibration Laboratories specifies that a calibration laboratory should be adequately free from vibrations generated by the central air-conditioning plant, vehicular traffic and other sources. In other words, there should be vibration-free operational conditions, illumination of 450 lux to 700 lux on the working table with a glare index of 19 for lab work, a generally dust-free atmosphere, temperature controlled at 20 ± 1°C and humidity controlled at 50 ± 10%. To avoid any such adverse effects on instruments, a calibration laboratory is required to be set underground. In our opinion, quality should be built in at the design stage, which is an important key factor in designing a ...
Introduction
Each chapter begins with an introduction that gives a brief summary of the background and contents of the chapter. A sample, from Chapter 3, Linear Metrology:

... rather than the sliding scale of the vernier caliper. This allows the scale to be placed more precisely and, consequently, the micrometer can be read to a higher precision. Length metrology is the measuring hub of metrological instruments, and sincere efforts must be made to understand the operating principles of the instruments used for various applications.

3.1 INTRODUCTION
Length is the most commonly used category of measurement in the world. In ancient days, length measurement was based on different human body parts, such as the nail, digit, palm, handspan and pace, as reference units, with multiples of these making up bigger length units.
Linear metrology is defined as the science of linear measurement, for the determination of the distance between two points in a straight line. Linear measurement is applicable to all external and internal measurements, such as distance, length and height difference, diameter, thickness and wall thickness, straightness, squareness, taper, axial and radial run-out, coaxiality and concentricity, and mating measurements, covering the whole range of metrology work on a shop floor. The principle of linear measurement is to compare the dimension to be measured with the standard dimensions marked on the measuring instrument. Linear measuring instruments are designed either for line measurements or for end measurements, discussed in the previous chapter.
Linear metrology follows two approaches:
1. Two-Point Measuring-Contact-Member Approach Out of two measuring contact members, one is fixed while the other is movable and is generally mounted on the measuring spindle of an instrument, e.g., a vernier caliper or micrometer for measuring distance.
2. Three-Point Measuring-Contact-Member Approach Out of three measuring contact members, two are fixed and the remaining one is movable; e.g., to measure the diameter of a bar held in a V-block, the V-block provides two contact points and the third, movable contact point is that of the dial gauge.
The instruments used in length metrology are generally classified into two types:
i. Non-precision measuring instruments, e.g., steel rule
ii. Precision measuring instruments, e.g., vernier calipers, micrometer
In our day-to-day life, we see almost all products made up of different components. Modern products involve a great deal of complexity in production, and such complex products have interchangeable parts that fit into other components. The various parts are assembled to make a final end product, which calls for accurate inspection. If there are thousands of such parts to be measured, the instruments will be used thousands of times and must in that case retain their accuracy ...
Sections and Sub-sections
Each chapter has been neatly divided into sections and sub-sections so that the subject matter is studied in a logical progression of ideas and concepts. A sample, from Chapter 11, Metrology of Screw Threads:

11.4 MEASUREMENT OF SCREW THREADS
1. Geometrical Parameters
a. Major diameter: bench micrometer
b. Minor diameter: bench micrometer
c. Thread angle and profile: optical profile projector, pin measurement
2. Functional Parameters
a. Effective diameter: screw-thread micrometer, two- or three-wire methods, floating-carriage micrometer
b. Pitch: screw pitch gauge, pitch-error testing machine

Measurement of screw threads can be done by inspection and checking of the various elements of the thread. Nuts and other elements are checked during mass production by plug gauges or ring gauges.

11.4.1 Measurement of Major Diameter
A bench micrometer serves for measuring the major diameter of parallel plug screw gauges. It consists of a cast-iron frame on which are mounted a micrometer head with an enlarged thimble opposite a fiducial indicator; the assembly makes a calliper by which measurements are reproducible within ±0.001 mm (±0.00005 in). The micrometer is used as a comparator. Thus, the bench-micrometer reading R_B is taken on a standard cylindrical plug of known diameter B, of about the same size as the major diameter to be measured. A reading R_G is then taken across the crests of the gauge. The major diameter D is then given by D = B + (R_G − R_B).
Readings should be taken along and around the gauge to explore the variations in major diameter (Fig. 11.9, Bench micrometer). Finally, the reading R_B on the standard should be checked to confirm that the original setting has not changed. It is recommended that the measurement be repeated at three positions along the thread to determine the amount of taper that may be present.

11.4.2 Measurement of Minor Diameter
For checking the minor diameter, the anvil end and the spindle end would have to reach the thread roots on opposite sides, which they cannot do directly. Therefore, wedge-shaped pieces are held between the anvil face and the root of the thread, and between the spindle face and the opposite root. One reading is taken over a dummy minor diameter ...

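The bench-micrometer comparison for the major diameter can be put into a few lines of code. This is a minimal sketch, using the convention that the gauge diameter equals the standard's diameter plus the difference between the reading over the gauge and the reading over the standard; the helper name and readings are illustrative, not from the text:

```python
def major_diameter(B, R_B, R_G):
    """Major diameter of a screw-plug gauge by bench-micrometer comparison.

    B   : known diameter of the standard cylindrical plug (mm)
    R_B : micrometer reading taken over the standard (mm)
    R_G : micrometer reading taken over the thread crests of the gauge (mm)
    Convention assumed here: D = B + (R_G - R_B), readings increasing with size.
    """
    return B + (R_G - R_B)

# Illustrative (made-up) readings:
D = major_diameter(B=20.0000, R_B=10.0125, R_G=10.0035)  # 19.9910 mm
```

Repeating the computation for readings taken at three positions along the thread, as the text recommends, reveals any taper.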
Illustrative Examples
Illustrative Examples are provided in sufficient number in each chapter, and at appropriate locations, to aid in understanding of the text material. A sample, from Chapter 6, Limits, Fits and Tolerances:

Example 1 Design a plug gauge for checking the hole 70H8. Use i = 0.45 ∛D + 0.001D, IT8 = 25i, diameter step = 50 to 80 mm.

Solution: Internal dimension = 70H8; d1 = 50, d2 = 80
D = √(d1 × d2) = √(50 × 80) = 63.245 mm
i = 0.45 ∛63.245 + 0.001(63.245) = 1.8561 microns
Tolerance for IT8 = 25i = 25(1.8561) = 46.4036 microns
Hole dimensions:
GO limit of hole = 70.000 mm
NO-GO limit of hole = 70.000 + 0.04640 = 70.04640 mm
GO plug gauge design: the hole tolerance is less than 87.5 microns, and it is necessary to provide a wear allowance on the GO plug gauge.
Workmanship allowance = 10% of hole tolerance = (10/100) × 0.04640 = 0.004640 mm
Upper limit of GO = 70.0000 + 0.004640 = 70.00464 mm
Sizes of GO plug gauge = 70 (+0.004640 / +0.000000) mm
NO-GO plug gauge: workmanship allowance = 0.004640 mm
NO-GO sizes = 70 (+0.04640 / +0.04640 − 0.004640) = 70 (+0.04640 / +0.04176) mm

Example 2 Design and make a drawing of general-purpose GO and NO-GO plug gauges for inspecting a hole of 22 D8. Data with usual notations:
i. i (microns) = 0.45 ∛D + 0.001D
ii. Fundamental deviation for hole D = 16 D^0.44
iii. Value for IT8 = 25i
Solution (key steps): work tolerance = 0.0326 mm
(c) Gaugemaker's tolerance (refer Article 6.9.4(c)) = 10% of work tolerance = 0.0326(0.1) = 0.00327 mm
(d) Wear allowance (refer Article 6.9.4(d)) = 10% of gaugemaker's tolerance = 0.00327(0.1) = 0.000327 mm
(e) For the general-purpose gauge, the size of the GO plug gauge after considering wear allowance = 22.06386 + 0.000327 = 22.0641 mm
∴ GO size is 22.0641 (+0.00327 / −0.00) mm and NO-GO size is 22.0965 (+0.00327 / −0.00) mm (refer Fig. 6.51, Graphical representation of the general-purpose gauge).

Example 3 Design a ‘workshop’ type GO and NO-GO gauge suitable for 25 H7. Data with usual notations:
1. i (in microns) = 0.45 ∛D + 0.001D
2. The value for IT7 = 16i
Solution:
(a) Firstly, find out the dimensions of the hole specified, i.e., 25 H7. ...
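The gauge-design arithmetic of Example 1 is easy to script. The sketch below simply encodes the formula i = 0.45 ∛D + 0.001D and the 10% allowance rules used in the example; the function and variable names are ours, not the book's:

```python
import math

def tolerance_unit_um(D_mm):
    """Standard tolerance unit i (micrometres) for a mean diameter D (mm)."""
    return 0.45 * D_mm ** (1 / 3) + 0.001 * D_mm

# Example 1: hole 70H8, diameter step 50-80 mm
D = math.sqrt(50 * 80)                 # geometric mean of the step, ~63.245 mm
i = tolerance_unit_um(D)               # ~1.8561 micron
IT8_um = 25 * i                        # hole (work) tolerance, ~46.40 micron

go_limit = 70.000                      # GO limit of the hole, mm
no_go_limit = 70.000 + IT8_um / 1000   # NO-GO limit, ~70.0464 mm
workmanship_um = 0.10 * IT8_um         # 10% of hole tolerance, ~4.64 micron
wear_um = 0.10 * workmanship_um        # 10% of the gauge tolerance, ~0.464 micron
```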
Solved Problems with Detailed Explanations
In chapters which involve analytical treatment, numerical problems related to those concepts are explained stepwise at the end of the chapters, enabling the student to gain a good comprehension of the subject matter. A sample:

Example 4 Design ‘workshop’, ‘inspection’ and ‘general’ type GO and NO-GO gauges for checking the assembly φ25 H7/f8, and comment on the type of fit. Data with usual notations:
1) i (microns) = 0.45 ∛D + 0.001D
2) Fundamental deviation for shaft ‘f’ = −5.5 D^0.412
3) Value for IT7 = 16i and IT8 = 25i
4) 25 mm falls in the diameter step of 18 and 30.

Solution:
(a) Firstly, find out the dimensions of the hole specified, i.e., 25 H7.
For a diameter of 25 mm, the step size (refer Table 6.3) is (18 − 30) mm
∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm
and i = 0.45 ∛23.2379 + 0.001(23.2379) = 1.3074 microns
Tolerance value for IT7 = 16i (refer Table 6.4) = 16(1.3074) = 20.85 ≈ 21 microns = 0.021 mm
(b) Limits for 25 H7 = 25.000 (+0.021 / −0.000) mm
∴ Tolerance on hole = 0.021 mm
Tolerance value for IT8 = 25i (refer Table 6.4) = 25(1.3074) = 32.6435 ≈ 33 microns
(c) Fundamental deviation for shaft ‘f’ = −5.5 D^0.412 = −5.5(23.2)^0.412 = −10.34 ≈ −10 microns ...
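Step (a) of Example 4 can be checked the same way. This sketch derives the IT7 and IT8 values from the diameter step, rounding to whole microns as the text does (names are ours):

```python
import math

def iso_it_um(d1_mm, d2_mm, multiplier):
    """IT tolerance in microns for a diameter step d1-d2 (mm).

    multiplier is the grade factor from Table 6.4 (16 for IT7, 25 for IT8)."""
    D = math.sqrt(d1_mm * d2_mm)           # geometric mean diameter, mm
    i = 0.45 * D ** (1 / 3) + 0.001 * D    # standard tolerance unit, microns
    return multiplier * i

IT7_um = iso_it_um(18, 30, 16)               # ~21 microns after rounding
IT8_um = iso_it_um(18, 30, 25)               # ~33 microns after rounding
upper_limit = 25.000 + round(IT7_um) / 1000  # 25.021 mm for 25 H7
lower_limit = 25.000                         # H holes: fundamental deviation is zero
```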

Fig. 7.7 Digital vernier bevel protractor: (a) outside measurement, (b) inside measurement, (c) depth measurement, (d) depth (distance) measurement, (e) height measurement and transfer, (f) specially designed anvils for measurements

Fig. 7.28 Set of autocollimator along with square prism and mirror to measure small angular tilts in the horizontal plane
(Courtesy, Metrology Lab Sinhgad COE, Pune)

Measurement Uncertainties
• Visual setting autocollimator ±0.3 second of an arc over any interval up to 10 minutes of an arc
• Photoelectric setting autocollimator Typically ±0.10 second of an arc over any interval up to 10 minutes of an arc
• Automatic position-sensing electronic autocollimator Typically ±0.10 second of an arc over any interval up to 10 minutes of an arc

A combined effort of Renishaw and NPL has resulted in a small-angle generator with an uncertainty of ±0.01 second of an arc over a range of 60 seconds of an arc. The equipment shown in Fig. 7.29 has now been installed at NPL, where it has been evaluated and is being used to provide a calibration service for high-accuracy autocollimators.

Fig. 7.29 Autocollimator
Fig. 6.32 Automatic gauge system (Courtesy, Mahr GMBH, Esslingen)

Photographs |
Photographs of instruments and their applications are presented at appropriate locations in the book.

Fig. 9.5 Mechanical dial indicator (comparator) with limit contacts; (1, 2, 3) are the relays, (A, B) are adjusting screws for electric contacts (Courtesy, Mahr GMBH Esslingen)

Fig. 17.6 Load-cell applications: engine weighing, dynamometry, batch weighing, spring testing, brake testing, pedal-force measurement and checking connector-insertion force

(i) Foil Gauges offer the largest choice of different types and in consequence tend to be the most used in load-cell designs. Strain-gauge patterns offer measurement of tension, compression and shear forces.

(ii) Semiconductor Strain Gauges come in a smaller range of patterns but offer the advantages of being extremely small and have large gauge factors, resulting in much larger outputs for the same given stress. Due to these properties, they tend to be used for miniature load-cell designs.

(iii) Proving Rings are used for load measurement using a calibrated metal ring, the movement of which is measured with a precision displacement transducer.

A vast number of load-cell types have developed over the years, the first designs simply using a strain gauge to measure the direct stress which is introduced into a metal element when it is subjected to a tensile or compressive force. A bending-beam-type design uses strain gauges to monitor the stress in the sensing element when subjected to a bending force. More recently, the measurement of shear stress has been adopted as a more efficient method of load determination as it is less dependent on the way and direction in which the force is applied to the load cell.

(iv) 'S' or 'Z' Beam Load Cell This is a simple-design load cell where the structure is shaped as an 'S' or 'Z' and strain gauges are bonded to the central sensing area in the form of a full Wheatstone bridge.

Fig. 17.7 Load/force cells

Exploded view of a dial indicator (Types 1000/1002/1003/1004): self-contained movement that can be removed and replaced quickly; box-type opening ensuring constant measuring force; lockable fine adjustment; jewelled bearings of the movement, in conjunction with precision gears and pinions, ensuring maximum sensitivity and accuracy; mounting shank and measuring spindle made of hardened stainless steel; measuring spindle mounted in a high-precision ball guide for minimal hysteresis, insensitive to lateral forces acting on the spindle; adjustable tolerance markers and clear-cut scale; spindle raised by way of a screw-in cable or lifting knob.

Exploded Views of Photographs |
Wherever required, exploded views of the instruments are also shown.
Fig. 11.14 Two-wire method (wire of dia. 'd', diameter over wires Dm, effective diameter, dimension under the wires 'T', thread angle θ)

where T is the dimension under the wires and
T = Dm − 2d
d = diameter of wire

For measuring the dimension T, the wires are placed over a standard cylinder of diameter greater than the diameter under the wires, and the corresponding reading is noted as r1 and the reading over the gauges as r2.
Then, T = P − (r1 − r2)
where P = the value which should be added to the diameter under the wires for calculating the effective diameter, and which also depends upon the diameter of the thread and the pitch of the thread (pitch value).
Now refer Fig. 11.14. BC lies on the effective diameter line.
BC = ½ pitch = ½ p
OP = d cosec(θ/2) / 2
AP = d(cosec(θ/2) − 1) / 2
PQ = QC cot(θ/2) = (p/4) cot(θ/2)
AQ = PQ − AP = (p/4) cot(θ/2) − d(cosec(θ/2) − 1)/2
AQ has a value half of P.

The methods employed are as follows:
000/000 for deviation of perpendicularity, which are the ratios
000 for any length of 000 for deviation of straightness and parallelism—this expression is used for local permissible deviation, the measuring length being obligatory
000 for constant value of deviation of straightness and parallelism—this expression is used to recommend a measuring length, but in case the proportionality rule comes into operation, the measuring length differs from those indicated.

5.3 MACHINE-TOOL TESTING
5.3.1 Alignment Testing of Lathe

Table 5.1 Specifications of alignment testing of lathe

Sl. No. | Test Item | Measuring Instruments | Permissible Error (mm)
1. | Levelling of machines (straightness of slideway—carriage): (a) longitudinal direction—straightness of slideways in vertical plane; (b) in transverse direction | Precision level or any other optical instruments | 0.01 to 0.02
2. | Straightness of carriage movement in horizontal plane, or possibly in a plane defined by the axis of centres and tool point (whenever test (b) is carried out, test (a) is not necessary) | Dial gauge and test mandrel or straight edges with parallel faces, between centres | 0.015 to 0.02
3. | Parallelism of tailstock movement to the carriage movement: (a) in horizontal plane, and (b) in vertical plane | Dial gauge | 0.02 to 0.04

21.6.1 Measurement of Bending Strain
Consider measuring the bending strain in a cantilever. If the two gauges are inserted into a half-bridge circuit as shown, and remembering that in tension the resistance will increase by ΔR and in compression the resistance will decrease by the same amount, we can double the sensitivity to bending strain and eliminate sensitivity to temperature.

Fig. 21.12 Measuring the bending strain in a cantilever (gauge in tension T, gauge in compression C)

The output is given by Vo = (V/2) × (ΔR/R)
(i.e., the output is double that from a quarter-bridge circuit).
Further, you can demonstrate that if the resistance of both gauges increases (due to temperature or axial strain) then the output voltage remains unaffected (try it by putting the resistance of gauge C as R + ΔR).

Fig. 21.13 Circuit diagram (T: R + ΔR, C: R − ΔR, excitation voltage V)

21.6.2 Measurement of Axial Strains
In practice, four gauges are used, two of which measure the direct strain and are placed opposite each other in the bridge (thereby doubling sensitivity). Two more gauges are mounted at right angles (thereby not sensitive to the axial strain required) or on an unstrained sample of the same material to provide temperature compensation. The arrangements are shown in Fig. 21.14. Care must be taken in the angular alignment of the gauges on the sample.

Fig. 21.14 Measurement of axial strains (gauges R1–R4, excitation voltage V, output Vo)

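The half-bridge relation Vo = (V/2)(ΔR/R) and its temperature-cancelling property can be checked numerically from the general Wheatstone-bridge voltage-divider expression. This is a sketch; the gauge resistance, ΔR and excitation voltage are illustrative values:

```python
def bridge_output(r1, r2, r3, r4, v):
    """Wheatstone-bridge output: difference between the two voltage dividers."""
    return v * (r2 / (r1 + r2) - r4 / (r3 + r4))

R, dR, V = 120.0, 0.24, 10.0   # illustrative gauge resistance (ohm), change, excitation (V)

# Half bridge for bending: gauge in tension (R + dR) above gauge in compression (R - dR)
vo = bridge_output(R - dR, R + dR, R, R, V)
print(round(vo, 6))  # 0.01 -- equals (V / 2) * (dR / R)

# An equal rise in both gauge resistances (temperature drift) gives no output
print(bridge_output(R + dR, R + dR, R, R, V))  # 0.0
```

Doubling up with four active gauges, as in the axial-strain arrangement, doubles the output again while retaining the temperature compensation.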
Illustrations |
Illustrations are essential tools in books on engineering subjects. Ample illustrations are provided in each chapter to illustrate the concepts, functional relationships and to provide definition sketches for mathematical models.

Case Studies |
Case Studies are an important part of books on engineering subjects. Many case studies are provided in the chapters to explain the concepts and their practical significances.
1 Introduction to
Metrology

Metrology—Making Measurement Work For Us...


MANKIND MEASURES
Measurement has become a natural part of our everyday life. Planks of wood and cartons of tea are both bought by size and weight; water, electricity and heat are metered, and we feel the effect on our pockets. Bathroom scales affect our moods and sense of humour—as do police speed traps and the possible financial consequences. The quantity of active substances in medicine, blood-sample measurements, and the effect of the surgeon's scalpel must also be precise if patients' health is not to be jeopardised. We find it almost impossible to describe anything without measuring it—hours of sunshine, chest width, alcohol percentages, weights of letters, room temperatures, tyre pressures ... and so on. The pilot carefully observes his altitude, course, fuel consumption and speed; the food inspector measures bacteria content; maritime authorities measure buoyancy; companies purchase raw materials by weights and measures, and specify their products using the same units. Processes are regulated and alarms are set off because of measurements. Systematic measurement with known degrees of uncertainty is one of the foundations in industrial quality control and, generally speaking, in most modern industries, the costs incurred in taking measurements constitute 10–15% of production costs. Just for fun, try holding a conversation without using words that refer to weights or measures.

To explain the importance of measurement, Lord Kelvin said, “I often say that when you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind. It may be the beginning of knowledge but you have scarcely in your thought advanced to the stage of science.” Measurement is defined as the set of operations having the objective of determining the value of a quantity.

Science is completely dependent on measurement. Geologists measure shock waves when the gigantic forces behind earthquakes make themselves felt; astronomers patiently measure the light from distant stars in order to determine their age; atomic physicists feel jubilant when, by taking measurements in millionths of a second, they are able at last to confirm the presence of an almost infinitely small particle. The availability of measuring equipment and the ability to use them are essential if scientists are to be able to objectively document the results they achieve. The science of measurement, metrology, is probably the oldest science in the world and knowledge of how it is applied is a fundamental necessity in practically all science-based professions! Measurement requires common knowledge.

Metrology is hardly ostentatious and the calm surface it shows covers vast areas of knowledge that only a few are familiar with, but which most make use of, confident that they are sharing a common perception of what is meant by expressions such as metre, kilogram, litre, watt, etc. Mankind has thousands of years of experience to confirm that life really does become easier when people cooperate on metrology.

Metrology is a word derived from two Greek words: Metro–Measurement, Logy–Science. Metrology includes all aspects with reference to measurements, whatever their level of accuracy.

1.1 DEFINITIONS OF METROLOGY

i. Metrology is the field of knowledge concerned with measurement and includes both theoretical
and practical problems with reference to measurement, whatever their level of accuracy and in
whatever fields of science and technology they occur. (Source: BS 5233:1975).
ii. Metrology is the science of measurement.
iii. Metrology is the science of weights and measures.
iv. Metrology is the process of making extremely precise measurements of the relative positions
and orientations of different optical and mechanical components.
v. Metrology is the documented control that all equipment is suitably calibrated and maintained in
order to perform as intended and to give reliable results.
vi. Metrology is the science concerned with the establishment, reproduction, conversion and trans-
fer of units of measurements and their standards.

The principal fields of metrology and its related applications are as follows:

a. Establishing units of measurement and their standards such as their establishment, reproduction,
conservation, dissemination and quality assurance
b. Measurements, methods, execution, and estimation of their accuracy
c. Measuring instruments—Properties examined from the point of view of their intended purpose
d. Observers’ capabilities with reference to making measurements, e.g., reading of instrument in-
dications
e. Design, manufacturing and testing of gauges of all kinds

1.2 TYPES OF METROLOGY

Metrology is separated into the following categories, with different levels of complexity and accuracy:

1. Scientific Metrology deals with the organization and development of measurement stan-
dards and with their maintenance (highest level).

2. Industrial Metrology has to ensure the adequate functioning of measuring instruments


used in industry as well as in production and testing processes. The metrological activities, testing and
measurements are generally valuable inputs to work with quality in industrial activities. This includes the
need for traceability, which is becoming just as important as measurement itself. Recognition of met-
rological competence at each level of the traceability chain of standards can be established by mutual
recognition agreements or arrangements.

3. Legal Metrology is concerned with the accuracy of measurements where these have influ-
ence on the transparency of economical transactions, and health and safety, e.g., the volume of petrol
purchased at a pump or the weight of prepackaged flour. It seeks to protect the public against inaccu-
racy in trade. It includes a number of international organizations aiming at maintaining the uniformity
of measurement throughout the world. Legal metrology is directed by a national organization which is
known as National Service of Legal Metrology.
The functions of legal metrology are to ensure the conservation of national standards and to guaran-
tee their accuracy by comparison with international standards; to regulate, advise, supervise and con-
trol the manufacture and calibration of measuring instruments; to inspect the use of these instruments
with measurement procedures for public interest; to organize training sessions on legal metrology and
to represent a country in international activities related with metrology.

4. Fundamental Metrology may be described as scientific metrology, supplemented by those


parts of legal and industrial metrology that require scientific competence. It signifies the highest level
of accuracy in the field of metrology.
Fundamental metrology is divided in accordance with the following eleven fields: mass, electricity,
length, time and frequency, thermometry, ionizing radiation and radioactivity, photometry and radiom-
etry, flow, acoustics, amount of substance and interdisciplinary metrology.

1.3 NEED OF INSPECTION

Inspection is necessary to check all materials, products, and component parts at various stages during
manufacturing, assembly, packaging and installation in the customer’s environment. It is the quality-
assurance method that compares materials, products or processes with established standards. When the
production rate is on a smaller scale, parts are made and assembled by a single manufacturing cell. If
the parts do not fit correctly, the necessary adjustments can be made within a short period of time. The
changes can be made to either of the mating parts in such a way that each assembly functions correctly.
For large-scale manufacturing, it is essential to make parts exactly alike, or with the same accuracy.
These accuracy levels need to be endorsed frequently. The recent industrial mass-production system is
based on interchangeability. The products that are manufactured on a large scale are categorised into

various component parts, thus making the production of each component an independent process.
Many of these parts are produced in-house while some parts are purchased from outside sources and
then assembled at one place. It becomes very necessary that any part chosen at random fits correctly with
other randomly selected mating parts. For it to happen, the dimensions of component parts are made
with close dimensional tolerances and inspected at various stages during manufacturing. When large
numbers of identical parts are manufactured on the basis of interchangeability, actual dimension mea-
surement is not required. Instead, to save time, gauges are used which can assure whether the manufac-
tured part is within the prescribed limits or not. If the interchangeability is difficult to maintain, assorted
groups of the product are formed. In such a case, the mating parts are grouped according to their
dimensional variations. For example, if shafts are made within the range of 59.95 mm to 60.05 mm,
and if the diameters of bearing holes are made within the range of 60.00 mm to 60.10 mm, then the shafts
are grouped into sizes of 59.95 mm to 60.00 mm and 60.01 mm to 60.05 mm. Similarly, two bearing-hole
groups are formed, with sizes of 60.00 mm to 60.05 mm and 60.06 mm to 60.10 mm. The lower-sized shaft
group gets assembled with the lower-sized hole group, and the higher-sized shaft group gets assembled
with the higher-sized hole group. This is known as selective assembly, which demands inspection at every
stage of manufacturing and makes the assemblies feasible for any odd combinations by controlling the
assembly variations in terms of loose (clearance) fit or tight (interference) fit.
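The grouping logic described above can be sketched in a few lines. The function name and return values are illustrative; the size boundaries are taken from the example:

```python
def assign_group(size, low, split, high):
    """Place a part in a selective-assembly group, or reject it (None)."""
    if not (low <= size <= high):
        return None                      # outside the manufactured range
    return "lower" if size <= split else "higher"

# Boundaries from the example: shafts 59.95-60.05 mm, holes 60.00-60.10 mm
print(assign_group(59.97, 59.95, 60.00, 60.05))  # lower  (shaft)
print(assign_group(60.08, 60.00, 60.05, 60.10))  # higher (hole)
```

A lower-group shaft is then mated only with a lower-group hole (and likewise for the higher groups), which keeps the clearance of every assembly within the designed range.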
The inspection activity is required to

i. ensure the material, parts, and components conform to the established standards,
ii. meet the interchangeability of manufacture,
iii. provide the means of finding the problem area for not meeting the established standards,
iv. produce the parts having acceptable quality levels with reduced scrap and wastages,
v. purchase good quality of raw materials, tools, and equipments that govern the quality of finished
products,
vi. take necessary efforts to measure and reduce the rejection percentage for forthcoming production
batches by matching the technical specification of the product with the process capability, and
vii. judge the possibility of rework of defective parts and re-engineer the process.

1.4 METROLOGICAL TERMINOLOGIES

Many companies today are concerned with quality management or are in the process of introducing
some form of quality system in their work. This brings them into contact with quality standards such
as EN 45001–General Criteria for the Operation of Testing Laboratories, or with the standards in
the ISO 9000 series or the DIN system. A feature common to all quality standards is that they specify
requirements in respect of measurements and their traceability.
The quality context employs a number of measurement technology terms that can cause difficulties if
their meanings are not correctly understood.
Accuracy is the closeness of agreement between a test result and the accepted reference value [ISO 5725].
Bias is the difference between the expectation of the test results and an accepted reference value
[ISO 5725].

Calibration is a set of operations that establish, under specified conditions, the relationship between
values of quantities indicated by a measuring instrument or values represented by a material measure
and the corresponding values realized by standards. The result of a calibration may be recorded in a
document, e.g., a calibration certificate. The result can be expressed as corrections with respect to the
indications of the instrument.
Confirmation is a set of operations required to ensure that an item of measuring equipment is in a state
of compliance with requirements for its intended use. Metrological confirmation normally includes, for
example, calibration, any necessary adjustment or repair and subsequent recalibration, as well as any
required sealing and labelling.
Correction is the value which, added algebraically to the uncorrected result of a measurement, com-
pensates for an assumed systematic error. The correction is equal to the assumed systematic error, but
of the opposite sign. Since the systematic error cannot be known exactly, the correction is subject to
uncertainty.
Drift is a slow change of a metrological characteristic of a measuring instrument.
Error of a measuring instrument is the indication of a measuring instrument minus a ‘true’ value of
the corresponding input quantity, i.e., the error has a sign.
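The sign conventions in the definitions of correction and error above can be illustrated with a small numeric sketch (the readings are illustrative):

```python
true_value = 10.000   # 'true' value of the input quantity (mm), illustrative
indication = 10.004   # instrument reading (mm), illustrative

error = indication - true_value   # error of the instrument (carries a sign)
correction = -error               # correction: equal magnitude, opposite sign
print(round(error, 3))                    # 0.004
print(round(indication + correction, 3))  # 10.0 -- corrected result
```

In practice the systematic error is only estimated, so the correction itself carries an uncertainty, as the definition notes.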
Expectation of the measurable quantity is the mean of a specified population of measurements.
Fiducial error (of a measuring instrument) is the error of a measuring instrument divided by a (fiducial)
value specified for the instrument. Fiducial value can be the span or upper limit of a nominal range of
a measuring instrument.
Group standard is a set of standards of chosen values that, individually or in combination, provide a
series of values of quantities of the same kind.
Inspection involves measurement, investigation or testing of one or more characteristics of a product,
and includes a comparison of the results with specified requirements in order to determine whether the
requirements have been fulfilled.
Magnification In order to measure small difference in dimensions, the movement of the measuring tip
in contact with work must be magnified and, therefore, the output signal from a measuring instrument is
to be magnified many times to make it more readable. In a measuring instrument, magnification may be
either mechanical, electrical, electronic, optical, pneumatic principle or a combination of these.
Measurand is a particular quantity subject to measurement.
National (measurement) standard is a standard recognized by a national decision to serve, in a coun-
try, as the basis for assigning values to other standards of the quantity concerned.
Nominal value is a rounded or approximate value of a characteristic of a measuring instrument that
provides a guide to its use.
Precision is the closeness of agreement between independent test results obtained under stipulated
conditions [ISO 5725].

Range is the capacity within which an instrument is capable of measuring.


Readability refers to the ease with which the readings of a measuring instrument can be read. It is
the susceptibility of a measuring device to have its indicators converted into meaningful numbers. If the
graduation lines are very finely spaced, the scale will be more readable by using a microscope, but the
readability will be poor with the naked eye.
Reference, accepted value serves as an agreed-on reference for comparison, and which is derived
as theoretical or established value, based on scientific principles; an assigned or certified value, based
on experimental work of some national or international organization; or consensus or certified value,
based on collaborative experimental work under the auspices of a scientific or engineering group, when
these are not available according to the expected value of the measurable quantity.
Repeatability conditions are where independent test results are obtained with the same method on
identical test items in the same laboratory by the same operator using the same equipment within short
intervals of time [ISO 5725].
Reproducibility is a precision under reproducibility conditions.
Reproducibility conditions are where test results are obtained with the same method on identical test
items in different laboratories with different operators using different equipment.
Response time is the time which elapses after a sudden change of the measured quantity until the
instrument gives an indication different from the true value by an amount less than the given permis-
sible value.
Resolution is the smallest change of the measured quantity which changes the indication of a measur-
ing instrument.
Sensitivity of the instrument denotes the smallest change in the value of the measured variable to
which the instrument responds. In other words, sensitivity denotes the minimum change in the input
signal required to initiate a response at the output.
Stability refers to the ability of a measuring instrument to constantly maintain its metrological characteristics with time.
Standard (etalon) is a material measure, measuring instrument, reference material or measuring system
intended to define, realise, conserve or reproduce a unit or one or more values of a quantity to serve
as a reference.
Standardization is a process of formulating and applying rules for orderly approach to a specific activ-
ity for the benefit and with the cooperation of all concerned in particular. This is done for the promo-
tion of overall economy, taking due account of functional conditions and safety requirements.
Testing is a technical investigation, e.g., as to whether a product fulfils its specified performance.
Traceability means that a measured result can be related to stated references, usually national or inter-
national standards, through an unbroken chain of comparisons, all having stated uncertainties.

Trueness is the closeness of agreement between the average value obtained from a large series of test
results and an accepted reference value [ISO 5725]. The measure of trueness is usually expressed in
terms of bias.
Uncertainty of measurement is a parameter, associated with the result of a measurement that charac-
terises the dispersion of the values that could reasonably be attributed to the measurand. It can also be
expressed as an estimate characterizing the range of values within which the true value of a measurand
lies. When specifying the uncertainty of a measurement, it is necessary to indicate the principle on
which the calculation has been made.
Verification is an investigation that shows that specified requirements are fulfilled.

1.5 PRINCIPAL ASPECTS OF MEASUREMENT

Accuracy Accuracy is the degree to which the measured value of the quality characteristic agrees
with the true value. The accuracy of a method of measurement is referred to its absence of bias to the
conformity of results to the true value of quality characteristics being measured. As the exact measure-
ment of a true value is difficult, a set of observations are made whose mean value is taken as the true
value of the quantity to be measured. Errors may creep in while measuring the various attributes of the
workpiece, such as dimensions, hardness, tensile strength and other quality characteristics. Therefore, the
measured value is the sum of the quantity measured and the error of the instrument. As both of them
are independent of each other, the standard deviation of the measured value is the square root of the
sum of the square of the standard deviation of the true value (σtrue) and the square of the standard
deviation of the error of measurement (σerror):

σmeasured value = √(σ²true + σ²error)

For example, a micrometer measures a part dimension as 10 mm, and if the stated accuracy is
±0.01 mm, then the true dimension may lie between 9.99 mm and 10.01 mm. Thus, an accuracy of
±0.01 mm means that the results obtained by the micrometer are uncertain within ±0.01 mm of the
measured value (a 0.1% error in the instrument).

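The root-sum-square combination above can be illustrated with a quick sketch (the two σ values are illustrative):

```python
import math

sigma_true = 0.020   # std. deviation of the true value (mm), illustrative
sigma_error = 0.015  # std. deviation of the measurement error (mm), illustrative

# Independent sources combine as the square root of the sum of squares
sigma_measured = math.sqrt(sigma_true**2 + sigma_error**2)
print(round(sigma_measured, 4))  # 0.025
```

Note that the combined spread is always larger than either contribution alone, but smaller than their plain sum.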
Precision Precision is the degree of repeatability in the measuring process. Precision of a method
of measurement refers to its variability when used to make repeated measurements under carefully
controlled conditions. A numerical measure of a precision is the standard deviation of the frequency
distribution that would be obtained from such repeated measurements. This is referred as σerror .
Precision is mainly achieved by selecting a correct instrument technology for application. The general
guideline for determining the right level of precision is that the measuring device must be ten times
more precise than the specified tolerances, e.g., if the tolerance to be measured is ±0.01 mm, the mea-
suring device must have a precision of ±0.001 mm. The master gauge applied should be ten times more
precise than the inspection device.

1.6 METHODS OF MEASUREMENTS

Measurement is a set of operations done with the aim of determining the value of a quantity which
can be measured by various methods of measurements depending upon the accuracy required and the
amount of permissible error.
The methods of measurements are classified as follows:

1. Direct Method This is the simplest method of measurement in which the value of the quan-
tity to be measured is obtained directly without any calculations, e.g., measurements by scales, vernier
calipers, micrometers for linear measurement, bevel protractor for angular measurement, etc. It involves
contact or non-contact type of inspections. In case of contact type of inspections, mechanical probes
make manual or automatic contact with the object being inspected. On the other hand, the non-contact
type of method utilizes a sensor located at a certain distance from the object under inspection. Human
insensitiveness can affect the accuracy of measurement.

2. Indirect Method The value of the quantity to be measured is obtained by measuring other
quantities which are functionally related with the required value, e.g., angle measurement by sine bar,
three-wire method for measuring the screw pitch diameter, density calculation by measuring mass and
dimensions for calculating volume.

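The sine-bar case, for instance, reduces to one line of trigonometry: the angle is obtained indirectly from a measured slip-gauge height h and the fixed distance L between the roller centres. A sketch with illustrative values:

```python
import math

L = 200.0   # centre distance of the sine-bar rollers (mm)
h = 50.0    # slip-gauge stack height under one roller (mm), illustrative

angle_deg = math.degrees(math.asin(h / L))   # indirect measurement of the angle
print(round(angle_deg, 4))  # 14.4775 degrees
```

Here only h and L are measured directly; the angle itself is computed, which is what makes the method indirect.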
3. Absolute Method This is also called fundamental method and is based on the measurement of
the base quantities used to define a particular quantity, e.g., measuring a quantity (length) directly in
accordance with the definition of that quantity (definition of length in units).

4. Comparison Method The value of a quantity to be measured is compared with a known


value of the same quantity or another quantity related to it. In this method, only deviations from master
gauges are noted, e.g., dial indicators or other comparators.

5. Substitution Method The quantity is measured by direct comparison on an indicating device


by replacing the measurable quantity with another which produces the same effect on the indicating
device, e.g., measuring a mass by means of the Borda method.

6. Coincidence Method It is also called the differential method of measurement. In this, there
is a very small difference between the value of the quantity to be measured and the reference. The refer-
ence is determined by the observation of the coincidence of certain lines or signals, e.g., measurement
by vernier calipers (LC × vernier scale reading) and micrometer (LC × circular scale reading).
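The reading rule quoted for the vernier caliper amounts to the following (a sketch; the scale values are illustrative):

```python
least_count = 0.02   # mm; e.g., 1 mm main-scale division / 50 vernier divisions
main_scale = 25.0    # main-scale reading (mm), illustrative
coinciding = 13      # vernier division that coincides with a main-scale line

reading = main_scale + least_count * coinciding
print(round(reading, 2))  # 25.26 mm
```

The coincidence of one vernier line with a main-scale line is what resolves the small difference below one main-scale division.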

7. Transposition Method It is the method of measurement by direct comparison in which
the value of the quantity measured is first balanced by an initial known value P of the same quantity.
Then the value of the quantity measured is put in place of that known value and is balanced again by
another known value Q. If the position of the element indicating equilibrium is the same in both cases,
the value of the quantity to be measured is √(PQ), e.g., determination of a mass by means of a balance
and known weights, using the Gauss double weighing method.
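As a numerical illustration of the transposition result √(PQ) above (the function name and the sample weight values below are illustrative, not from the text):

```python
import math

def transposition_mass(P, Q):
    """Transposition (Gauss double weighing): the unknown is balanced first
    by a known value P, then by a known value Q after interchange; the true
    value is the geometric mean sqrt(P * Q)."""
    return math.sqrt(P * Q)

# Illustrative balancing weights, in grams.
P, Q = 100.2, 99.8
print(round(transposition_mass(P, Q), 4))  # ≈ 99.9998, close to (P + Q) / 2
```

For small differences between P and Q, √(PQ) is nearly the arithmetic mean, which is why the method effectively cancels the inequality of the balance arms.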

8. Deflection Method The value of the quantity to be measured is directly indicated by the
deflection of a pointer on a calibrated scale, e.g., dial indicator.

9. Complementary Method The value of the quantity to be measured is combined with
a known value of the same quantity, e.g., determination of the volume of a solid by liquid displacement.

10. Method of Null Measurement It is a method of differential measurement. In this method,
the difference between the value of the quantity to be measured and the known value of the same quantity
with which it is compared is brought to zero (null), e.g., measurement by potentiometer.

1.7 MEASURING INSTRUMENTS AND THEIR SELECTION

Transformation of a measurable quantity into the required information is a function of measuring
instruments. The important characteristics which govern the selection of instruments are measuring
range, accuracy and precision. No measuring instrument can be built that has perfect accuracy and
perfect precision. The usage of measuring instruments depends on the range of application; e.g., to
avoid poor accuracy at the lower end of a scale, the instrument to be used should be highly accurate and
have a large range of measurement. Alternatively, two instruments with different ranges may be
used—one for the lower range and another for the full range. The precision of an instrument is
an important feature as it gives repeatable readings with required accuracy levels.
Steel rules, vernier calipers, micrometers, height gauges, etc., are commonly used for length measure-
ment. But there are a number of other instruments that are also used for length measurements. Measur-
ing instruments are also developed for measuring such dimensional features like angle, surface finish,
form, etc. Resolution, or sensitivity, is also an important aspect to be considered for selecting instruments
for measurement purposes as it represents the smallest change in the measured quantity which can
reproduce a perceptible movement of the pointer on a calibrated scale. Generally, measuring instru-
ments are classified as follows:

i. On the basis of function
a. Length-measuring instruments
b. Angle-measuring instruments
c. Surface-roughness-measuring instruments
d. Geometrical-form-checking instruments
ii. On the basis of accuracy
a. Most accurate instruments
b. Moderate accurate instruments
c. Below-moderate accurate instruments

iii. On the basis of precision
a. Precision measuring instruments
b. Non-precision measuring instruments

1.7.1 Factors Affecting Accuracy of Measuring Instruments
1. Standards of Calibration for Setting Accuracy Traceability, calibration methods,
coefficient of thermal expansion, elastic properties of measuring instruments, geometric compatibility

2. Workpiece Control during Measurement Cleanliness, surface finish, waviness, scratch
depth, surface defects, hidden geometry, definable datum(s), thermal stability

3. Inherent Characteristics of Measuring Instrument Range of scale, amplification
(amplifying system functioning within the prescribed limit of the instrument), effect of friction, hysteresis
loss, backlash, drift error, handling, calibration errors, readability, repeatability of measurement, sensitivity,
contact geometry, thermal expansion effects

4. Inspector (Human Factor) Skill, training, awareness of precision measurement, selection
of instruments, working attitude, socio-economic awareness, consistent efforts towards minimizing
inspection time and cost

5. Environmental Conditions Noise, vibration, temperature, humidity, electrical parameter
variations, adequate lighting, atmospheric refraction, clean surroundings
To ensure higher accuracy during measuring, the above sources of error are required to be analyzed
frequently and necessary steps should be taken to eliminate them.

1.8 ERRORS IN MEASUREMENT

The error in measurement is the difference between the measured value and the true value of the mea-
sured dimension. Error may be absolute or relative.
Error in Measurement = Measured Value − True Value
The actual value or true value is a theoretical size of dimension free from any error of measurement
which helps to examine the errors in a measurement system that lead to uncertainties. Generally, the
errors in measurements are classified into two types—one, which should not occur and can be
eliminated by careful work and attention; and the other, which is inherent in the measuring process/
system. Therefore, the errors are either controllable or random in occurrence.

Absolute Error
It is divided into two types:

True Absolute Error It is defined as the algebraic difference between the result of measurement
and the conventional true value of the quantity measured.

Apparent Absolute Error It is defined as the algebraic difference between the arithmetic mean and
one of the results of measurement when a series of measurements are made.
Absolute Error (EA)
∴ Absolute Error = Actual Value − Approximate Value
If actual value = x and
approximate value = x + dx, then
Absolute Error = dx

Relative Error
It is the quotient of the absolute error and the value of comparison (may be true value or the arithmetic
mean of a series of measurements) used for calculation of the absolute error.
It is an error with respect to the actual value.

Relative Error = (Actual Value − Approximate Value) / Actual Value

For the above example,

Relative Error = dx/x

Percentile Error (Ep) Relative error expressed in percentage form.

Percentile Error = [(Actual Value − Approximate Value) / Actual Value] × 100

Percentile Error = (Absolute Error / Actual Value) × 100

Percentile Error = Relative Error × 100
For example, if a number has actual value = 0.8597 and approximate value = 0.85, calculate the abso-
lute, relative and percentile error.
Absolute Error = 0.8597 − 0.85 = 0.0097
Relative Error = (0.8597 − 0.85)/0.8597 = 0.0097/0.8597 = 0.011283
Percentile Error = Relative Error × 100 = 0.011283 × 100 = 1.1283%
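The three definitions above can be verified with a short script (the function name and rounding choices are illustrative):

```python
def measurement_errors(actual, approximate):
    """Return (absolute, relative, percentile) error per the definitions:
    absolute = actual - approximate; relative = absolute / actual;
    percentile = relative * 100."""
    absolute = actual - approximate
    relative = absolute / actual
    return absolute, relative, relative * 100

# Worked example from the text: actual = 0.8597, approximate = 0.85.
abs_e, rel_e, pct_e = measurement_errors(0.8597, 0.85)
print(round(abs_e, 4), round(rel_e, 6), round(pct_e, 4))
# absolute ≈ 0.0097, relative ≈ 0.011283, percentile ≈ 1.1283 %
```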

Static Error
These are the result of physical nature of the various components of a measuring system, i.e., intrinsic
imperfection or limitations of apparatus/instrument. Static error may occur due to existence of either
characteristic errors or reading errors or environmental errors, as the environmental effect and other
external factors influence the operating capabilities of an instrument or inspection procedure. This
error can be reduced or eliminated by employing relatively simple techniques.

a. Reading Error These types of errors apply exclusively to instruments. These errors may be the
result of parallax, optical resolution/readability, and interpolation.
Parallax error creeps in when the line of sight is not perpendicular to the measuring scale. The mag-
nitude of parallax error increases if the measuring scale is not made flush to the component. This may
be one of the common causes of error. It occurs when either the scale and pointer of an instrument
are not in the same plane or the line of vision is not in line of the measuring scale.
In Fig. 1.1, let Y be the distance between the pointer and the eye of the observer, X be the separation
distance of the scale and the pointer, and θ be the angle between the line of sight and the normal to
the scale.

Now, (PA)/(NE) = X/(X − Y)

and the error will be

(PA) = {X/(X − Y)} {(X − Y) tan θ}

Error = X tan θ

Fig. 1.1 Parallax error

Generally, θ is very small,

∴ tan θ ≈ θ and E = X θ

For least error, X should be as small as possible. This error can be eliminated by placing a mirror
behind the pointer, which helps to ensure normal reading of the scale.
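A small numerical sketch of the parallax relation E = X tan θ ≈ Xθ (the sample values are illustrative):

```python
import math

def parallax_error(X, theta_deg):
    """Parallax error E = X * tan(theta); for small theta (in radians),
    E ≈ X * theta. Returns (exact, small-angle) values."""
    theta = math.radians(theta_deg)
    return X * math.tan(theta), X * theta

# Illustrative: pointer 2 mm above the scale, line of sight 5° off the normal.
exact, small_angle = parallax_error(2.0, 5.0)
print(round(exact, 4), round(small_angle, 4))  # ≈ 0.175 vs ≈ 0.1745 mm
```

The closeness of the two results shows why the small-angle approximation E = Xθ is adequate in practice, and why minimizing X (scale-to-pointer gap) is the effective remedy.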

b. Alignment Error This occurs if the measuring instrument is not correctly aligned with the
direction of the desired measurement. In Fig. 1.2 (a), the dimension D is being measured with a dial
indicator. But the dial indicator plunger is not held vertical and makes an angle θ with the line of mea-
surement. This leads to misalignment error getting introduced in the measurement, which has a value
equal to D(1 – cos θ). To avoid the alignment error, Abbe’s alignment principle is to be followed. It
states that the axis or line of measurement should coincide with the axis of the measuring instrument or the line of the
measuring scale.
Now consider Fig. 1.2 (b). While measuring the length of a workpiece, the measuring scale is inclined
to the true line of dimension being measured and there will be an error in the measurement. The length L
measured will be more than the true length, which will be equal to L cos θ. This error is called cosine
error. In many cases the angle θ is very small and the error will be negligible.
Fig. 1.2 Alignment error: (a) dial indicator inclined at θ to dimension D; (b) scale inclined at θ, true length L cos θ
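Both the alignment error D(1 − cos θ) and the cosine error L − L cos θ described above can be evaluated with a short sketch (the sample values are illustrative):

```python
import math

def alignment_error(D, theta_deg):
    """Dial-indicator misalignment error D(1 - cos θ), as in Fig. 1.2 (a)."""
    return D * (1 - math.cos(math.radians(theta_deg)))

def cosine_error(L, theta_deg):
    """Cosine error: measured length L minus true length L cos θ, Fig. 1.2 (b)."""
    return L - L * math.cos(math.radians(theta_deg))

# Illustrative values: a 50 mm dimension measured with 2° of misalignment.
print(round(alignment_error(50.0, 2.0), 4))  # error in mm, ≈ 0.0305
print(round(cosine_error(50.0, 2.0), 4))     # same magnitude, ≈ 0.0305
```

Both errors share the factor (1 − cos θ), which is why they shrink rapidly as θ → 0 and are usually negligible for well-aligned setups.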

c. Characteristic Error It is the deviation of the output of the measuring system from the theo-
retical predicted performance or from the nominal performance specifications. Linearity, repeatability,
hysteresis and resolution errors are examples of characteristic error.

d. Environmental Error These are the errors arising from the effect of the surrounding tempera-
ture, pressure and humidity on the measuring system. Magnetic and electric fields, nuclear radiations,
vibration or shocks may also lead to errors. Environmental error can be controlled by controlling the
atmospheric factors.

Loading Error The part to be measured is located on the surface table (datum for comparison
with standards). If the datum surface is not flat or if foreign matter like dirt, chips, etc., get entrapped
between the datum and workpiece then an error will be introduced while taking readings, as shown
in Fig. 1.3.
Also, poor contact between the working gauge or the instrument and the workpiece causes an error, as
shown in Fig. 1.4. To avoid such an error, an instrument with a wide area of contact should not be used

Fig. 1.3 Instrument surface displacement Fig. 1.4 Error due to poor contact

while measuring irregular or curved surfaces, and the correct contact pressure must be applied. Therefore,
instrument loading error is the difference between the value of the measurand before and after the
measuring system is connected or contacted for measurement.

Dynamic Error It is caused by time variation in the measurand. It is the result of incapability of
the system to respond reliably to time-varying measurement. Inertia, damping, friction or other physical
constraints in sensing or readout or the display system are the main causes of dynamic errors.
Analysis of accumulation of error by the statistical method categorizes errors as controllable and
random errors.

Controllable Error
These are controllable in both magnitude and sense. These types of errors are regularly repetitive in
nature and of similar form, and can be reduced effectively through systematic analysis. These errors are
also called systematic errors.
Controllable errors include the following:

a. Calibration Error These are caused due to the variation in the calibrated scale from its normal
indicating value. The length standard, such as the slip gauge, will vary from the nominal value by a small
amount. This will cause a calibration error of constant magnitude.

b. Stylus Pressure Error Too small or too large a pressure applied on a workpiece while mea-
suring causes stylus pressure error, which results in an appreciable deformation of the stylus and the
workpiece.

c. Avoidable Error These errors occur due to parallax, non-alignment of workpiece centres, incor-
rect location of measuring instruments for temporary storage, and misalignment of the centre line of
a workpiece.

Random Error Random errors are accidental, non-consistent in nature and as they occur ran-
domly, they cannot be eliminated since no definite cause can be located. It is difficult to eliminate such
errors that vary in an unpredictable manner. Small variations in the position of setting standards and
the workpiece, slight displacement of lever joints in instruments, transient fluctuations in friction in mea-
suring instruments and pointer-type display, or in reading engraved scale positions are the likely sources
of this type of error.

1.9 UNITS OF MEASUREMENT

On 23 September, 1999, the Mars Climate Orbiter was lost during an orbit injection maneuver when
the spacecraft crashed onto the surface of Mars. The principal cause of the mishap was traced to a
thruster calibration table in which British units were used instead of metric units. The software for

celestial navigation at the Jet Propulsion Laboratory expected the thruster impulse data to be expressed
in newton seconds, but Lockheed Martin Astronautics in Denver, which built the orbiter, provided the
values in pound-force seconds, causing the impulse to be interpreted as roughly one-fourth its actual
value. This reveals the importance of using a common unit of measurement. The historical perspective
in this regard must be seen for further study of metrology.
The metric system was one of the many reforms introduced in France during the period between
1789 and 1799, known for the French Revolution. The need for reform in the system of weights and
measures, as in other affairs, had long been recognized and this aspect of applied science affected the
course of human activity directly and universally.
Prior to the metric system, there had existed in France a disorderly variety of measures, such as for
length, volume, or mass, that were arbitrary in size and varied from one town to the next. In Paris, the
unit of length was the Pied de Roi and the unit of mass was the Livre poids de marc. However, all attempts
to impose the Parisian units on the whole country were fruitless, as the guilds and nobles who benefited
from the confusion opposed this move.
The advocates of reform sought to guarantee the uniformity and permanence of the units of mea-
sure by taking them from properties derived from nature. In 1670, the abbé Gabriel Mouton of Lyons
proposed a unit of length equal to one minute of an arc on the earth’s surface, which he divided into
decimal fractions. He suggested a pendulum of specified period as a means of preserving one of these
submultiples.
The conditions required for the creation of a new measurement system were made possible by
the French Revolution. In 1787, King Louis XVI convened the Estates General, an institution that
had last met in 1614, for the purpose of imposing new taxes to avert a state of bankruptcy. As
they assembled in 1789, the commoners, representing the Third Estate, declared themselves to be
the only legitimate representatives of the people, and succeeded in having the clergy and nobility
join them in the formation of the National Assembly. Over the next two years, they drafted a new
constitution.
In 1790, Charles-Maurice de Talleyrand, Bishop of Autun, presented to the National Assembly a
plan to devise a system of units based on the length of a pendulum beating seconds at latitude 45°. The
new order was envisioned as an ‘enterprise whose result should belong some day to the whole world.’
He sought, but failed to obtain, the collaboration of England, which was concurrently considering a
similar proposal by Sir John Riggs Miller.
The two founding principles were that the system would be based on scientific observation and
that it would be a decimal system. A distinguished commission of the French Academy of Sciences,
including J L Lagrange and Pierre Simon Laplace, considered redefining the unit of length. Rejecting
the seconds pendulum as insufficiently precise, the commission defined the unit, given the name metre
in 1793, as one ten-millionth of a quarter of the earth’s meridian passing through Paris. The proposal
was accepted by the National Assembly on 26 March, 1791.
The definition of the metre reflected the extensive interest of French scientists in the shape of
the earth. Surveys in Lapland by Maupertuis in 1736 and in France by LaCaille in 1740 had refined

the value of the earth’s radius and established definitively that the shape of the earth was oblate. To
determine the length of the metre, a new survey was conducted by the astronomers Jean Baptiste
Delambre and P F A Mechain between Dunkirk in France on the English Channel, and Barcelona,
Spain, on the coast of the Mediterranean Sea. This work was begun in 1792 and completed in 1798,
with both the astronomers enduring the hardships of the ‘reign of terror’ and the turmoil of revo-
lution. The quadrant of the earth was found to be 10 001 957 metres instead of exactly 10 000 000
metres as originally proposed. The principal source of error was the assumed value of the earth's
oblateness used for the correction, which takes into account the earth's flattening at the poles.
The unit of volume, the pinte (later renamed the litre), was defined as the volume of a cube having a
side equal to one-tenth of a metre. The unit of mass, the grave (later renamed the kilogram), was defined
as the mass of one pinte of distilled water at the temperature of melting ice. In addition, the centigrade
scale for temperature was adopted with fixed points at 0°C and 100°C representing the freezing and
boiling points of water. This scale has since been renamed the Celsius scale.
The work to determine the unit of mass was begun by Lavoisier and Hauy. They discovered that the
maximum density of water occurs at 4°C and not at 0°C as had been supposed. So the definition of the
kilogram was amended to specify the temperature of maximum density. The intended mass was 0.999972 kg,
i.e., 1000.028 cm³ instead of exactly 1000 cm³ for the volume of 1 kilogram of pure water at 4°C.
The metric system was officially adopted on 7 April, 1795. The government issued a decree (Loi du
18 germinal, an III) formalizing the adoption of the definitions and terms that are in use today. A brass
bar was made by Lenoir to represent the provisional metre, obtained from the survey of LaCaille, and
a provisional standard for the kilogram was derived.
In 1799, permanent standards for the metre and kilogram, made from platinum, were constructed
based on the new survey by Delambre and Mechain. The full length of the metre bar represented the
unit. These standards were deposited in the Archives of the Republic. They became official by the act
of 10 December, 1799.
The importance of a uniform system of weights and measures was recognized in the United States,
as in France. Article I, Section 8, of the US Constitution provides that the Congress shall have the
power “to coin money ... and fix the standard of weights and measures.” However, although the pro-
gressive concept of decimal coinage was introduced, the early American settlers both retained and cul-
tivated the customs and tools of their British heritage, including the measures of length and mass.
A series of international expositions in the middle of the nineteenth century enabled the French
government to promote the metric system for world use. Between 1870 and 1872, with an interrup-
tion caused by the Franco-Prussian War, an international meeting of scientists was held to consider the
design of new international metric standards that would replace the metre and kilogram of the French
Archives. A Diplomatic Conference on the Metre was convened to ratify the scientific decisions. Formal

international approval was secured by the Treaty of the Metre, signed in Paris by the delegates of 17
countries, including the United States, on 20 May, 1875.
The treaty established the International Bureau of Weights and Measures (BIPM). It also provided
for the creation of an International Committee for Weights and Measures (CIPM) to run the Bureau
and the General Conference on Weights and Measures (CGPM) as the formal diplomatic body that
would ratify changes as the need arose. The French government offered the Pavillon de Breteuil, once
a small royal palace, to serve as headquarters for the Bureau in Sevres, France, near Paris. The grounds
of the estate form a tiny international enclave within the French territory.
A total of 30 metre bars and 43 kilogram cylinders were manufactured from a single ingot of an
alloy of 90 per cent platinum and 10 per cent iridium by Johnson, Mathey and Company of London.
The original metre and kilogram of the French Archives in their existing states were taken as the points
of departure. The standards were intercompared at the International Bureau between 1886 and 1889.
One metre bar and one kilogram cylinder were selected as the international prototypes. The remaining
standards were distributed to the signatories. The First General Conference on Weights and Measures
approved the work in 1889.
The United States received metre bars 21 and 27 and kilogram cylinders 4 and 20. On 2 January, 1890
the seals to the shipping cases for metre 27 and kilogram 20 were broken in an official ceremony at the
White House with President Benjamin Harrison presiding. The standards were deposited
in the Office of Weights and Measures of the US Coast and Geodetic Survey.
The US customary units were tied to the British and French units by a variety of indirect comparisons.
The troy weight was the standard for minting of coins. The Congress could be ambivalent about
non-uniformity in standards for trade, but it could not tolerate non-uniformity in its standards for
money. Therefore, in 1827 the ambassador to England and former Secretary of the Treasury, Albert
Gallatin secured a brass copy of the British troy pound of 1758. This standard was kept in the Phila-
delphia mint, and identical copies were made and distributed to other mints. The troy pound of
the Philadelphia mint was virtually the primary standard for commercial transactions until 1857 and
remained the standard for coins until 1911.
The semi-official standards used in commerce for a quarter century may be attributed to Ferdinand
Hassler, who was appointed superintendent of the newly organized Coast Survey in 1807. In 1832, the
Treasury Department directed Hassler to construct and distribute to the states the standards of length,
mass, and volume, and balances by which masses might be compared. As the standard of length,
Hassler adopted the Troughton scale, an 82-inch brass bar made by Troughton of London for the
Coast Survey, that Hassler had brought back from Europe in 1815. The distance between the 27th and
63rd engraved lines on a silver inlay scale down the centre of the bar was taken to be equal to the British
yard. The system of weights and measures in Great Britain had been in use since the reign of Queen
Elizabeth I. Following a reform begun in 1824, the imperial standard avoirdupois pound was made the
standard of mass in 1844, and the imperial standard yard was adopted in 1855. The imperial standards

were made legal by an Act of Parliament in 1855 and are preserved in the Board of Trade in London.
The United States received copies of the British imperial pound and yard, which became the official
US standards from 1857 until 1893.
In 1893, under a directive from Thomas C Mendenhall, Superintendent of Standard Weights and
Measures of the Coast and Geodetic Survey, the US customary units were redefined in terms of the
metric units. The primary standards of length and mass adopted were the prototype metre No. 27
and the prototype kilogram No. 20 that the United States had received in 1889 as a signatory to
the Treaty of the Metre. The yard was defined as 3600/3937 of a metre and the avoirdupois pound-
mass was defined as 0.4535924277 kilogram. The conversion for mass was based on a comparison
performed between the British imperial standard pound and the international prototype kilogram
in 1883. These definitions were used by the National Bureau of Standards (now the National
Institute of Standards and Technology) from its founding in 1901 until 1959. On 1 July, 1959, the
definitions were fixed by international agreement among the English-speaking countries to be 1 yard
= 0.9144 metre and 1 pound-mass = 0.45359237 kilogram exactly. The definition of the yard is
equivalent to the relations 1 foot = 0.3048 metre and 1 inch = 2.54 centimetres exactly.
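The exact 1959 conversion factors quoted above can be encoded directly (the function names are illustrative):

```python
# Exact values fixed by the 1959 international agreement.
METRE_PER_YARD = 0.9144
KG_PER_POUND = 0.45359237

def yards_to_metres(yd):
    """1 yard = 0.9144 metre exactly."""
    return yd * METRE_PER_YARD

def inches_to_cm(inch):
    """1 yard = 36 inches, so 1 inch = 2.54 cm exactly."""
    return inch * METRE_PER_YARD / 36 * 100

print(yards_to_metres(1))  # 0.9144
print(inches_to_cm(1))     # 2.54 (up to floating-point rounding)
```

Deriving the inch from the yard, rather than storing 2.54 separately, mirrors how the definitions are chained in the agreement itself.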
A fundamental principle was that the system should be coherent. That is, the system is founded upon
certain base units for length, mass, and time, and derived units are obtained as products or quotients
without requiring numerical factors. The metre, gram, and mean solar second were selected as base
units. In 1873, a second committee recommended a centimetre-gram-second (CGS) system of units
because in this system, the density of water is unity.
In 1889, the international prototype kilogram was adopted as the standard for mass. The prototype
kilogram is a platinum–iridium cylinder of equal height and diameter (3.9 cm) with slightly rounded
edges. For a cylinder, these dimensions present the smallest surface-area-to-volume ratio to minimize
wear. The standard is carefully preserved in a vault at the International Bureau of Weights and Measures
and is used only on rare occasions. It remains the standard till today. The kilogram is the only unit still
defined in terms of an arbitrary artifact instead of a natural phenomenon.
Historically, the unit of time, the second, was defined in terms of the period of rotation of the earth
on its axis as 1/86 400 of a mean solar day. Meaning ‘second minute’, it was first applied to timekeep-
ing in about the seventeenth century when pendulum clocks were invented that could maintain time to
this precision.
By the twentieth century, astronomers realized that the rotation of the earth is not constant.
Due to gravitational tidal forces produced by the moon on the shallow seas, the length of the day
increases by about 1.4 milliseconds per century. The effect can be measured by comparing the
computed paths of ancient solar eclipses on the assumption of uniform rotation with the recorded
locations on earth where they were actually observed. Consequently, in 1956 the second was rede-
fined in terms of the period of revolution of the earth about the sun for the epoch 1900, as rep-
resented by the Tables of the Sun computed by the astronomer Simon Newcomb of the US Naval
Observatory in Washington, DC. The operational significance of this definition was to adopt the
linear coefficient in Newcomb’s formula for the mean longitude of the sun to determine the unit
of time.

1.10 METRIC UNITS IN INDUSTRY

The International System of Units (SI) has become the fundamental basis of scientific measurement
worldwide. The United States Congress has passed legislation to encourage use of the metric system,
including the Metric Conversion Act of 1975 and the Omnibus Trade and Competitiveness Act of
1988. The space programme should have been the leader in the use of metric units in the United States
and would have been an excellent model for education, had such an initiative been taken. Burt Edelson,
Director of the Institute for Advanced Space Research at George Washington University and former
Associate Administrator of NASA, recalls that “in the mid-‘80s, NASA made a valiant attempt to
convert to the metric system” in the initial phase of the international space station programme. Eco-
nomic pressure to compete in an international environment is a strong motive for contractors to use
metric units. Barry Taylor, head of the Fundamental Constants Data Centre of the National Institute
of Standards and Technology and US representative to the Consultative Committee on Units of the
CIPM, expects that the greatest stimulus for metrication will come from industries with global markets.
“Manufacturers are moving steadily ahead on SI for foreign markets,” he says. Indeed, most satellite-
design technical literature does use metric units, including metres for length, kilograms for mass, and
newtons for force, because of the influence of international partners, suppliers, and customers.

1.10.1 SI Base Units


This system is an extension and refinement of the metric system, superior to and more convenient
than other systems. It provides one base unit for each physical quantity. It is comprehensive, as
its seven basic units cover all disciplines as mentioned below.

Table 1.1 SI base units

Unit
Quantity Name Symbol
Length metre m
Mass kilogram kg
Time second s
Electric current ampere A
Thermodynamic temperature kelvin K
Amount of substance mole mol
Luminous intensity candela cd

1.10.2 SI Derived Units


SI derived units are formed as combinations of two or more base units; such compound units are
called derived units. Some of the derived units are mentioned as follows.

Table 1.2 SI derived units

Quantity Special Name Symbol Equivalent
Plane angle radian rad 1
Solid angle steradian sr 1
Angular velocity rad/s
Angular acceleration rad/s²
Frequency hertz Hz s⁻¹
Speed, velocity m/s
Acceleration m/s²
Force newton N kg m/s²
Pressure, stress pascal Pa N/m²
Energy, work, heat joule J kg m²/s², N m
Power watt W kg m²/s³, J/s
Power flux density W/m²
Linear momentum, impulse kg m/s, N s
Electric charge coulomb C A s
Celsius temperature degree Celsius °C K

1.10.3 SI Prefixes
Table 1.3 SI prefixes used

Factor Prefix Symbol Factor Prefix Symbol

10²⁴ yotta Y 10⁻¹ deci d
10²¹ zetta Z 10⁻² centi c
10¹⁸ exa E 10⁻³ milli m
10¹⁵ peta P 10⁻⁶ micro µ
10¹² tera T 10⁻⁹ nano n
10⁹ giga G 10⁻¹² pico p
10⁶ mega M 10⁻¹⁵ femto f
10³ kilo k 10⁻¹⁸ atto a
10² hecto h 10⁻²¹ zepto z
10¹ deka da 10⁻²⁴ yocto y

The SI system is now being adopted throughout the world; one of its main features is the newton
(unit of force), which is independent of the earth's gravitation.
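As a sketch of how the prefixes in Table 1.3 are used in practice, the following formats a measured value with an appropriate prefix (the prefix subset and function name are illustrative):

```python
# A subset of the SI prefixes from Table 1.3, largest first.
PREFIXES = [(1e9, "G"), (1e6, "M"), (1e3, "k"), (1.0, ""),
            (1e-3, "m"), (1e-6, "µ"), (1e-9, "n")]

def format_si(value, unit):
    """Render a value using the largest prefix whose factor does not exceed it."""
    for factor, symbol in PREFIXES:
        if abs(value) >= factor:
            return f"{value / factor:g} {symbol}{unit}"
    return f"{value:g} {unit}"

print(format_si(2.5e6, "Hz"))  # 2.5 MHz
print(format_si(4500, "N"))    # 4.5 kN
```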

Review Questions

1. Define the term metrology and also discuss the types of metrology.
2. Differentiate between accuracy and precision.
3. List down the methods of measurement and explain any three of them in detail.
4. What are the different bases used for selection of measuring instruments?
5. State the different types of errors and explain relative error and parallax error.
6. Differentiate between systematic and random errors.
7. Explain the term cosine error with an example.
8. Write a short note on static error.
9. State the main difference between indicating and recording instruments.
10. Discuss the need for precision measurements in an engineering industry.
11. A cylinder of 80-mm diameter was placed between the micrometer anvils. Due to inaccurate placement,
the angle between the micrometer and cylinder axis was found to be 1 minute. Calculate the amount
of error in the measured diameter of the above cylinder if the micrometer anvil diameter is 6 mm. Use
suitable approximations.
12. Explain with a neat sketch the effect of poor contact, impression, expansion of workpiece and
distortion of workpiece on accuracies of measurement.
13. A test indicator is used to check the concentricity of a shaft but its stylus is so set that its movement
makes an angle of 35° with the normal to the shaft. If the total indicator reading is 0.02 mm, calculate
the true eccentricity.
14. What do you understand by the terms ‘readability’ and ‘range’, ‘repeatability’ and ‘reproducibility’,
and ‘drift’ and ‘error’?
2 Measurement Standards

“Precision is religion and measurement standards make it happen!”


Arun Kudale, MD, Kudale Calibration Laboratory (P), Ltd., Pune

WHAT ARE MEASUREMENT STANDARDS?

Line and end standards are referred to as 'measurement standards' in industries, and are used as references for calibration purposes. In the modern metrological era, digital instruments such as a periodically calibrated digital height gauge are commonly used. In India, light-wave standards (wavelength) are used for laboratory purposes only and are not commercially used. Owing to its cost, the LASER is restricted in use to alignment testing and assessment of movement of subassemblies only.

In general, there are four levels of standards used as references all over the world, viz., primary, secondary, tertiary and working standards. The primary standard is the one kept in Paris, and the secondary is the one kept with NPL India; the tertiary standard is the one we use in our industries as a reference for calibration purposes. Working standards are used on the shop floor. Hence it could be said that there is an unbroken chain for tracing the standards. Every country has a custodian who looks after secondary standards. The National Physical Laboratory (NPL) holds the secondary standard for India. My company holds tertiary standards and is accredited by the National Accreditation Board for Testing and Calibration Laboratories. The type of standards being calibrated will govern the use of primary/secondary standards as a reference; e.g., slip gauges are calibrated once in three years. Determination and confirmation of length and calibration must be made under specified conditions. The National Accreditation Board for Testing and Calibration Laboratories specifies that a calibration laboratory should be adequately free from vibrations generated by the central air-conditioning plant, vehicular traffic and other sources. In other words, there should be vibration-free operational conditions, illumination of 450 lux to 700 lux on the working table with a glare index of 19 for lab work, and a generally dust-free atmosphere; temperature should be controlled within 20 ± 1°C and humidity within 50 ± 10%. To avoid any such adverse effects on instruments, a calibration laboratory is required to be set underground.

In our opinion, quality should be built up at the design stage, which is an important key factor in designing a quality assurance system. As far as the role of calibration activities is concerned, they help industries (which use metrological instruments) to know about existing uncertainties in the instruments being used, as well as share information and knowledge of lab practices and maintenance of instruments, etc., for building a quality assurance system. We have helped many industries by guiding them in writing quality manuals, which is a part of building quality assurance systems in their plants.

2.1 INTRODUCTION

In ancient Egypt, around 3000 BC, the death penalty was inflicted on all those who forgot or
neglected their duty to calibrate the standard unit of length at each full-moon night. Such was the peril
courted by royal architects responsible for building the temples and pyramids of the Pharaohs. The first
royal cubit was defined as the length of the forearm (from the elbow to the tip of the extended middle
finger) of the ruling Pharaoh, plus the breadth of his hand.
The original measurement was transferred to and carved in black granite. The workers at the building
sites were given copies in granite or wood and it was the responsibility of the architects to maintain them.
Even though we have come a long way from this starting point, both in law-making and in time, people
have placed great emphasis on correct measurements ever since.
In 1528, the French physician J Fernel proposed the distance between Paris and Amiens as a general
length of reference. In 1661, the British architect Sir Christopher Wren suggested that the reference unit
should be the length of a pendulum with a period of half a second, and this too was referred to as a standard.
In 1799 in Paris, the Decimal Metric System was created by the deposition of two platinum stan-
dards representing the metre and the kilogram—the start of the present International System of Units
(SI system). These standards were made of materials (metals), and hence are referred to as
material standards.
The need for establishing standards of length arose primarily for determining agricultural land areas
and for erection of buildings and monuments.
A measurement standard, or etalon, is a material measure, measuring instrument, reference material
or measuring system intended to define, realize, conserve or reproduce a unit or one or more values
of a quantity to serve as a reference. Any system of measurement must be related to known standards
so as to be of commercial use. The dictionary meaning of standard is 'something that is set up and
established by authorities as a rule for the measurement of quantity, weight, value, quality, etc.'
Length is of fundamental importance as even angles are measured by a combination of linear
measurements. All measurements of length are fundamentally done in comparison with standards of
length. In the past, there have been large numbers of length standards, such as cubit, palm and the
digit. The Egyptian unit, known as cubit, was equal to the length of the forearm, from the elbow to the
tip of the middle finger of the ruling Pharaoh, plus the breadth of his hand. The cubit was of various
lengths ranging from 450 mm to 670 mm. Even in the 18th century, a map of Australia showed miles of
three different lengths. The first accurate standard was developed in England, known as the Imperial
Standard Yard, in 1855 and was followed by the International Prototype Metre made in France in 1872.
These developments are summarized in Table 2.1.

Table 2.1 Interesting facts about the development of measurement standards through the ages

Sl. No.  Year           Measurement Standard      Explanation

1.       3000 BC        Royal cubit               Length of the forearm from the elbow to the tip of the
                                                  extended middle finger of the ruling Pharaoh, plus the
                                                  breadth of his hand. This was equivalent to 1.5 feet, or
                                                  two hand spans, or six hand widths, or 24 finger
                                                  thicknesses, or 0.4633 metres.
2.       16th century   Feet                      The distance over the left feet of sixteen men lined up
                                                  after they left church on Sunday morning.
3.       18th century   Yard                      King Henry I declared that the yard was the distance
                                                  from the tip of his nose to the end of his thumb when
                                                  his arm was outstretched sideways. This standard was
                                                  legalized in 1853 and remained a legal standard
                                                  until 1960.
                        Metre                     The first metric standard was developed, which was
                                                  supposed to be one ten-millionth of a quadrant of the
                                                  earth's meridian passing through Paris.
4.       19th century   Upgradation of metre      In 1872, an international commission was set up in
                        standard                  Paris to decide on a more suitable metric standard,
                                                  and it was finally established in 1875.
                        Wavelength standard       From 1893 onwards, comparison of the above-
                                                  mentioned standard with wavelengths of light proved
                                                  a remarkably stable standard.

2.2 THE NEW ERA OF MATERIAL STANDARDS

To avoid confusion in the use of standards of length, an important step towards a definite
length standard, the metre (from the Greek word metron, meaning measure), was taken in 1790 in France. In the
nineteenth century, the rapid advancement made in engineering was due to improved materials available
and more accurate measuring instruments.

2.3 TYPES OF STANDARDS

After realizing the importance and advantage of the metric system, most of the countries in the
world have adopted the metre as the fundamental unit of linear measurement. In recent years, the
wavelength of monochromatic light, which never changes its characteristics under any environmental
condition, is used as the invariable fundamental unit of measurement instead of the previously
developed material standards such as the metre and the yard. A metre is defined as 1650763.73 wavelengths of the
orange radiation in vacuum of krypton-86. The yard is defined as 0.9144 metre, which is equivalent
to 1509458.35 wavelengths of the same radiations. Hence, three types of measurement standards are
discussed below.

i. Line standard
ii. End standard
iii. Wavelength standard

2.3.1 Line Standard


According to the line standard, which is legally authorized by an Act of Parliament, the yard or metre
is defined as the distance between inscribed lines on a bar of metal under certain conditions of tem-
perature and support.

a. The Imperial Standard Yard  This standard served its purpose from 1855 to 1960. It is
made of a bronze bar (82% copper, 13% tin, 5% zinc) of one-inch square cross section and 38 inches
long. The bar has ½-inch diameter × ½-inch deep holes, each fitted with a 1/10th-inch diameter
gold plug. The highly polished top surfaces of these plugs contain three transversely and two longitudi-
nally engraved lines and lie on the neutral axis of the bronze bar, as shown in Fig. 2.1.
The yard is defined as the distance between the two central transverse lines on the plugs when the
temperature of the bar is constant at 62°F and the bar is supported on rollers in a specified manner to
prevent flexure, the distance being taken at points midway between the two longitudinal lines. Secondary
standards were also made as copies of this yard for occasional comparison. To protect the gold plugs
from accidental damage, they are kept at the neutral axis, as the neutral axis
remains unaffected even if the bar bends.

b. International Standard Prototype Metre  The International Bureau of Weights
and Measures (BIPM: Bureau International des Poids et Mesures) established the metre as the linear
measuring standard in the year 1875. The metre is the distance between the centre portions of two
lines engraved on the polished surface of a bar (prototype) made of a platinum (90%) – iridium
(10%) alloy having a unique cross section (web), as shown in Fig. 2.2(a). The web section chosen
gives maximum rigidity and economy in the use of the costly material. The upper surface of the web is
inoxidizable and carries a good finish for ruling good-quality engraved lines. This bar is kept at
0°C and under normal atmospheric pressure.
The metric standard, when in use, is supported at two points by two rollers of at least one-cm diam-
eter, symmetrically situated in the horizontal plane, and 589 mm apart.
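The 589-mm roller spacing corresponds to the Airy points of the bar. A small sketch (not from the text) shows the arithmetic; it assumes an overall bar length of about 1020 mm, since the prototype bar is slightly longer than its 1000-mm graduated length, a detail not stated above:

```python
import math

# The Airy points of a uniform bar are the two support positions, spaced
# L / sqrt(3) apart, that keep the bar's end faces parallel under its own
# weight and minimise sag at the graduations.

def airy_spacing_mm(bar_length_mm):
    """Distance between the two Airy support points of a uniform bar."""
    return bar_length_mm / math.sqrt(3)

# Assumed overall length of the prototype bar: about 1020 mm.
print(round(airy_spacing_mm(1020.0)))   # -> 589
```

With the assumed 1020-mm overall length, the Airy spacing comes out at 589 mm, matching the quoted roller spacing.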

Fig. 2.1 Imperial standard yard (38″ bronze bar of 1″ square section; gold inserts of 0.1″ diameter,
engraved with terminal lines one yard apart at 62°F, set in the neutral axis; enlarged view of gold
insert shown)

Fig. 2.2(a) International standard prototype metre (1000-mm bar with 16 mm × 16 mm Tresca web
cross section; graduations on the neutral plane of the bar)

According to this standard, the length of one metre is defined as the straight-line distance, at 0°C, between
the centre portions of the two lines engraved on a pure platinum–iridium alloy bar of a total length of 1000 mm and having a web cross section.

Figure 2.2(b) (Plate 1) shows the actual International Standard Prototype Metre and the historical
standard platinum–iridium metre bar. The 1889 definition of the metre, based upon the international
prototype of platinum–iridium, was replaced by the 11th CGPM (Conférence Générale des Poids et
Mesures, 1960) using a definition based upon the wavelength of krypton-86 radiation. This definition
was adopted in order to improve the accuracy with which the metre may be realized. It was replaced
in 1983 by the 17th CGPM as per Resolution 1:
The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792
458 of a second.
The effect of this definition is to fix the speed of light at exactly 299 792 458 m·s–1. The original
international prototype of the metre, which was sanctioned by the 1st CGPM in 1889 (CR, 34–38),
is still kept at the BIPM under conditions specified in 1889. The metre is realized on the primary level
by the wavelength from an iodine-stabilized helium–neon laser. On sub-levels, material measures like
gauge blocks are used, and traceability is ensured by using optical interferometry to determine the
length of the gauge blocks with reference to the above-mentioned laser light wavelength. The accuracy of
measurement using a line standard is limited to about ±0.2 mm. For higher accuracy, a scale along with a
magnifying glass or a microscope may be used, which makes measurement quick and easy. Scale
markings are not subject to wear even after periodic use, but parallax error may be introduced while
measuring. Examples of line standards include the metre, the yard and the steel rule (scale).

2.3.2 End Standard


The need for end standards arose because line standards and their copies were difficult to use at vari-
ous places in workshops. End standards can be made to a high degree of accuracy by a simple method
devised by A J C Brookes in 1920. End standards are used for all practical measurements in workshops
and for general use in precision engineering in standards laboratories. They are in the form of end bars
and slip gauges. In the case of vernier calipers and micrometers, the job is held between the jaws/anvils of
the measuring instrument and the corresponding reading is noted, while length bars and slip gauges
are used to set the required length to be used as a reference dimension.

a. End Bar  End bars made of steel, having a cylindrical cross section of 22.2-mm diameter with
the end faces hardened and lapped, are available in sets of various lengths. Parallelism of the
ends is within a few tenths of a micrometre. Reference- and calibration-grade end bars have plane
end faces, but the bars of inspection- and workshop-grade sets can be joined together by studs
screwed into tapped holes in their ends. Although various types of end bars have been constructed
from time to time, some having flat and some spherical faces, flat and parallel-faced end bars
are firmly established as the most practical end standard used for measurement. To retain their
accuracy when used in a horizontal plane, it is essential to support them so as to keep the
end faces parallel.

End bars are made from high-carbon chromium steel, ensuring that the faces are hardened to 64 HRC
(800 HV). The bars have a round section of 30 mm for greater stability. Both ends are threaded,
recessed and precision lapped to meet requirements of finish, flatness, parallelism and gauge length.
These are available up to 500 mm, in grades 0, 1 and 2, in an 8-piece set. Length bars can be combined by
using an M6 stud. End bars are usually provided in sets of 9 to 12 pieces, in step sizes of 25 mm, up to
a length of 1 m. (See Fig. 2.3, Plate 1.)

b. Slip Gauges  Slip gauges are practical end standards and can be used in linear measurement
in many ways. They were invented by the Swedish engineer C E Johansson. Slip gauges are rectangular
blocks of hardened and stabilized high-grade cast steel or the ceramic compound zirconium oxide
(ZrO2), having thermal expansion coefficients of 11.5 × 10−6 K−1 and 9.5 × 10−6 K−1 respectively, and are
available with a cross section about 9 mm wide and 30 to 35 mm long. The length of a slip gauge is strictly the
dimension which it measures—in some slip gauges it is the shortest dimension and in the larger slip
gauges, the longest. The blocks, after being manufactured to the required size, are hardened to resist
wear and are allowed to stabilize to release internal stresses, which prevents subsequent
variations in size and shape. (See Fig. 2.4, Plate 1.)
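The expansion coefficients quoted above make it easy to estimate how much a gauge block grows when the lab drifts off the 20°C reference temperature, using ΔL = L·α·ΔT. A minimal sketch (the 100-mm length and 5-K offset are illustrative choices, not from the text):

```python
# Quick check of a gauge block's length change away from the reference
# temperature, using dL = L * alpha * dT with the coefficients quoted above.

ALPHA_PER_K = {"steel": 11.5e-6, "zirconia": 9.5e-6}  # thermal expansion, 1/K

def expansion_um(length_mm, material, delta_t_k):
    """Length change in micrometres for a temperature offset delta_t_k."""
    return length_mm * ALPHA_PER_K[material] * delta_t_k * 1000.0  # mm -> um

# Illustrative values: a 100-mm gauge, 5 K above the reference temperature.
for material in ("steel", "zirconia"):
    print(material, round(expansion_um(100.0, material, 5.0), 2), "um")
```

For a 100-mm gauge 5 K off reference, steel grows by 5.75 µm and zirconia by 4.75 µm, which is why the ±1°C lab tolerance mentioned earlier matters for micrometre-level work.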
Slip gauges are also made from a select grade of carbide with a hardness of 1500 Vickers; these are
checked for flatness and parallelism at every stage and calibrated in an NABL-accredited laboratory.
Slip gauges are available in five grades of accuracy, as discussed in Table 2.2.
Slip gauge sets are made according to the following standards:
IS: 2984-1981; BS 4311: 1968 (metric); BS 888: 1950 (imperial); DIN 861-1988; JIS B 7506-1978.
According to accuracy, slip gauges are classified as shown in Table 2.3.

After hardening, the blocks are carefully finished on the measuring faces to the required fine degree
of surface finish, flatness and accuracy. The standard distance is maintained by the mirror-like sur-
face finish obtained by lapping, a superfinishing process. IS: 2984–1966 specifies three grades of slip
gauges:

Table 2.2 Accuracy grades of slip gauges (available in both metric and imperial systems)

Grade      Use

Grade 00   Reference – Kept in a standard room and used for work of the highest precision only
Grade 0    Inspection – Setting comparators
Grade K    Calibration – For measuring other grades by comparison
Grade 1    Inspection – Used in the tool room, e.g., setting up sine bars, checking of slip gauges
Grade 2    Workshop – Used in workshops for general use, e.g., for setting up machine tools

Table 2.3 Types of slip gauges

Type Accuracy Accuracy of Flatness and Parallelism


AA – Master Slip Gauges ± 2 microns/m 75 microns
A – Reference Gauges ± 4 microns/m 125 microns
B – Working Gauges ± 8 microns/m 250 microns

Grade 0 – used in laboratories and standard rooms for checking subsequent grade gauges
Grade I – having lower accuracy than Grade 0; used in the inspection department
Grade II – used in the workshop during actual production of components
The measuring faces of slip gauges are forced and wrung against each other so that the gauges stick
together. This is known as wringing of slip gauges, as shown in Fig. 2.5. Once wrung, considerable force
is required to separate the slip gauges; the effect is caused partly by molecular attraction and partly by
atmospheric pressure. To wring two slip gauges together, they are first cleaned and placed together at
right angles, and then rotated through 90° while being pressed together.
According to IS: 2984–1966, the size of a slip gauge is the distance L between the plane mea-
suring faces, one face being wrung to the surface of an auxiliary body and the other face
exposed. Slip gauges are supplied as a set, comprising rectangular steel blocks of different
dimensions with opposite faces flat and parallel to a high degree of accuracy.

Fig. 2.5 Wringing of slip gauges: (a) Parallel wringing of slip gauges (sliding) (b) Cross wringing of
slip gauges (twisting) (c) Wringing complete (stacked slip gauges)

Table 2.4 Sets of slip gauges (number = 122 and 121)

Blocks Steps Number Blocks Step Number

1.0005 1 1.001–1.009 0.001 9

1.001–1.009 0.001 9 1.01–1.49 0.01 49

1.01–1.49 0.01 49 1.6–1.9 0.1 4

1.6–1.9 0.1 4 0.5,1–24.5 0.5 49

0.5,1–24.5 0.5 49 25,30,40–100 10

25,30,40–100 10

Total 122 Total 121

Table 2.5 Sets of slip gauges (number = 112 and 103)

Blocks Steps Number Blocks Steps Number

1.0005 1 1.005 1

1.001–1.009 0.001 9 1.01–1.49 0.01 49

1.01–1.49 0.01 49 0.5–24.5 0.5 49

0.5–24.5 0.5 49 25–100 25 4

25–100 25 4

Total 112 Total 103

Table 2.6 Sets of slip gauges (number = 88 and 46)

Blocks Steps Number Blocks Steps Number

1.0005 1 1.001–1.009 0.001 9

1.001–1.009 0.001 9 1.01–1.09 0.01 9

1.01–1.49 0.01 49 1.10–1.90 0.1 9

0.5–9.5 0.5 19 1–9 1 9

10–100 10 10 10–100 10 10

Total 88 Total 46

Table 2.7 Sets of slip gauges (number = 81 and 41)

Blocks Steps Number Blocks Steps Number

0.1001–0.1009 0.0001 9 0.1001–0.1009 0.0001 9

0.101–0.149 0.001 49 0.101–0.109 0.001 9

0.050–0.950 0.05 19 0.110–0.190 0.01 9

1–4 1 4 0.050 1

0.100–0.900 0.1 9

1–4 1 4

Total 81 Total 41

Also available are combinations of M47/1, M32/1, M18/1, M9/1 and 1-mm wear protectors.


The individual gauge blocks required to build up a length of 6.905 mm from the M88/1 set would
be as follows:
1st gauge: 1.005 mm; 2nd gauge: 1.40 mm; 3rd gauge: 4.50 mm; total: 6.905 mm
Note that the 6.905-mm length could be achieved by using more than three gauge blocks. However, it is
important that a minimum number of gauge blocks per combination size should be used.
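The selection above can be checked mechanically. The sketch below (not from the book) builds the 88-piece set of Table 2.6 and searches for the fewest blocks summing exactly to a target size; for 6.905 mm it reproduces the three-block combination 1.005 + 1.40 + 4.50:

```python
from decimal import Decimal

# Exact decimal arithmetic avoids the rounding errors binary floats would
# introduce at the 0.0005-mm level.

def series(start, inc, n):
    """n block sizes: start, start+inc, ... as exact decimals."""
    return [Decimal(start) + Decimal(inc) * i for i in range(n)]

def m88_set():
    """The 88-piece set of Table 2.6 (left-hand columns)."""
    return ([Decimal("1.0005")]
            + series("1.001", "0.001", 9)   # 1.001 ... 1.009
            + series("1.01", "0.01", 49)    # 1.01  ... 1.49
            + series("0.5", "0.5", 19)      # 0.5   ... 9.5
            + series("10", "10", 10))       # 10    ... 100

def build(target, blocks, max_blocks=5):
    """Depth-first search for the smallest combination summing to target."""
    target = Decimal(target)
    blocks = sorted(blocks, reverse=True)

    def dfs(remaining, start, depth, chosen):
        if remaining == 0:
            return list(chosen)
        if depth == 0:
            return None
        for i in range(start, len(blocks)):
            if blocks[i] > remaining:
                continue
            chosen.append(blocks[i])
            found = dfs(remaining - blocks[i], i + 1, depth - 1, chosen)
            if found is not None:
                return found
            chosen.pop()
        return None

    # Try 1 block, then 2, ... so the first hit uses the fewest blocks.
    for n in range(1, max_blocks + 1):
        combo = dfs(target, 0, n, [])
        if combo is not None:
            return combo
    return None

print(build("6.905", m88_set()))   # three blocks: 4.5, 1.40 and 1.005
```

Because the search tries one block, then two, and so on, the first combination found automatically satisfies the minimum-blocks rule stated above.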

2.3.3 Wavelength Standards


Line and end standards are physical standards, made up of materials that can change their size
with temperature and other environmental conditions; correct laboratory conditions must therefore be
maintained so that the length standard remains unchanged. High-sensitivity length measurements are
very important, as they are widely used in science, technology and industry, and after time and
frequency measurements they are among the most accurate measurements made. In the search for a suit-
able unit of length, the length standard is realized from improved primary-level wavelength sources, which
are used for wavelength comparisons and gauge-block measurements. Under the new defini-
tion of the 'metre', the primary-level wavelength standard can be a laser standard, which has its frequency
compared with the Cs time and frequency standard. High frequency accuracy, high frequency stability and
high reproducibility allow high-accuracy interferometric length measurements.
BIPM (Bureau International des Poids et Mesures) made the first verification of the national proto-
types by intercomparisons among the available standards, along with comparisons with the international
prototype. This included new and improved determinations of the thermal expansion of metre bars.
An international accord, using the 1893 and 1906 determinations of the wavelength of the red line
of cadmium, defined the ångström, which was used as the spectroscopic unit of length until it was
abandoned in 1960. The CIPM decided to investigate the possibility of redefining the metre in terms of
a wavelength of light, and established the Comité Consultatif pour la Définition du Mètre (The Consultative
Committee for Length) for this purpose.
The CGPM (Conférence Générale des Poids et Mesures) adopted a definition of the metre in terms of the
wavelength in vacuum of the radiation corresponding to a transition between specified energy levels of
the krypton-86 atom. At the BIPM, measurement of line scales in terms of this wavelength replaced com-
parisons of line scales among themselves, and new equipment was installed for doing
this by optical interferometry. In 1960, the orange radiation of the isotope krypton-86, produced in a hot-cathode
discharge lamp maintained at a temperature of 63 K, was selected to define the metre. The metre was then
defined as equal to 1650763.73 wavelengths in vacuum of the orange radiation of the krypton-86 isotope:
1 metre = 1650763.73 wavelengths, and
1 yard = 0.9144 metre
= 0.9144 × 1650763.73 wavelengths
= 1509458.3 wavelengths
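The arithmetic above is easy to verify, and inverting the definition also recovers the krypton-86 wavelength itself (about 606 nm). A quick check:

```python
# Checking the krypton-86 definition: 1 metre = 1 650 763.73 wavelengths.

KR86_WAVELENGTHS_PER_METRE = 1650763.73
YARD_IN_METRES = 0.9144  # exact, by definition of the yard

# A yard expressed in wavelengths of the same radiation:
print(round(YARD_IN_METRES * KR86_WAVELENGTHS_PER_METRE, 2))  # -> 1509458.35

# The wavelength of the orange krypton-86 line, in nanometres:
print(round(1e9 / KR86_WAVELENGTHS_PER_METRE, 2))             # -> 605.78
```

The yard figure agrees with the 1509458.3 wavelengths quoted above to the precision given.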
The CGPM recommended a value for the speed of light in vacuum as a result of measurements of the
wavelength and frequency of laser radiation in 1975. The CGPM redefined the metre as the length of the
path travelled by light in vacuum during a specific fraction of a second. It invited the CIPM to draw up
instructions for the practical realization of the new definition. The CIPM outlined general ways in which
lengths could be directly related to the newly defined metre. These included the wavelengths of five rec-
ommended laser radiations as well as those of spectral lamps. The wavelengths, frequencies and associ-
ated uncertainties were specified in the instructions for the practical realization of the definition. At the
BIPM, from 1983, comparison of laser frequencies by beat-frequency techniques supplemented the measurement of
line scales in terms of the wavelengths of the same lasers. The metre is now defined as the length of the path
travelled by light in vacuum during a time interval of 1/299 792 458 of a second. In order to check the
accuracy of practical realizations of the metre based upon the new definition, a new round of international
comparisons of laser wavelengths by optical interferometry and frequency by beat-frequency techniques
was begun at the BIPM. These international comparisons comprised comparisons of individual compo-
nents of the laser, the absorption cells containing the atoms or molecules upon which the laser is stabilized
in particular, as well as comparisons of whole laser systems (optics, gas cells and electronics).
In the early days of stabilized laser systems, it was almost always necessary for lasers to be brought
to the BIPM for measurements to be made. This was not always convenient, so the BIPM developed
small, highly stable and accurate laser systems. As a result, the reference values maintained by the BIPM
could be realized away from the BIPM itself. In these 'remote' comparisons, it became relatively
easy for a number of ‘regional’ laboratories to bring their lasers for a joint comparison.
From the early inception of stabilized lasers, the BIPM offered member states of the Metre Conven-
tion the opportunity to compare their laser standards against reference systems. This service was based
on heterodyne beat-frequency measurements, largely concentrated on two types of stabilized lasers:

i. Iodine-stabilized He–Ne systems operating at wavelengths of 515 nm, 532 nm, 543 nm, 612 nm,
or (most commonly) 633 nm
ii. A methane-stabilized He–Ne laser operating at 3.39 µm

For the standard at 633-nm wavelength, three He–Ne/I2 laser set-ups have been built such that
their frequencies are locked to a transition of the I2 molecule. The I2 cells, which are placed in the
He–Ne laser resonators, provide the interaction between the He–Ne laser beam and the I2 molecules.
Absorption signals are detected by tuning the laser frequency around the energy transition of the I2 mol-
ecules. By using an electronic servo system, these absorption signals are used to lock
the laser frequency to the energy transition of the I2 molecules with a stability of 1 × 10−13 over an averaging
time of 1000 s. In addition to its substantial programme related to He–Ne stabilized lasers,
the BIPM also carried out a small research programme on the performance and metrological qualities
of the frequency-doubled Nd-YAG laser at 532 nm. This relatively high-power system turned out to
have excellent short-term stability and is often used in a number of applications. The BIPM's com-
parison programme therefore included Nd-YAG systems, by heterodyne and, more recently, by absolute
frequency measurements.
For the standard at 532-nm wavelength, two Nd-YAG lasers are frequency-locked to energy transi-
tions of I2 molecules. In establishing these standards, lasers with a wavelength of
532 nm and an output power of 50 mW are used; in the locking process, the I2 cells are used outside
the resonator. The frequency of each of the two lasers is tuned to an energy transition
of the I2 molecules, and fluorescence signals are observed as the result of the inter-
action between the laser and the molecules in the cells. The frequencies of the two Nd-YAG lasers are
swept across the absorption spectrum of the I2 molecules by a servo system, and the
third derivative of the resonance absorption signal is obtained from the interaction of the iodine molecules
with the laser beam. The CIPM-recommended value for He–Ne/I2 lasers using beat-frequency methods
is 473 612 353 604 ± 10.0 kHz. An international comparison of the portable optical frequency
standard He–Ne/CH4 (λ = 3.39 μm) with PTB was carried out in Braunschweig between
15 and 30 December 2000; the absolute frequency was measured as 88 376 181 000 253 ± 23 Hz.
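These recommended frequencies can be cross-checked against the quoted wavelengths via λ = c/f, since the 1983 definition fixes the speed of light exactly. A quick sketch (not from the text):

```python
# Cross-checking the recommended laser frequencies against their quoted
# wavelengths via lambda = c / f.

C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def wavelength_nm(freq_hz):
    """Vacuum wavelength in nanometres for a given optical frequency."""
    return C / freq_hz * 1e9

# He-Ne/I2 standard, 473 612 353 604 kHz -> about 633 nm:
print(round(wavelength_nm(473_612_353_604e3), 1))
# He-Ne/CH4 standard, 88 376 181 000 253 Hz -> about 3392 nm (3.39 um):
print(round(wavelength_nm(88_376_181_000_253.0), 1))
```

Both results land on the nominal wavelengths quoted in the text (633 nm and 3.39 µm), which is exactly the consistency the beat-frequency comparisons are meant to guarantee.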
The 3.39-µm laser programme dealt with a well-characterized system that was a critical element in
the frequency chains used in the earlier measurements of the speed of light; such lasers also have applica-
tions in infrared spectroscopy. The BIPM has therefore maintained a high-performance system and
participated in a number of comparisons with several NMIs. A similar facility was provided for 778-nm
Rb-stabilized systems, which were of interest to the telecommunications industry. Both programmes
are now drawing to a close in the light of the frequency-comb technique. With the introduction of
comb techniques allowing direct measurement of optical laser frequencies, the activity of
heterodyne frequency comparisons between laser standards has been reduced. Being nonphysical,
wavelength standards are least affected by environmental conditions and remain practically
unchanged, making it convenient to reproduce them with a great degree of accuracy.

2.4 SUBDIVISION OF STANDARDS

The International Prototype Metre cannot be used for every general-purpose application. The original
international prototype of the metre, sanctioned by the first CGPM in 1889, is still kept by the
BIPM under the conditions specified in 1889. Therefore, a practical hierarchy of working standards
has been created, depending upon the accuracy required.

Material standards are divided into four basic types:

i. Primary standards
ii. Secondary standards
iii. Tertiary standards
iv. Working standards

1. Primary Standards  To define a unit most precisely, there is only one material standard,
which is preserved under very specifically created conditions. Such a material standard is
known as a primary standard. The International Metre is an example of a primary standard. It should
be used only for comparison with secondary standards and cannot be used for direct application.

2. Secondary Standards  Secondary standards should be exactly like the primary standards
in all aspects, including design, material and length. Initially, they are compared with primary standards
after long intervals and the records of deviation are noted. These standards are kept at a number
of places in custody for occasional comparison with tertiary standards.

3. Tertiary Standards  The primary and secondary standards are applicable only as ultimate
controls. Tertiary standards are used for reference purposes in laboratories and workshops, and are
in turn used for comparison with working standards at intervals.

4. Working Standards Working standards developed for laboratories and workshops are derived
from fundamental standards. Standards are also classified as

i. Reference standards
ii. Calibration standards

2.5 CALIBRATION

The ultimate goal of manufacturing industries is to provide a quality product to customers.


Initially, the main thrust of any business is to offer potential customers value for their money.
Coupled with this is the immediate need to garner customers, which is critical to the success of the
enterprise. Keeping customers requires that the product meets appropriate quality levels, which, in turn,
requires calibration knowledge. The product must not only fulfil the requirements of the user
but also have the specified dimensions. Measurement of dimensions can't be perfect and reliable unless
and until measuring instruments are calibrated accurately. Thus, calibration plays a vital role in main-
taining quality control. Calibration of measuring instruments is not only an advantage to any company
but it is a necessity for every manufacturing industry.
The advantages of calibration are accuracy in performing manufacturing operations, reduced inspec-
tion, and ensured quality products by reducing errors in measurement.

2.5.1 Defining Calibration


Calibration is a comparison of instrument performance to standards of known accuracy; calibrations
directly link customers’ measurement equipment to national and international standards.
According to the ISO, calibration is the quantitative determination of errors of measuring instru-
ments and adjusting them to a minimum. In other words, calibration means to find out whether the
instrument gives the correct reading or not. It also includes minor adjustments in the instrument to
minimize error.
As measurement standards are maintained and referred to at different levels, calibration must be
carried out against the appropriate standard at each level. This creates a need for setting up
calibration labs at different levels, which are explained as follows.

a. In-house Calibration Lab These labs are set up within a company itself for calibration of
in-house instruments.

b. Professional Calibration Labs These are set up by professionals whose main business is
calibration of measuring instruments and who use all dedicated and sophisticated calibrating instru-
ments, e.g., Kudale Calibration Lab in Pune, India.

c. NABL Certification to Professional Labs The National Accreditation Board for Testing


and Calibration Laboratories (NABL) accredits only those laboratories whose instruments and
procedures meet NABL norms. In-house calibration labs need not have this certificate.

2.5.2 Status of Calibration


1. Active This status is given to an instrument if it gives an exact reading or the error shown in the
reading is within the tolerable limit.

2. Calibrate Before Use [CBU] If a company purchases a stock of instruments and only


a few are in current use while the rest are kept in store, the stored instruments are given the status
CBU, as they must be calibrated before they are put to use later.

3. Only For Indication [OFI] Instruments with this status can’t be used for any measure-
ment purpose, but can be used as non-measuring devices, e.g., a height gauge with OFI status can be
used as a stand.

4. Rework This status indicates that the instrument should be reworked before use to get a correct
reading, e.g., surface plate, base plate, etc.

5. Reject This status is provided to indicate that the error in the reading shown by the measuring
instrument is not within the allowable limits.

6. Write-Off This status is given to that instrument which is to be directly scrapped.


Note: Rejected instruments can be used after repair, but instruments with a write-off status cannot
be used for measurement in future.
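The status decision above can be written as a small rule. A hedged sketch (the reworkable flag is an assumption used for illustration; CBU, OFI and Write-Off depend on usage context rather than on the measured error, so they are not derived here):

```python
def calibration_status(error_mm, tolerance_mm, reworkable=False):
    """Map a calibration result to a status label.

    Active -- error within the tolerable limit
    Rework -- out of tolerance but correctable (e.g., a surface plate)
    Reject -- out of tolerance and not correctable
    """
    if abs(error_mm) <= tolerance_mm:
        return "Active"
    return "Rework" if reworkable else "Reject"

print(calibration_status(0.003, 0.005))                   # Active
print(calibration_status(0.008, 0.005, reworkable=True))  # Rework
print(calibration_status(-0.008, 0.005))                  # Reject
```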

2.5.3 Standard Procedure for Calibration


1. Cleaning of Instruments Every instrument should be first cleaned thoroughly.

2. Determination of Error The next step is to determine the errors in the instrument by
various methods.

3. Check for Tolerable Limits After determination of error, the error is to be compared with
the allowable tolerance.

4. Minor Changes These are made in the instrument, if possible, to minimize the error in the
reading indicated by the instrument.

5. Allotment of Calibration Status Each instrument is allotted a status (Section 2.5.2) as per its condition.

6. Next Calibration Date The instruments that are allotted an active status are also given the
next calibration date as per standards.
The calibration interval normally allotted to a measuring instrument, based on standard guidelines, is
given in Table 2.8.

Table 2.9 shows the type of instruments generally calibrated to maintain their accuracy over a longer
period of time.

2.5.4 List of Equipment Used for Calibration of Measuring Instruments


Reference Gauge Standards Slip gauges, plain and threaded ring gauges, plain and threaded
plug gauges, pin gauges, etc.

Devices with Variable Measurement Standards Comparators, exterior micrometers, bore


meters, callipers, depth gauges, etc., on-site surface plates, measurement columns, three-dimensional mea-
suring machines, profile projectors, horizontal measurement bench, slip-gauge controller, comparator
calibration bench, laser-sweep beam micrometer, circularity, straightness, and length-gauge standards,
contour-measuring equipment, single-dimensional measurement column, management software for
measurement instrument stocks, high pressure gauge, OD caliper, dial, beam balance or digital scale,
height-setting master, analytical type balance, ID caliper, dial, digital or vernier, ID micrometer, internal
limit gauge, go/no-go type, OD micrometer, depth micrometer, force gauge, ID micrometer, tri-point
type, torque meter, bench micrometer, gauge block, radius gauge, bevel protractor, thickness gauge,

Table 2.8 Calibration intervals of different instruments

Name of the Instrument                    Acceptable Tolerance      Calibration Interval (Months)

Vernier caliper and height gauge          ±0.005 mm                 12
Micrometer                                2 μm                      12
Pin gauge                                 ±0.006 mm                 12
Slip gauge                                ±0.02 μm                  36
Setting ring, by setting diameter         Tolerance in μm:          36
  up to 3 mm                              4
  3–6 mm                                  4.2
  6–10 mm                                 4.2
  10–18 mm                                4.5
  18–30 mm                                5
  30–80 mm                                5.5
Dial gauge                                0.003 mm                  12
Digital dial gauge, by range              Tolerance in μm:          36
  0–1 mm                                  1
  0–10 mm                                 2
  0–60 mm                                 3
Radius master                             5%                        24
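The intervals of Table 2.8 translate directly into a next-calibration-date rule. A minimal sketch using a few of the table's rows (the dictionary keys are illustrative names):

```python
from datetime import date

# Calibration intervals in months, from Table 2.8.
INTERVAL_MONTHS = {
    "vernier caliper": 12,
    "micrometer": 12,
    "slip gauge": 36,
    "dial gauge": 12,
    "radius master": 24,
}

def next_calibration_date(instrument, calibrated_on):
    """Add the guideline interval (in whole months) to the calibration date."""
    months = INTERVAL_MONTHS[instrument]
    year = calibrated_on.year + (calibrated_on.month - 1 + months) // 12
    month = (calibrated_on.month - 1 + months) % 12 + 1
    return date(year, month, calibrated_on.day)

print(next_calibration_date("slip gauge", date(2024, 1, 15)))  # 2027-01-15
```

The day-of-month arithmetic assumes the date exists in the target month; production code would also handle cases such as 31 January plus one month.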

Table 2.9 Types of instruments


Electrical Typical Equipment Calibrated
ACOUSTICS Sound Level Meter, Pistonphone and Octave Filter
VIBRATION Accelerometer, Vibration Meter, Geophone 3-axis,
Calibrator exciter Vibration Analyser, Vibration
Exciter/Shaker, Portable Shaker System Vibration
Machine (on-site calibration)
IMPEDANCE Standard Capacitor, Standard Air Capacitor, Phase
Meter, Standard Inductor, Capacitance Meter, Ratio
Transformer
VT/CT Current Transformer, Turn Ratio Meter, Current
Transformer, Standard Current Transformer, Poten-
tial Transformer
POWER ENERGY Test Device Digital Power Meter, Energy Meter,
Kilowatt Meter, Standard Meter, Standard Watt Con-
verter, Three Phase Measuring Assembly, Polyphase
Watt Meter, Energy Analyser, Clip-on Power Meter

AC/DC Calibrator, Amplifier, Multimeter, DC Reference
Standard, DC Power Supply, Voltage Standard, Oscil-
loscope Calibrator, DC Resistor, Current Shunt,
True RMS Voltmeter, LVDT, DC Voltage Standard
MAGNETIC Gauss Meter, Magnets
RF AND MICROWAVE Power Sensor/Meter, Step Attenuator, Standard
Signal Generator, Automatic, Modulation Meter,
Synthesized Sweeper, Vector Voltmeter, Doppler
Radar Gun, Signal Generator, Sp.
TIME AND FREQUENCY Rubidium Frequency Standard, Quartz Oscillator,
Frequency Counter, Universal Counter, Microwave
Counter, Stop Watch
MECHANICAL
DIMENSIONAL Gauge Block, Long Gauge, Ring Gauge, Pin Gauge,
Optical Flat Meter Comparator Optical Parallel,
Glass Scale, Straight Edge Angle Gauge, Digital Cali-
per, Micro-Indicator, Height Gauge.
PRESSURE Deadweight Pressure Balance/Gauge/Piston
Gauge/Tester, Pressure Calibrator, Digital Test
Gauge, Digital Manometer, Digital Barometer, Reso-
nant Sensor Barometer, Digital Pressure Indicator,
High Pressure Gauge, Micromanometer.
FORCE Load Cell, Proving Ring Dynamometer, Calibration
Box, Calibration Loop, Load Column, Hydraulic
Jack, Force Gauge Tension Gauge, Force Transducer
THERMOPHYSICAL
RESISTANCE THERMOMETRY Standard Platinum Resistance Thermometer, Digital
Thermometer Liquid in Glass Thermometer
HYGROMETER Data Logger, Hygrometer
DENSITY VISCOSITY Viscometer, Hydrometer
THERMOCOUPLE/THERMOMETRY Thermocouple Probe/Wire
FLOW
CAPACITY Beaker, Measuring Cylinder, Prover Tank
FLOW Pipe Prover, Gas Meter, Anemometer, Flow Meter
CHEMISTRY Gas Analyser, Breath Analyser, Gas Detector

toolmaker’s square, angle gauge, ring gauge, optical projector, comparator, snap gauge, toolmaker’s
microscope, test indicator, optical flat, dial indicator, surface plate slot and groove gauge, screw pitch
gauge, tapered hole gauge.

2.5.5 Case Study 1: Dial Calibration Tester


Kudale Calibration Laboratory Pvt. Ltd., Pune, India, (NABL Certified Calibration Laboratory)

a. Introduction The manufacturing tolerances in almost all the industries are becoming stringent
due to increased awareness of quality. This also calls for high accuracy components in precision assem-
blies and subassemblies. The quality control department therefore is loaded with the periodic calibration

Fig. 2.6 Dial calibration tester



of various measuring instruments. Since the accuracy of the components depends largely on the accu-
racy of measuring instruments like plunger-type dial gauges, back-plunger-type dial gauges, lever-
type dial gauges and bore gauges, periodic calibration is inevitable and is a regular feature in many
companies of repute. The practice of periodic calibration is of vital importance for quality assurance
as well as cost reduction. The dial calibration tester set enables the testing of four different kinds of
precision-measuring instruments, and all the required accessories are included in the set. The habit of
periodic calibration has to be cultivated right from the stage of technical education, viz., engineering
colleges, polytechnics and other institutes.
Why is periodic calibration required?

i. To grade a dial according to its accuracy and thereby to choose the application where it can be
safely used
ii. To determine the worn-out zone of travel facilitating full utilization of dials
iii. To inspect the dial after repairs and maintenance
iv. To ascertain the exact point at which the dial should be discarded

b. Scope This procedure is to cover the dial calibration tester for the following range.
Range = 0–25 mm and LC = 0.001 mm

c. Calibration Equipment
Electronic Probe – Maximum Acceptable Error = 3.0 μm
Slip Gauges = 0 Grade

d. Calibration Method
i. Clean the measuring faces of the dial calibration tester with the help of CTC (carbon tetrachloride).
ii. Place the micrometer drum assembly and dial holder on the stem, one above the other.
iii. Hold the electronic probe in the dial holder of the dial calibration tester.
iv. Set the zero of the electronic probe by rotating the drum in the upward direction.
v. Adjust the cursor line at the zero on the drum.
vi. With these settings, the micrometer drum should be at the 25-mm reading on the main scale. The
micrometer drum is at the topmost position after this setting.
vii. After the above setting in Step 6, rotate the micrometer drum to the downward direction till it
reaches zero on the main scale. The micrometer drum is at the lowermost position at this point.
viii. Set the main scale zero and the zero on the micrometer drum across the cursor line.
ix. Place the 25-mm slip gauge between the micrometer head tip and the contact point of the elec-
tronic probe.
x. Take the readings in the upward direction from 0.5 mm to 25 mm in a step size of 0.5 mm.
xi. Calculate the uncertainty as per NABL guideline 141.
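Step (x) defines the upward measurement run. A sketch of how the nominal test points could be generated and paired with probe readings (probe_reading in the comment is a hypothetical data source, not part of the tester):

```python
def test_points(start=0.5, stop=25.0, step=0.5):
    """Nominal drum settings for the upward run: 0.5 mm to 25 mm in 0.5-mm steps."""
    n = int(round((stop - start) / step)) + 1
    return [round(start + i * step, 3) for i in range(n)]

points = test_points()
print(len(points), points[0], points[-1])  # 50 points, from 0.5 to 25.0

# Error at each point = probe reading minus nominal setting:
# errors = [probe_reading(p) - p for p in points]   # probe_reading: hypothetical
```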

e. Uncertainty Calculation The Type-B component can be calculated as per the following
guidelines:

Fig. 2.7 Dial calibration tester

i. Difference in temperature of master and unit under calibration


ii. Difference in thermal expansion coefficient of master and unit under calibration
iii. Due to least count of instrument (Take 50%)
iv. Uncertainty measurement of master from calibration certificate
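In the GUM/NABL treatment, each Type-B component above is first converted to a standard uncertainty and the results are combined in quadrature (root sum of squares). The sketch below assumes rectangular distributions (divisor √3) for the limit-type components; all numerical values are illustrative, not taken from the case study:

```python
import math

def type_b_component(limit, divisor=math.sqrt(3)):
    """Standard uncertainty of one component, assuming a rectangular distribution."""
    return limit / divisor

components_um = [
    type_b_component(0.10),  # (i)   temperature difference, master vs unit
    type_b_component(0.05),  # (ii)  thermal-expansion-coefficient difference
    type_b_component(0.50),  # (iii) half (50%) of a 1-um least count
    0.25,                    # (iv)  master's uncertainty, from its certificate
]

# Combined standard uncertainty (root sum of squares):
u_combined = math.sqrt(sum(c * c for c in components_um))
print(round(u_combined, 3))  # in micrometres
```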

2.5.6 Case Study 2: Calibration of Gauge Block Measuring Unit


Mahr Gmbh Esslingen, Germany
(Accredited by the Physikalisch-Technische Bundesanstalt PTB for linear measurement)

a. Mechanical Construction The base of the


unit consists of a rigid cast-iron stand with a vertical
guide. The guide column features an adjustment spindle
with a handwheel, which enables the vertical slide con-
taining the upper probe to be roughly positioned. Fine
adjustment is carried out by adjusting a rigidly connected
spring parallelogram system, which is integrated into the
support arm. The foot of the base holds the manipula-
tor for correctly positioning the gauge blocks. Smooth
movement is ensured by high-precision ball bush guides.
The positioning mechanism is operated by an adjustment
knob located on the right side of the base. Depending on
the gauge blocks to be tested, various mounting devices
can be fixed to the manipulator, as shown in Fig. 2.8 (Calibration of gauge block). The
mounting device is provided with a locator allowing the
gauge blocks to be easily positioned in the five specified

measuring positions. The measuring table is supported by wear-resistant hardened guide bars. The
inductive probes are vacuum lifted. Accessories for calibration are shown in Fig. 2.10.

b. Measuring Process The gauge block to be tested and the reference gauge block are placed
one behind the other into the mounting device. Due to the round hardened guide bars, the gauge
blocks can be moved with low friction. Measurement is carried out with two inductive probes (sum
measurement). One measuring point on the reference gauge block and five measuring points on the
test piece are collected. Whenever the gauge blocks are moved, the inductive probes are lifted by
means of an electrical vacuum pump. The measuring values are calculated and displayed by the com-
pact Millitron 1240 instrument. Via a serial interface, the measuring values can be transferred to a PC
or laptop.

c. Computer-Aided Calibration For computer-aided evaluation of the measuring values,


either a PC or a laptop/notebook can be used. The laptop is preferred because of its lower heat
radiation.
The following options are available:
• Measurement of single gauge blocks
• Measurement of complete gauge block sets
• Simultaneous calibration of several equal gauge block sets

d. Application The EMP 4W software system realizes the computer-aided evaluation as per
DIN EN ISO 3650.
It offers the following options:
• Selection and determination of measuring sequences
• Management of test piece and standard gauge blocks
• Management of individual gauge blocks
• Measuring program to perform gauge-block tests
• Control of all operations and inputs
• Automatically assigning the sequence of nominal dimensions for set tests
• Organization of the measurement process for testing multiple sets
• Printer program for test records and for the printout of standard gauge-block sets
• Printout of DKD records

e. Gauge Calibration Software Gauge calibration with Mahr and QMSOFT


Gauge calibration is an important topic for a company’s qualification system. Its importance is par-
ticularly emphasized by the ISO 9000 to 9004 (and EN 29000 to 29004, respectively) standards. These
standards demand complete and regular inspection of all measurement and test tools in operation.
QMSOFT gauge calibration covers the actual measurement and testing, the comparison of results with
standardized nominal values (nominal-to-actual comparison) and a variety of management activities for
maintaining the gauge data stock.

The QMSOFT system is a modern, modular software package for measuring, storing, and document-
ing standard test instruments such as gauges, plug gauges, dial indicators, or snap gauges. Computer-aided
gauge calibration is efficient only if all three of the necessary steps are at least in part controlled by the
PC. QMSOFT includes a variety of matched routines (QMSOFT modules) that may be used for practical
gauge calibration tasks and cover the above-mentioned steps (measurement, tolerances, management).
These routines ideally supplement the length measuring, gauge block and dial-indicator testing
instruments used for this purpose. (See Fig. 2.9, Plate 2.)

QMSOFT’s Special Features


• PC-based management (storage, history, evaluation) of any inspection tools contained in a gauge
database; possibility to manage several independent data inventories
• Automatic nominal value generation (tolerance calculations) for the most common types of inspec-
tion tools (e.g., plain gauges, threaded gauges, etc.) according to numerous national and international
standards
• Comprehensive menu support for conducting standardized measurements and direct connection
between the measuring instrument and the PC
• Integrated management and measurement, i.e., direct storage of the test results in the gauge database
• Strict modular approach offering high accuracy.

f. Calibration Standards Used at Mahr Gmbh


1. Roundness Standard High-accuracy sphere for checking spindle radial run-out. Approx.
dia. = 50 mm (1.97 in); roundness error = 0.04 µm (1.57 µin); approx. mass = 1.8 kg (3.97 lb).

2. Optical Flat Dia. = 150 mm (5.91 in); for checking and aligning horizontal X-axis, flatness er-
ror = 0.2 µm (7.87 µin). Approx. mass = 2 kg (4.41 lb).

3. Universal Cylindrical Square High-accuracy cylinder with two surfaces for dynamic probe
calibration. Dia. = 20 mm (0.787 in); length = 150 mm (5.91 in).

4. Cylindrical Squares for Checking and Aligning Spindle Axis Parallel to the Col-
umn Dia. = 80 mm (3.15 in); length = 250 mm (9.84 in); max. cylindricity error = 1 µm (39.37 µin);
approx. mass = 11.5 kg (25.35 lb).

5. Cylindrical Squares for Checking and Aligning Spindle Axis Parallel to the Col-
umn Dia. = 100 mm (3.94 in); length = 360 mm (14.17 in); max. cylindricity error = 1 µm (39.37 µin);
approx. mass = 13 kg (28.66 lb). (Accessories for calibration are shown in Fig. 2.10, Plate 2.)

g. Work-Holding Fixtures
1. Rim Chuck with 6 Jaws Dia. = 70 mm (2.76 in); includes 124-mm dia. (4.88 in) mount-
ing flange and reversible jaws for external and internal chucking. External range = 1 mm to 73 mm

(.0394 in to 2.87 in); internal range = 16 mm


to 62 mm (.63 in to 2.44 in). Total height in-
cluding flange = 42 mm (1.65 in); approx.
mass = 1.7 kg (3.75 lb).

2. Rim Chuck with 6 Jaws Dia. =


100 mm (3.94 in); includes 164-mm dia. (6.46 in)
mounting flange and reversible jaws for exter-
nal and internal chucking. External range =
1 mm to 99 mm (.0394 in to 3.9 in); internal range
= 29 mm to 95 mm (1.14 in to 3.74 in). Total
height including flange = 47 mm (1.85 in); approx. mass = 3.2 kg (7.05 lb).

Fig. 2.11 Work-holding fixtures

3. Rim Chuck with 8 Jaws Dia. = 150 mm (5.91 in); includes 198-mm dia. (7.80 in) mounting
flange and separate sets of jaws for external and internal chucking. External range = 1 mm to 152 mm
(.0394 in to 5.98 in); internal range = 24 mm to 155 mm (.945 in to 6.10 in). Total height including
flange = 52 mm (2.05 in); mass approx. 6.1 kg (13.45 lb).

4. Three-Jaw Chuck Dia. 110 mm (4.33 in); includes 164-mm dia. (6.46 in) mounting flange.
External chucking range = 3 mm to 100 mm (.118 in to 3.94 in); internal range = 27 mm to 100 mm
(1.06 in to 3.94 in). Total height including flange = 73 mm (2.87 in); approx. mass = 3 kg (6.61 lb).

5. Quick-Action Clamping Device 1 mm to 12 mm Dia. (.0394 in to .47 in) with 124-mm


dia. (4.88 in) mounting flange, for external chucking. Includes chucks with dia. from 1 mm to 8 mm
(.0394 in to .315 in) graded by 0.5 mm (.0197 in). Total height 80 mm (3.15 in); approx. mass = 2.2 kg
(4.85 lb).

h. Set of Clamping Disks These are adjustable devices for pre-centering and clamping a
workpiece for series measurements and are suitable for workpiece diameters ranging from 36 mm to
232 mm (1.42 in to 9.13 in), depending on the machine type. The set includes two fixed disks with an
elongated hole and one eccentric locking disk with an approximate mass of 0.4 kg (.88 lb).
For technical and legal reasons, the measuring instruments used in the production process must
display ‘correct’ measuring results. In order to guarantee absolute accuracy, they must be calibrated at
regular intervals and must be traceable to national standards. Paragraph 4.11 of the quality standards
of DIN EN ISO 9000 states that the supplier shall identify all inspection, measuring and test equipment which
can affect product quality, and calibrate and adjust them at prescribed intervals, or prior to use, against certified equip-
ment having a known valid relationship to internationally or nationally recognized standards.
The Mahr Calibration Service provides and guarantees this traceability through the operation of the
Calibration Laboratories DKD-K-05401 and DKD-K-06401, accredited by the Physikalisch-Technische
Bundesanstalt (PTB) for linear measurement.

Review Questions

1. What are standards of measurements? Explain the classification of various standards.


2. Explain the terms: a) Metre b) Yard c) Wringing of slip gauges d) Calibration.
3. Write short notes on
a) Line standard b) End standard c) Grades of slip gauges
4. Explain the wringing of slip gauges.
5. Explain the need and standard procedure for calibration.
6. Explain what you mean by subdivision of standards.
7. Explain the optical definition of ‘inch’.
8. State the cross-section and the materials from which the following length standards are made:
(a) Imperial standard yard (b) International prototype metre (c) Wavelength standard;
To which category do these standards belong?
9. Define ‘metre’ in optical terms.
10. Distinguish between primary, secondary and working standards.
11. Explain slip gauges as an end standard by stating their advantages.
12. Distinguish between ‘line standards’ and ‘end standards’. How are the end standards derived from
line standards?
13. Describe the standard procedure of calibrating a metrological instrument.
14. Explain the procedure of wringing of slip gauges.
3 Linear Metrology

Every day starts with length metrology…


Dr M S Pawar, Prof and Vice-Principal, BMIT, Solapur

MEASURING LENGTH

Measuring length is fundamental to our everyday life, and there are many tools in use to measure
length—tape measures, odometers, rulers, ultrasonic sensors, GPS systems, etc. The three tools for
precision length measurement, viz., a precision rule (ruler), a vernier caliper, and a micrometer
caliper, govern length metrology and form the base for further study of metrology. These instruments
offer varying degrees of precision (and accuracy), and working with them also gives insight into
making and reporting measurements and calculations with the correct precision (significant digits).

Efforts have been channelized to develop instruments which can give more and more precise
measurements and cover a wide range of application areas. Two types of instruments are used:
1) instruments which give absolute measurement values, e.g., scale, vernier caliper, micrometer; and
2) instruments which give comparative measurement values, e.g., calipers, dividers.

The instruments which provide comparative measurements cannot be used in isolation. Instead, these
instruments require to be mounted on some adjustable stand or holder, after which they are set to
some standard dimension using a standard component. Instruments with a coarser least count generally
possess a larger measuring range, and instruments with a finer least count generally possess a
smaller measuring range.

The steel rule is more commonly known as a ruler. Most have scales marked off in inches and in
centimetres (or millimetres). The object whose dimension you are measuring should be as close to the
scale as possible, and your eye should be directly over the scale when you read it. These two things
will help minimize parallax error due to the line of sight between your eye, the scale, and the object.

The vernier caliper is an advancement over the steel rule in that it uses a sliding vernier scale to
aid in estimating the last digit. The micrometer caliper (micrometer) is an extension of the vernier
caliper, in that it uses a threaded screw rather than a sliding scale to position the scale. This
allows the scale to be placed more precisely and, consequently, the micrometer can be read to a
higher precision.

Length metrology is the measuring hub of metrological instruments, and sincere efforts must be made
to understand the operating principles of the instruments used for various applications.
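The vernier principle mentioned in this overview can be put into numbers: the reading is the main-scale value just before the vernier zero plus the coinciding vernier division times the least count. A minimal sketch (a least count of 0.02 mm is assumed for illustration):

```python
def vernier_reading(main_scale_mm, coinciding_division, least_count_mm=0.02):
    """Combine the main-scale and vernier-scale contributions into one reading."""
    return main_scale_mm + coinciding_division * least_count_mm

# Main scale shows 24 mm and the 7th vernier line coincides (LC = 0.02 mm):
print(round(vernier_reading(24.0, 7), 2))  # 24.14
```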

3.1 INTRODUCTION

Length is the most commonly used category of measurement in the world. In ancient days, length
measurement was based on different human body parts, such as the nail, digit, palm, handspan and
pace, used as reference units, with multiples of these forming bigger length units.
Linear Metrology is defined as the science of linear measurement, for the determination of the dis-
tance between two points in a straight line. Linear measurement is applicable to all external and internal
measurements such as distance, length and height-difference, diameter, thickness and wall thickness,
straightness, squareness, taper, axial and radial run-out, coaxiality and concentricity, and mating mea-
surements covering all range of metrology work on a shop floor. The principle of linear measurement
is to compare the dimensions to be measured and aligned with standard dimensions marked on the
measuring instruments. Linear measuring instruments are designed either for line measurements or end
measurements, as discussed in the previous chapter.
Linear metrology follows two approaches:

1. Two-Point Measuring-Contact-Member Approach Out of two measuring contact


members, one is fixed while the other is movable and is generally mounted on the measuring spindle of
an instrument, e.g., vernier caliper or micrometer for measuring distance.

2. Three-Point Measuring-Contact-Member Approach Out of three measuring


contact members, two are fixed and the third is movable, e.g., to measure the diameter of a bar
held in a V-block, the V-block provides two fixed contact points, and the third, movable contact
point is that of a dial gauge.
The instruments used in length metrology are generally classified into two types:

i. Non-precision measuring instruments, e.g., steel rule


ii. Precision measuring instruments, e.g., vernier calipers, micrometer

In our day-to-day life, we see almost all products made up of different components. The modern
products involve a great deal of complexity in production and such complex products have interchange-
able parts to fit in another component. The various parts are assembled to make a final end product,
which involves accurate inspection. If there are thousands of such parts to be measured, the instruments
will require to be used thousands of times. The instruments in such a case require retaining their accuracy

of measurement during inspection. The precision measuring instruments have a high degree of repeat-
ability in the measuring process. If an instrument can resolve dimensions finer than 0.25 mm, it is
said to be a precision instrument, and the error produced by such an instrument must not be more than
0.0025 mm for any measured dimension.

3.2 STEEL RULE (SCALE)

It is the simplest and most commonly used linear measuring instrument. It is the part replica of the
international prototype metre shown in Fig. 3.1 (a). It measures an unknown length by comparing
it with the one previously calibrated. Steel rules are marked with a graduated scale whose smallest
intervals are one millimetre. To increase its versatility in measurement, certain scales are marked with
0.5 millimetres in some portion. Some steel rules carry graduation in centimetres on one side and
inches on the other side. In a workshop, scales are used to measure dimensions of components of
limited accuracy.

Fig. 3.1(a) Steel rule

The graduation marks on a rule vary in width from 0.12 mm to 0.18 mm, so a degree of accuracy much
finer than the mark width cannot be obtained. Steel rules are manufactured in different sizes and styles and can be
made in folded form for keeping in a pocket. The steel rules can be attached with an adjustable shoul-
der to make them suitable for depth measurement. These are available in lengths of 150, 300, 600
or 1000 mm. In case of direct measurement, a scale can be used to compare the length of a workpiece
directly with a graduated scale of the measuring rule while in indirect measurement, intermediate devices
such as outside or inside calipers are used to measure the dimension in conjunction with a scale.
Steel rules of contractor grade have an anodized profile with minimum thickness and wear-resistant,
ultraviolet-cured screen printing. A steel rule should be made of good-quality spring steel and should
be chrome-plated to prevent corrosion. A steel rule is made to high standards of precision and should
be carefully used to prevent damage of its edges from wear, as it generally forms a basis for one end of
the dimension. Scales should not be used for cleaning and removing swarf from machine-table slots.
The graduations should be kept sharp and clean by using grease-dissolving fluids.
One of the problems associated with the use of a rule is parallax error. It results when the observer
making the measurement is not in line with the workpiece and the rule. To avoid parallax error while
making measurements, the eye should be directly opposite and 90° to the mark on the part to be mea-
sured. To get an accurate reading of a dimension, the rule should be held in such a way that the gradu-
ation lines are perfectly touching or as close as possible to the faces being measured.
The battery-operated digital scale shown in Fig. 3.1 (b) is especially used to measure the travels of
machines, e.g., upright drilling and milling machines. It has a maximum measuring speed of 1.5 m/s and
is equipped with a high-contrast 6-mm liquid-crystal display.

Fig. 3.1(b) Digital scale


(Courtesy, Mahr Gmbh Esslingen)

3.3 CALIPERS

A caliper is an end-standard measuring instrument to measure the distance between two points. Calipers
typically use a precise slide movement for inside, outside, depth or step measurements. Specialized
slide-type calipers are available for centre, depth and gear-tooth measurement. Some caliper types such
as spring/fay or firm-joint calipers do not usually have a graduated scale or display and are only used for
comparing or transferring dimensions as secondary measuring instruments for indirect measurements.
The caliper consists of two legs hinged at the top, with the ends of the legs spanning the part to be mea-
sured. The legs of a caliper are made from alloy steels and are identical in shape, with the contact points
equidistant from the fulcrum. The measuring ends are suitably hardened and tempered. The accuracy
of measurement using calipers depends on the sense of feel that can only be acquired by experience.
Calipers should be held gently near the joint and square to the work by applying light gauging pressure
to avoid disturbance during setting for accurate measurement.

3.3.1 Types of Calipers


Inside calipers are made with straight legs, which are bent outwards at the ends, and are used for
measuring hole diameters, distances between shoulders, etc. The opening of an inside caliper can be checked
by a rule or micrometer.
Outside calipers have two legs which are bent inward and are used for measuring and comparing
diameters, thicknesses and other outside dimensions by transferring the readings to a steel rule, microm-
eter or vernier caliper. It can be adjusted by tapping one leg or by adjusting the screw to straddle the work
by its legs as shown in Fig. 3.2.
Spring calipers are an improved variety of ordinary friction-joint calipers. The two legs carry a curved
spring (made from suitable steel alloy) at the top, fitted in notches used to force the spring apart. The
distance between them can be adjusted by applying pressure against the spring pressure by tightening the
nut. Inside and outside calipers are available in sizes of 75, 100, 150, 200, 250 and 300 mm.
A centre-measuring caliper has conically pointed jaws designed to measure the distance between the
centres of two holes. A gear-tooth caliper has an adjustable tongue designed to measure the thickness
of gear teeth at the pitch line. The adjustable tongue sets the measurement depth at the pitch line or
addendum. Machine travel calipers are designed to measure the travel or position changes of a machine

Fig. 3.2 Types of calipers (outside, inside, transfer, spring and firm-joint; labelled parts: spring, nut, screw, legs)

bed, table, or stage. These gauges are typically mounted on a machine or are built into a product includ-
ing machine tools, microscopes, and other instruments requiring precision dimensional measurement
or position control. Nib-shaped jaws facilitate measurement of inside features (ID), outside features
(OD), grooves, slots, keyways or notches. Compared to the blade edge typically found on standard
calipers, the nib is more easily and accurately located on an edge or groove. Small, pocket-sized calipers
are usually designed for low-precision gauging applications. Rolling-mill calipers are usually simple rugged
devices for quick gauging of stock in production environments. Sliding calipers use a precise slide move-
ment for inside, outside, depth or step measurements. While calipers do not typically provide the preci-
sion of micrometers, they provide a versatile and broad range of measurement capability: inside (ID),
outside (OD), depth, step, thickness and length. Spring, fay, firm-joint or other radially opening-type
calipers have jaws that swing open with a scissor or plier-type action. These calipers are commercially
available in non-graduated versions.
Measurement units for calipers can be either English or metric. Some calipers are configured to
measure both. The display on a caliper can be non-graduated (meaning that the caliper has no display),
a dial or analog display, digital display, column or bargraph display, remote display, graduated scale display
Linear Metrology 51

or vernier scale display. Important specifications for calipers include the range and the graduation or
resolution. The range covers the total range of length or dimension that the caliper can measure. The
graduation or resolution is the best or minimum graduations for scaled or dial-indicating instruments.
Common features of calipers include depth attachments or gauges and marking capabilities. A depth
attachment is a gauge specialized for depth measurements usually consisting of a solid base with a
protruding rod or slide. The solid depth base provides a reference and support across the opening.
Marking capabilities include gauges that accommodate a scribe or other device for accurately marking
a component at a specific measurement along a particular dimension.

3.4 VERNIER CALIPER

The vernier caliper (invented by the Frenchman Pierre Vernier) is a measuring tool used for finding or transfer-
ring measurements (internal or external). Internal calipers are used to check the inside diameters of pipes
and of bowls being turned on a lathe. External calipers are used to determine the diameter of a round pipe
or a turned spindle. A vernier caliper is a combination of inside and outside calipers and has two sets of
jaws; one jaw (with a depth gauge) slides along a rule. With a rule, measurements can be made to the near-
est 1/64th or 1/100th in., but often this is not sufficiently accurate. The vernier caliper is a measuring tool
based on a rule but with much greater discrimination. Pierre Vernier devised the principle of the vernier
for precise measurement in 1631, having found that the human eye cannot discern the exact distance between
two lines, but can tell when two lines coincide so as to form one straight line. Based on this observation, he
developed the principle of the vernier caliper, which states that two scales whose divisions are nearly, but not
exactly, alike can be used together to resolve a small difference. It enhances the accuracy of a measurement.
The first instrument developed following Vernier’s principle was the sliding caliper, as shown in Fig. 3.3.
Steel and brass were used for the production of a sliding caliper manufactured in 1868. It included scales
for the Württemberg inch, the Rhenish inch, the Viennese inch and the millimetre, already used in France.
The vernier caliper essentially consists of two steel rules which can slide along each other. A solid
L-shaped frame (beam) is engraved with the main scale. This is also called the true scale, as each millimetre

Fig. 3.3 Sliding caliper


(Mahr Gmbh Esslingen)

marking is exactly 1 millimetre apart. The beam and fixed measuring jaw are at 90° to each other. If
centimetre graduations are available on the line scale, then each centimetre is divided into 20 parts so that
one small division equals 0.05 cm. On the movable measuring jaw, the vernier scale is engraved, and it slides on the
beam. The function of the vernier scale is to subdivide minor divisions on the beam scale into the small-
est increments that the vernier instrument is capable of measuring. Most of the longer vernier calipers
have a fine adjustment clamp roll [Fig 3.7 (b)] for precise adjustment of the movable jaw. The datum of
measurement can be made to coincide precisely with one of the boundaries of distance to be measured.
A locking screw makes the final adjustment depending on the sense of correct feel. The movable jaw
achieves a positive contact with the object boundary at the opposite end of the distance to be measured.
The measuring blades are designed to measure inside as well as outside dimensions. The depth bar is an
additional feature of the vernier caliper to measure the depth.

Fig. 3.4 Vernier caliper, showing the beam, slide, vernier scale, line scale (main scale), fixed and movable measuring jaws, measuring blades for inside measurement, locking screw and depth bar (Mahr Gmbh Esslingen)

The vernier and main scale is polished with satin-chrome finish for glare-free reading. The slide and
beam are made of hardened steel with raised sliding surfaces for the protection of the scale. The mea-
suring faces are hardened and ground. IS: 3651 –1974 specifies three types of vernier calipers generally
used to meet various needs of external and internal measurement up to 2000 mm with an accuracy of
0.02, 0.05 and 0.1 mm. The recommended measuring ranges are 0–125, 0–200, 0–300, 0–500, 0–750,
0–1000, 750–1500 and 750–2000 mm. The beam for all types and ranges of vernier calipers is made flat
throughout its length. The nominal lengths and their corresponding tolerances are given below.
Beam guiding surfaces are made straight within 10 microns for a measuring range of 200 mm and 10
microns for every next 200 mm recommended in measuring ranges of larger sizes.

Table 3.1 Recommended tolerances for nominal length of vernier calipers

Nominal Length (mm) Recommended Tolerances (microns)


0–300 50
900–1000 80
1500 and 2000 150

3.4.1 Instructions on Use


i. The vernier caliper is an extremely precise measuring instrument.
ii. Close the jaws lightly on the object to be measured.
iii. If you are measuring something with a round cross section, make sure that the axis of the object
is perpendicular to the caliper. This is necessary to ensure that you are measuring the full diam-
eter and not merely a chord.
iv. Ignore the top scale, which is calibrated in inches.
v. Use the bottom scale, which is in metric units.
vi. Notice that there is a fixed scale and a sliding scale.
vii. The boldface numbers on the fixed scale are in centimetres.
viii. The tick marks on the fixed scale between the boldface numbers are in millimetres.
ix. There are ten tick marks on the sliding scale. The leftmost tick mark on the sliding scale will let
you read from the fixed scale the number of whole millimetres for which the jaws are opened.
x. In Fig. 3.5, the leftmost tick mark on the main scale is between 21 mm and 22 mm, so the number
of whole millimetres is 21.

Fig. 3.5 Scale comparison (main scale and vernier scale, showing the alignment of divisions)

xi. Examine the vernier scale to determine which of its divisions coincide or are most coincident with
a division on the main scale. The number of these divisions is added to the main scale reading.
xii. In Fig. 3.5, the third tick mark on the sliding scale is in coincidence with the one above it.

Least count = (Smallest division on main scale) / (Total no. of divisions on vernier scale)
            = 1 mm / 10 = 0.1 mm

Table 3.2 Measuring the total reading by vernier caliper

Sl. No. Main Scale Reading Vernier Scale Reading Total Reading (mm)
(MSR) (mm) (VSR) C (mm) = LC × VSR = MSR + C
1. 21 3 0.3 21.30
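The least-count and total-reading arithmetic above can be sketched as a short calculation (a minimal illustration; the function names are our own, not from the text):

```python
def least_count(smallest_main_div_mm, vernier_divisions):
    """Least count = smallest main-scale division / number of vernier divisions."""
    return smallest_main_div_mm / vernier_divisions

def total_reading(msr_mm, vsr_divisions, lc_mm):
    """Total reading = main scale reading + (least count x coinciding vernier division)."""
    return msr_mm + lc_mm * vsr_divisions

lc = least_count(1.0, 10)                   # 0.1 mm, as computed in the text
print(round(total_reading(21, 3, lc), 2))   # 21.3 mm, matching Table 3.2
```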

xiii. The error in reading the vernier scale with a least count of 0.1 mm, 0.05 mm, 0.02 mm should
not exceed the value obtained by ±(75 + 0.05 UL ) microns, ±(50 + 0.05 UL ) microns, ±(20 +
0.02 UL ) microns respectively, where UL is the upper limit of the measuring range in mm.
xiv. In this case, UL = 200 mm. Therefore, the error in reading is 85 microns (0.085 mm). Total reading
is (21.30 ± 0.085) mm.
xv. If two adjacent tick marks on the sliding scale look equally aligned with their counterparts on
the fixed scale then the reading is halfway between the two marks. In Fig. 3.5, if the third and
fourth tick marks on the sliding scale looked to be equally aligned then the reading would be
(21.35 ± 0.05) mm.
xvi. On those rare occasions when the reading just happens to be a ‘nice’ number like 2 cm, don’t
forget to include the zero decimal places showing the precision of the measurement and the
reading error. So the reading is not 2 cm, but rather (2.000 ± 0.005) cm or (20.00 ± 0.05) mm.

The digital vernier-caliper version [Fig. 3.7 (a)] has special features such as LCD display, on/off
and reset adjustment with storage of measuring values and data-transmission capabilities. Plastic
is good for artifacts since it reduces the chance of scratching. The plastic types are inexpensive.
In case of a vernier caliper having a circular scale (dial caliper), read the dial or the scale for reading
the measured value. It has dial graduations of 0.02 mm, and one-hand use with thumb-operated fine
adjustment clamp roll is possible. The arrangement of lock screw for dial bezel and sliding jaw is also
provided. Figure 3.8 shows the applications of vernier calipers.

Fig. 3.6 Illustration of measurement using vernier caliper (reading = 7.5 mm)

Possible Errors and Precautions to be taken into Account while Using a Vernier
Caliper The errors occurring in the vernier instrument are mainly due to manipulation or mishan-
dling of its jaws on the workpiece. Some of the causes may be play between the sliding jaw and the
scale, and wear and warping of the jaws. Due to this, the zero line on the main scale may not coincide with
the zero on the vernier scale, which is referred to as zero error. Incorrect readings of the vernier scale
may result from parallax error or difficulty in reading the graduated marks. Owing to its size and
weight, getting a correct feel is difficult.
Care should be taken to minimize the error involved in correctly coinciding the line of measurement
with the line of the scale, and the plane measuring tips of the caliper must be perpendicular to

Fig. 3.7 Types of vernier scales: (a) digital display, (b) circular scale (with lock screw and fine adjustment clamp roll), (c) graduated scale, (d) round measuring faces; the digital-display function buttons provide mm/inch switching, zero setting, reversal of counting direction, DATA (data transmission), PRESET (entering numerical values) and ON/OFF (Mahr Gmbh Esslingen)

Fig. 3.8 Applications of vernier caliper: (a) outside measurement, (b) inside measurement, (c) depth measurement, (d) depth (distance) measurement, (e) height measurement and transfer (with magnetic base), (f) specially designed anvils for inside and outside measurements

the central line of the workpiece. Grip the instrument near or opposite to the jaws and not by the
overhanging projected main bar of the caliper. Without applying much pressure, move the caliper
jaws on the work with a light touch. To correctly measure the reading, know the exact procedure of
measurement.

3.5 VERNIER HEIGHT GAUGE

This is one of the most useful and versatile instruments used in linear metrology for measuring, inspect-
ing and transferring the height dimension over plane, step and curved surfaces. It follows the principle
of a vernier caliper and also follows the same procedure for linear measurement. It is equipped with a
wear-resistant special base block in which a graduated bar is held in the vertical position.
The vernier height gauge as shown in Fig. 3.10 (a) consists of a vertical graduated beam or column
on which the main scale is engraved. The vernier scale can move up and down over the beam. The bracket

Fig. 3.9 Vernier height gauge, showing the vertical bar with main scale, screw for adjusting zero error, vernier scale, fine adjustment screw, clamping screw, magnifying glass, bracket, clamp, scriber and sturdy base

carries the vernier scale which slides vertically to match the main scale. The bracket also carries a rect-
angular clamp used for clamping a scriber blade. The whole arrangement is designed and assembled in
such a way that when the tip of the scriber blade rests on the surface plate, the zero of the main scale
and vernier scale coincides. The scriber tip is used to scribe horizontal lines for preset height dimen-
sions. The scriber blade can be inverted with its face pointing upwards which enables determination of
heights at inverted faces. The entire height gauge can be transferred on the surface plate by sliding its
base. The height gauges can also be provided with dial gauges instead of a vernier, which makes reading
of bracket movement by dial gauges easy and exact.
The electronic digital vernier height gauge shown in Fig. 3.10(b) provides an immediate digital
readout of the measured value. It is possible to store a standard value in its memory, which can
be used as a datum for further readings, or for comparison with given tolerances. Digital presetting is
also possible, in which reference dimensions can be entered digitally and are automatically allowed for during each

Fig. 3.10 Vernier height gauge: (a) and (b), showing the hand crank, scriber points, and cast-iron and granite bases (Mahr Gmbh Esslingen)

measurement. Via a serial interface, the measured data can be transmitted to an A4 printer or computer
for evaluation. Fine setting is provided to facilitate the setting of the measuring head to the desired
dimensions especially for scribing jobs enabling zero setting at any position. By means of a hand crank
on the measuring head with a predetermined measuring force, the measuring head is balanced by a
counterweight inside the column, which can be locked at any position for scribing purpose, making the
instrument suitable for easy operation. (See Fig. 3.11, Plate 3.)

3.6 VERNIER DEPTH GAUGE

A vernier depth gauge is used to measure depths and distances from a plane surface to a projection, recess,
slot or step. The basic parts of a vernier depth gauge are the base (or anvil) and the beam on which the
scale is calibrated, along with the fine adjustment screw. To make accurate measurements, the reference surface
must be flat and free from swarf and burrs. When the beam is brought into contact with the surface being
measured, the base is held firmly against the reference surface. The measuring pressure exerted should
be equivalent to the pressure exerted when making a light dot on a piece of paper with a pencil. The
reading procedure for this instrument is the same as that of a vernier caliper.
The vernier and main scale have a satin-chrome finish for glare-free reading, with a reversible beam
and slide. The beam is made of hardened stainless steel, while the sliding surface is raised for protec-
tion of the scale. A battery-operated digital version is also available, with a high-contrast 6-mm
liquid crystal display and a maximum measuring speed of 1.5 m/s.

Fig. 3.12(a) Vernier depth gauge, showing the beam, vernier scale and fine adjustment screw (Mahr Gmbh Esslingen)

Fig. 3.12(b) Vernier-depth-gauge applications: depth measurement, distance measurement of slots and distance measurement of steps (Mahr Gmbh Esslingen)

3.7 MICROMETERS

Next to calipers, micrometers are the most frequently used hand-measuring instruments in linear metrol-
ogy. Micrometers have greater accuracy than vernier calipers and are used in most of the engineering pre-
cision work involving interchangeability of component parts. Micrometers having accuracy of 0.01 mm
are generally available but micrometers with an accuracy of 0.001 mm are also available. Micrometers are
used to measure small or fine measurements of length, width, thickness and diameter of a job.

Principle of Micrometer A micrometer is based on the principle of screw and nut. When a
screw is turned through one revolution, the nut advances by one pitch distance, i.e., one rotation of the
screw corresponds to a linear movement of a distance equal to the pitch of the thread. If the circum-
ference of the screw is divided into n equal parts, then rotation through one division will cause the nut to
advance through a length of pitch/n. The minimum length that can be measured in such a case will be
pitch/n, and by increasing the number of divisions on the circumference, the accuracy of the instrument
can be increased considerably. If the screw has a pitch of 0.5 mm then after every rotation, the spindle
travels axially by 0.5 mm and if the conical end of the thimble is divided by 50 divisions, the rotation of
the thimble of one division on the micrometer scale will cause the axial movement of the screw equal to
0.5/50 mm = 0.01 mm, which is the least count of the micrometer and is given by the formula

Least count = (Smallest division on main scale) / (Total no. of divisions on thimble (circular scale))
            = 0.5 mm / 50 = 0.01 mm
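The screw-and-nut relation can be checked numerically; a minimal sketch (the function name is our own):

```python
def micrometer_least_count(pitch_mm, thimble_divisions):
    """One thimble division advances the spindle by pitch / n."""
    return pitch_mm / thimble_divisions

print(micrometer_least_count(0.5, 50))  # 0.01 mm, as derived above
```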

Micrometers are classified into the following types:


1. Outside micrometer 2. Inside micrometer 3. Depth-gauge micrometer

3.7.1 Outside Micrometer

Figure 3.13 illustrates the design features of an outside (external) micrometer. It is used to measure
the outside diameter, length and thickness of small parts. Outside micrometers having an accuracy of
0.01 mm are generally used in precision engineering applications.

Fig. 3.13 Outside micrometer with a measuring range of 0–25 mm and accuracy of 0.01 mm, showing the frame, anvil, spindle, carbide-tipped measuring faces, locking device, scaled barrel with reference lines, thimble, ratchet and heat-insulated handle (Mahr Gmbh Esslingen)

The main parts of outside micrometers are the following:

1. U-shaped or C-shaped Frame The micrometer consists of a U- or C-shaped rigid frame,


which holds all parts of the micrometer together. The gap of the frame decides the maximum diameter or
length of the job to be measured. The frame is generally made of steel, cast steel and other light alloys with
satin-chromed finish to allow glare-free reading. A heat-insulating handle provides ease of finger gripping.

2. Carbide-Tipped Measuring Faces—Anvil and Spindle The micrometer has a fixed


anvil and it is located at 3.5 mm from the left-hand side of the frame. The diameter of the anvil is the
same as that of the spindle with exact alignment of their axes. The anvil is accurately ground and lapped
with its measuring face flat and parallel to the measuring face of the spindle. The carbide-tipped anvil
guarantees extreme precision and ensures a long lifetime of the instrument. The anvil is rigidly fixed to
the left end of the frame and, like the spindle, is made of hardened steel. The spindle is the mov-
able measuring face, facing the anvil on the front side, and is engaged with the nut. The spindle should
run freely and smoothly throughout its length of travel. There should not be any backlash (lost motion
of the spindle when the direction of rotation of the thimble is changed) between the screw and nut and
at the time of full reading, full engagement of the nut and screw must be possible.
When the spindle face is touched with the anvil face, the zero of the micrometer must match with the
reference line on the main scale and the thimble is required to be set at zero division on the main scale.
If this condition is not satisfied, the corresponding reading gives the value of the zero error present in
the instrument. To compensate for the zero error, there is a provision to revolve the barrel
slightly around its axis. The measuring range is the total travel of the spindle for a given micrometer.

3. Locking Device A locking device is provided on a micrometer spindle to lock it in exact posi-
tion. This enables correct reading without altering the distance between the two measuring faces, thus
retaining the spindle in perfect alignment.

4. Barrel A barrel has fixed engraved graduation marks on it and is provided with satin-chromium
finish for glare-free reading. The graduations are above and below the reference line. The upper gradu-
ations are of 1-mm interval and are generally numbered in multiples of five as 0, 5, 10, 15, 20 and 25.
The lower graduations are also at 1-mm interval but are placed at the middle of two successive upper
graduations to enable the reading of 0.5 mm.

Fig. 3.14 Graduations marked on barrel and thimble (main scale reading, vernier scale reading and reference line; reading = 5.00 mm)

5. Thimble It is a tubular cover fastened and integrated with the screwed spindle (Fig. 3.14). When
the thimble is rotated, the spindle moves in a forward or reverse axial direction, depending upon the
direction of rotation. The conical edge of the thimble is divided into 50 equal parts, as shown in Fig.
3.14. Multiples of 5 and 10 are engraved on it, and the thickness of the graduations is between
0.15 and 0.20 mm.

6. Ratchet A ratchet is provided at the end of the thimble. It controls the pressure applied on
the workpiece for accurate measurement and thereby avoids the excessive pressure being applied
to the micrometer, thus maintaining the standard conditions of measurement. It is a small extension
of the thimble. When the spindle reaches near the work surface which is to be measured, the operator
uses the ratchet screw to tighten the thimble. The ratchet gives a clicking sound when the workpiece
is correctly held and slips, thereafter preventing damage of the spindle tips. This arrangement is very
important as a variation of finger efforts can create a difference of 0.04 to 0.05 mm of the measured
readings.
Micrometers are available in various sizes and ranges as shown in Table 3.3.

3.7.2 Instructions for Use


• The micrometer is an extremely precise measuring instrument; the reading error is 4 microns
when used for the range of 0–25 mm.
• Use the ratchet knob (at the far right in the figure) to close the jaws lightly on the object
to be measured. It is not a C-clamp! When the ratchet clicks, the jaws are closed sufficiently.
• The tick marks along the fixed barrel of the micrometer represent halves of millimetres.

Table 3.3 Measuring range of micrometers

Measuring Range    Least Count    Limits of Error (DIN 863)    Pitch of Spindle Thread
0–25 mm 0.01 mm 4 µm 0.5 mm
25–50 mm 0.01 mm 4 µm 0.5 mm
50–75 mm 0.01 mm 5 µm 0.5 mm
75–100 mm 0.01 mm 5 µm 0.5 mm
100–125 mm 0.01 mm 6 µm 0.5 mm
125–150 mm 0.01 mm 6 µm 0.5 mm
150–175 mm 0.01 mm 7 µm 0.5 mm
175–200 mm 0.01 mm 7 µm 0.5 mm

Fig. 3.15 Micrometer, measuring range 0–25 mm (Mahr Gmbh Esslingen)

1–Spindle with tungsten carbide, 2–Body support, 3–Push-on sleeve, 4–Space to accommodate
the object under measurement, 5–Thimble, 6–Conical-setting nut, 7–Anvil with tungsten carbide,
8–Sealing disk, 9–Clamping cone, 10–Ratchet stop, 11–Clamping lever, 12–Clamping screw,
13–Clamping piece, 14–Raised cheese-head screw, 15–Curved spring washer

• Every revolution of the knob will expose another tick mark on the barrel, and the jaws will open
another half millimetre.
• Note that there are 50 tick marks wrapped around the moving barrel (thimble) of the micrometer. Each
of these tick marks represents 0.01 mm (one-fiftieth of the 0.5-mm pitch; a total of 50 divisions are
engraved). Note the reading as per the observation table given below (Table 3.4).

Table 3.4 Observation table of measurement by micrometer

Sl. No.    Main Scale Reading (MSR)    Vernier (Circular) Scale Reading (VSR)    C = L.C. × V.S.R.    Total Reading = MSR + C
1.         2.5 mm                      12                                        0.12 mm              2.62 mm

• The total reading for this micrometer will be (2.62 ± 0.004) mm, where 4 microns is the error of
the instrument.
• The micrometer may not be calibrated to read exactly zero when the jaws are completely closed.
Compensate for this by closing the jaws with the ratchet knob until it clicks. Then read the
micrometer and subtract this offset from all measurements taken. (The offset can be positive or
negative.)
• On those rare occasions when the reading just happens to be a ‘nice’ number like 2 mm, don’t
forget to include the zero decimal places showing the precision of the measurement and
the reading error. So the reading should be recorded as not just 2 mm, but rather (2.000 ±
0.004) mm.
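The reading procedure, including the zero-offset compensation described above, can be sketched as follows (the names are illustrative only):

```python
def micrometer_reading(msr_mm, thimble_divisions, zero_offset_mm=0.0, lc_mm=0.01):
    """Total reading = MSR + thimble divisions x least count, minus any zero offset.

    The zero offset is what the closed-jaw micrometer reads; it may be
    positive or negative, as noted in the text.
    """
    return msr_mm + thimble_divisions * lc_mm - zero_offset_mm

print(round(micrometer_reading(2.5, 12), 2))  # 2.62 mm, as in Table 3.4
```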

Fig. 3.16 Micrometer measuring 5.86 mm (main scale reading 5.5 mm + thimble reading 0.36 mm)

Figure 3.17 shows micrometers with different types of indicators.

Fig. 3.17 Micrometer with digital display; function buttons include DATA (data transmission), PRESET (entering any numerical measurement value), 0 (returns display to 0.000 for comparison), ABS (returns display to the measuring position referenced to the previous PRESET value) and mm/inch switching (Mahr Gmbh Esslingen)

Fig. 3.18 Limits of error for a micrometer with a measuring range of 0–25 mm, set at zero; the error in microns is plotted against gauge-block length (2.5–25 mm) and must remain within the upper and lower limit-of-error lines (Mahr Gmbh Esslingen)

1. Micrometer with Digital Display


Checking the Micrometer For checking the micrometer, use grade 1 gauge blocks as per DIN
EN ISO 3650 to check compliance with the specified limits of error. The reading of the microme-
ter must be the same as the size of the standard gauge block. The gauge-block combinations should be selected
so as to permit testing of the spindle at points which are integral numbers of the nominal pitch, as well as
at intermediate positions. The following series of blocks is suitable: 2.5, 5.1, 7.7, 10.3, 12.9, 15.0, 17.6,
20.2, 22.8 and 25 mm as shown in Fig. 3.19.
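The block series is chosen so that successive lengths stop the spindle at different fractions of its 0.5-mm pitch; a quick check of this property (our own illustration, not from the text):

```python
pitch_mm = 0.5
blocks_mm = [2.5, 5.1, 7.7, 10.3, 12.9, 15.0, 17.6, 20.2, 22.8, 25.0]

# Fraction of a spindle turn at which each block length is reached
fractions = sorted({round((b / pitch_mm) % 1.0, 2) for b in blocks_mm})
print(fractions)  # [0.0, 0.2, 0.4, 0.6, 0.8] -- integral and intermediate pitch points
```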

2. Micrometer with Dial Comparator


This micrometer is used for rapid measure-
ments of diameters of cylindrical parts such as
shafts, bolts and shanks and for measurements
of thickness and length; and is recommended
for standard precision parts.

3. Micrometers with Sliding Spindle and Measuring Probes, and Micrometer with
Reduced Measuring Faces These micrometers are used for measuring narrow recesses,
grooves, etc. They have chrome-plated steel frames with a spindle and anvil made of hardened steel,
carbide-tipped measuring faces, operating and scale parts with satin-chrome finish, and heat insulators.
Note that sliding-spindle types do not require locking arrangements.

Fig. 3.19 Ceramic gauge-block set (Mahr Gmbh Esslingen)

Fig. 3.20(a) Micrometer with dial comparator


(Mahr Gmbh Esslingen)

Fig. 3.20(b) Micrometers with sliding spindle and measuring probes, reduced measuring faces
(Mahr Gmbh Esslingen)

4. Micrometer with Spherical Anvil This type of micrometer is used for measuring pipe-
wall thicknesses and is available in the standard range of 25–50 mm. It consists of a carbide ball of
diameter 5 ± 0.002 mm.

5. Micrometers with Sliding Spindle and Disc-type Anvils This type of microm-
eter is used for measuring soft materials such as felt, rubber, cardboard, etc., and has a chrome-plated
steel frame with a spindle and anvil made of hardened steel, carbide-tipped measuring faces with
operating and scale parts of satin-chrome finish and heat insulators. This is available in the range of
0–25 mm.

6. Micrometers with Disc-type Anvils This type of micrometer is used for measurement
of tooth spans Wk (from module 0.8) for indirect determination of tooth thickness on spur gears with
straight and helical teeth; to measure shoulders on shafts, undercut dimensions and registers; and for
measuring soft materials such as rubber, cardboard, etc.

Fig. 3.21 Micrometer with spherical anvil


(Mahr Gmbh Esslingen)

Fig. 3.22 Micrometers with sliding spindle and disc-type anvil


(Mahr Gmbh Esslingen)

Fig. 3.23 Micrometer with disc-type anvils


(Mahr Gmbh Esslingen)

Fig. 3.24 Thread micrometer: a–regulation range ±0.5 mm, b–V-anvil, c–tapered anvil (Mahr Gmbh Esslingen)

3.7.3 Thread Micrometers


This type of micrometer is used for measuring pitch, root and outside diameters. It consists of a rugged
steel frame with heat insulators; up to 100 mm, a one-piece design of frame and spindle guide provides
maximum stability. The measuring spindle is hardened throughout and ground, and is provided with
a locking lever and an adjustable anvil. The measuring spindle and anvil holders are equipped with
mounting bores for accommodation of interchangeable anvils.
A flat end surface of the anvil shank rests on a hardened steel ball in the bottom of a mounting bore.
The frame and scales are provided with satin-chrome finish for glare-free readings.
A thread micrometer consists of a point on one side and a V-groove on the other, both matching the
pitch angle of the thread to be checked. One setting is sufficient for two adjacent frame sizes.

a. Interchangeable Anvils for Thread Micrometers For measuring pitch, root and
outside diameters, anvils made up of hardened wear-resistant special steels are used with a cylindrical
mounting shank and retainer ring which ensures locking while permitting rotation in the bore of spin-
dle and anvil.

Fig. 3.25 Setting standards for thread micrometers

b. V and Tapered Anvils for Pitch Diameters The set of thread micrometers consists of
V-anvils and tapered anvils for measuring pitch diameters.

For metric threads (60°), V-anvils covering a wide range of 0.2–9 mm pitches are available. For
Whitworth threads (55°), V-anvils covering a wide pitch range of 40 to 3 tpi are available, while for
American UST threads (60°), V-anvils covering a pitch range of 60 to 3 tpi are available.

c. V and Pointed Anvils for Root Diameters The set of thread micrometers consists of
V-anvils and pointed anvils for measuring root diameters as shown in Fig. 3.26. Each pitch requires
a separate anvil, and pointed anvils can be used for several pitches. For Whitworth threads (55°),
V-anvils covering a wide pitch range of 40 to 3 tpi are available.

Fig. 3.26 V and pointed anvils

d. Flat Anvils for Outside Diameters For this, the anvils used are made of hardened steel
with carbide tips. The same anvils are used for metric (60°), Whitworth (55°) and American
UST (60°) threads.

Fig. 3.27 Flat anvils

e. Ball Anvils and Roller Blades Roller blades are used for gears, and ball anvils for spe-
cial applications. A ball anvil consists of a carbide ball with a cylindrical mounting shank and retainer
ring, for mounting into the mounting bores of thread micrometers. Figure 3.28 shows a ball anvil and a
roller-blade anvil of 3.5-mm shank diameter, 15.5-mm shank length and an accuracy of ±2 µm.

Fig. 3.28 (a) Ball anvil, and (b) Roller-blade anvil (Mahr Gmbh Esslingen)

3.7.4 Accessories for Precision Micrometer


Micrometer Stand This is used for mounting micrometers in such a way that it allows the opera-
tor to use both hands to operate the micrometer and insert the workpiece.
It consists of a rugged, heavy-duty base with hammer-finish enamel and a swivelable rubber lining
to protect the micrometers. Clamping of jaws and links is enabled by one screw.

3.7.5 Inside Micrometer


These are used to measure the larger internal dimensions of through holes, blind holes and registers. It
has a rigid, lightweight tubular design and the measuring spindle is hardened and ground. The carbide-
tipped spherical measuring faces are lapped and one of the measuring faces is adjustable.
70 Metrology and Measurement

Fig. 3.29 Mounting stand for a micrometer (Mahr Gmbh Esslingen)
Fig. 3.30 Inside micrometer (Mahr Gmbh Esslingen)

Inside micrometers have a high accuracy of 4 µm + 10 × 10⁻⁶ L, where L is the length of the combination in mm. Some inside micrometers are provided with cylindrical gauge rods spring-mounted in protective sleeves, which are chrome finished. The procedure for taking the measurement is the same as that for outside micrometers.
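As a quick numerical check, the error-limit formula above can be evaluated for a given combination length. This is only a sketch of the stated formula, not a manufacturer's specification, and the function name is hypothetical:

```python
def inside_micrometer_error_um(combination_length_mm: float) -> float:
    """Error limit in micrometres: 4 um + (10 x 10^-6 x L) mm,
    where L is the length of the combination in mm.
    The length-dependent term is converted from mm to um."""
    return 4.0 + (10e-6 * combination_length_mm) * 1000.0
```

For a 500-mm combination this gives 4 + 5 = 9 µm, and for a 1000-mm combination, 14 µm.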
A self-centering inside micrometer is used to measure through holes, blind holes and registers. In
this type, a ratchet stop is integrated with a coupler and a self-centering measuring head with three
anvils on the side being placed at 120° intervals (Fig. 3.31).
The self-centering inside micrometer is equipped with all digital functions such as On/Off, RESET (zero setting), mm/inch, HOLD (storage of measuring value), DATA (data transmission), PRESET (set buttons can be used to enter any numerical value) and TOL (tolerance display), and is shown in Fig. 3.32.

3.7.6 Depth Micrometers


Depth micrometers are used for measurements of depths, groove spacings and groove widths. The measuring spindle is hardened throughout and ground. The instrument has a hardened, chromium-plated crossbeam and a lapped contact surface with a hardened anvil.

Fig. 3.31 Self-centering inside micrometer Fig. 3.32 Self-centering inside digital micrometer
(Mahr Gmbh Esslingen) (Mahr Gmbh Esslingen)
Linear Metrology 71

Fig. 3.33 Depth micrometer and its applications (Mahr Gmbh Esslingen)

For the application shown in Fig. 3.33,

Dimension A = Thimble Reading
Dimension B = Thimble Reading + Thickness of disc (1.00 mm)
Dimension C = Dimension B − Dimension A
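The three relations above can be sketched in a few lines of code; the thimble readings used below are hypothetical examples:

```python
DISC_THICKNESS_MM = 1.00  # thickness of the disc used when reading dimension B

def dimension_c(thimble_reading_a_mm: float, thimble_reading_b_mm: float) -> float:
    """Dimension C per the relations in the text: A = thimble reading,
    B = thimble reading + disc thickness, C = B - A."""
    dimension_a = thimble_reading_a_mm
    dimension_b = thimble_reading_b_mm + DISC_THICKNESS_MM
    return dimension_b - dimension_a
```

For example, thimble readings of 12.50 mm (for A) and 14.30 mm (for B) give C = 15.30 − 12.50 = 2.80 mm.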

3.8 DIGITAL MEASURING INSTRUMENT FOR EXTERNAL AND INTERNAL DIMENSIONS

It is used for the measurement of external and internal dimensions, external and internal threads, registers, narrow collars, recesses and grooves, outside and inside tapers, external and internal serrations and other related applications. Refer to Fig. 3.34.
Fig. 3.34 Universal measuring instrument for external and internal dimensions (dial indicator, anvils and measuring arms) (Mahr Gmbh Esslingen)

The instrument has a rugged design with a ground, hard-chromium-plated column; the movable arm holder is mounted in a precision ball guide to eliminate play and friction. The stationary arm holder can be moved on the column for rough setting. High sensitivity and accuracy result from the stability of the movable arm holder, together with the constant measuring force provided by a built-in spring. The measuring-force direction is reversible for both outside and inside measurements, and the reversible arms can be located anywhere within the measuring range.

3.9 DIGITAL UNIVERSAL CALIPER

The digital universal caliper (Fig. 3.35) is used for measurement of outside and inside dimensions, registers, narrow collars, external and internal tapers, dovetails, grooves and distances between hole-centres, and for scribing the workpiece. This instrument has an outside measuring range of 0–300 mm and an inside measuring range of 25–325 mm, with a resolution of 0.01 mm within the error limit (DIN 862) of 0.03 mm.
The digital universal caliper provides functions such as On/Off, RESET (zero setting), mm/inch, HOLD (storage of measuring values), DATA (data transmission), PRESET (set buttons can be used to enter any numerical value) and TOL (tolerance display). The maximum measuring speed of the instrument is 1.5 m/s, and a high-contrast 6-mm liquid crystal display is used with interchangeable arms. The arms are reversible for extending the measuring range, and both arms can be moved on the beam, thus well balancing the distribution of weight for small dimensions. The slide and beam are made of hardened steel and the instrument is battery operated. The following table explains the different anvils used for various applications.
At the beginning of the technological era, Carl Mahr, a mechanical engineer from Esslingen, realized that machines were becoming more and more accurate and required measuring tools to ensure the accuracy of their components. So he founded a company that dealt with the production of length-measuring tools. At that time, the individual German states used different units of measure. For this reason, his vernier calipers and scales were manufactured for all sorts of units, such as the Wurttemberger inch, the Rhenish inch, the Viennese inch, and the millimetre that already applied in France. Carl Mahr made a valuable contribution to the metric unit introduced after the foundation of the German Empire in 1871. He supplied metre rules, which were used as standards, first by the Weights and Measures offices in Wurttemberg and shortly thereafter in all German states. Measuring instruments for locomotives and railroad construction were a particular speciality. As the system of railroads in Europe expanded, demand was particularly great. The technology continued to develop and the

Fig. 3.35 Digital universal caliper


(Mahr Gmbh Esslingen)

demands on measuring tools and instruments increased. They were refined and gained accuracy. When the company was founded, the millimetre was accurate enough to use as a unit; but soon, everything had to be measured in tenths and hundredths, and later in thousandths of a millimetre, in order to keep abreast of technological development. Nowadays, even fractions of those units are measured. In addition to the traditional precision measuring tools, the Mahr Group now manufactures high-precision measuring instruments, special automatic measuring units, measuring machines, and gear testers. Many of these systems operate with the support of modern electronic components and computers.

Review Questions
1. Define linear metrology and explain its application areas.
2. List various instruments studied in linear metrology and compare their accuracies.
3. Sketch a vernier caliper and micrometer and explain their working.
4. Discuss the function of ratchet stop in case of a micrometer.
5. Explain the procedure to check micrometers for errors.
6. Sketch different types of anvils used in micrometers along with their applications.
7. Explain the working of a depth micrometer gauge by a neat sketch along with its application.
8. Explain the features of a digital vernier caliper and compare it with a sliding vernier caliper.
9. Explain which instruments you will use for measuring
a. Diameter of a hole of up to 50 mm
b. Diameters of holes greater than 50 mm
c. Diameters of holes less than 5 mm
10. Discuss the precautions to be taken while measuring with a vernier caliper and micrometer to mini-
mize errors.
11. List the length-metrology equipment manufacturers and prepare a brief report on it.
12. What is the accuracy of a vernier caliper and micrometer? Also, explain the difference between 1
MSD and 1 VSD.
13. Draw a diagram which indicates a reading of 4.32 mm on vernier scales by explaining principles of
a vernier caliper.
14. What is the accuracy of a vernier height gauge? Also, discuss with a neat sketch its most important
feature.
15. Draw line diagrams and explain the working of a bench micrometer.
16. Describe the attachments used to measure the internal linear dimensions using linear measuring
instruments.
4
Straightness, Flatness, Squareness, Parallelism, Roundness, and Cylindricity Measurements

“At a particular stage, in order to search for dimensional accuracy it becomes necessary to
measure geometric features…”
Dr L G Navale, Principal, Cusrow Wadia Inst. of Tech., Pune, India

GEOMETRIC FEATURES OF MACHINES

If the components of a machine have to function properly, their geometrical shapes become important factors. This is very important for the functioning of the mating parts. The primary concern of any manufacturing industry is dimensional metrology, and to make accurate measurements of any dimension to a specific length, certain other geometric features must be considered. Geometrical features of a measurement include straightness, flatness, squareness, parallelism, roundness, circularity, cylindricity, co-axiality, etc.

Also, along with the fact that the geometrical shapes and sizes of components are important if they are to function correctly, we know that shape and size also influence the wear on the moving parts, which affects dimensional accuracy. In the case of stationary locating parts, geometrical inaccuracies affect the class of fit required, because they may change the clearance between the mating parts.

Various methods and techniques are available to measure the above-mentioned geometrical features, ranging from the use of a spirit level for measuring straightness to sophisticated instruments like the form tester (Mahr Gmbh Esslingen), Talyrond (Taylor Hobson Ltd.), NPL roundness tester, etc.

4.1 INTRODUCTION

The most important single factor in achieving quality and reliability in the service of any product is dimension control, and the demand for this qualitative aspect of a product is increasing day by day, with emphasis on geometric integrity. Straightness, flatness, squareness, parallelism, roundness and cylindricity are important terms used to specify the quality of a product under consideration. The process of inspection can quantify these qualitative aspects. This chapter discusses different methods of measuring the straightness, flatness, squareness, parallelism, roundness and cylindricity of a part/job, and the instruments used for the same.

4.2 STRAIGHTNESS MEASUREMENT

Perfect straightness is one of the important geometrical parameters of many surfaces on an object or machine part if it is to serve its intended function. For example, in the case of a shaping machine, the tool must move in a straight path to cut (shape) the material correctly, and to achieve this, the surfaces of the guideways must be straight.
It is very easy to define a straight line as the shortest distance between two points, but it is very difficult to define straightness exactly. A ray of light, though affected by environmental conditions (temperature, pressure and humidity of the air), is straight for general purposes. Also, over small areas, a liquid level is considered straight and flat.
In the broader sense, straightness can be defined as a qualitative representation of a surface in terms of the variation/departure of its geometry from a predefined straight line or true mean line. Refer to Fig. 4.1, which shows a highly exaggerated view of a surface under consideration. A line/surface is said to be straight if the deviation of the distance of its points from two planes, perpendicular to each other and parallel to the general direction of the line, remains within a specific tolerance limit.

Fig. 4.1 Exaggerated view of a surface (deviation from the reference line within the tolerance on straightness)

The tolerance on the straightness of a line is defined as the maximum deviation in relation to the reference line joining the two extremities of the line to be checked. The fundamental principle used to measure straightness is Bryan's principle, which states that a straightness-measuring system should be in line with the functional point at which straightness is to be measured. If this is not possible, either the slideways that transfer the measurement must be free of angular motion, or angular-motion data must be used to calculate the consequences of the offset.

4.2.1 Methods of Straightness Measurement


1. Using Spirit Level  Straightness testing can be done using a spirit level. The spirit level takes the form of a bubble tube mounted on a cast-iron base. Inside the glass tube, the spirit level (generally used) has a circular arc of radius R, along which the bubble moves during a change of slope about the centre M. The sensitivity of the spirit level depends only on the radius of curvature of the bubble tube and not on the length of its bearing surface. (A short level may be more sensitive than a long coarse one. However, it is advisable to use spirit levels that are short, so that small deviations are obtained rather than mean values.) The sensitivity E of the spirit level is the movement of the bubble in millimetres that corresponds to a change in slope of 1 mm per 1000 mm:

E = (movement of bubble in mm) / (change in slope of 1 mm/metre)

An auto-collimator can also be used to test straightness. Spirit levels can only be used to measure straightness of horizontal surfaces, while auto-collimators can be used on a surface in any plane. To test a surface for straightness, first draw a straight line on the surface. Then divide the line into a number of sections (equal in length to the spirit-level base in the case of a spirit level, or to the reflector base in the case of an auto-collimator). Generally, the bases of these instruments are fitted with two feet so as to obtain line contact between the feet and the surface, rather than contact over the whole body. In the case of a spirit level, the block is moved along the marked line in steps equal to the pitch distance between the centrelines of the feet. The angular variations in the direction of the block are measured by the sensitive level on it, which ultimately gives the height difference between two points, the least count of the spirit level being known. Figure 4.2 (Plate 4) shows a spirit level (only 63 mm long) that is perfectly useful, despite its small size, when placed on a carpenter's square or a steel rule. The screws do not exert any direct pressure on the rule. Steel balls are set in the level so that (a) the surface of the ruler is not damaged, and (b) the unit does not shift when fixed on the temporary base. The thickness of the square or ruler can be up to 2 mm.
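The stepping procedure described above can be sketched numerically: each level reading gives the slope over one pitch length, the slopes are accumulated into a height profile, and the straightness error is the largest departure from the line joining the two extremities. A minimal sketch, in which the readings and the 100-mm pitch are hypothetical:

```python
def straightness_from_level(readings_mm_per_m, pitch_mm=100.0):
    """Cumulative height profile (in um) from spirit-level slope readings
    taken at successive pitch-length steps, plus the straightness error
    as the maximum deviation from the end-point reference line.
    Note: (mm per m) x (pitch in mm) gives a rise directly in um."""
    heights_um = [0.0]
    for slope in readings_mm_per_m:
        heights_um.append(heights_um[-1] + slope * pitch_mm)
    n = len(heights_um) - 1
    start, end = heights_um[0], heights_um[-1]
    deviations = [h - (start + (end - start) * i / n)
                  for i, h in enumerate(heights_um)]
    return heights_um, max(abs(d) for d in deviations)
```

For readings of +0.02, −0.01 and +0.01 mm/m over 100-mm steps, the profile is 0, 2, 1, 2 µm and the straightness error is about 1.33 µm.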

2. Straight Edges  In conjunction with surface plates and spirit levels, straight edges are used for checking straightness and flatness. A straight edge is a narrow, deep, flat-sectioned measuring instrument. Its length varies from several millimetres to a few metres. Straight edges are made of steel (available up to 2 m) or cast iron (available up to 3 m). As shown in Fig. 4.3, straight edges are heavily ribbed and manufactured in bow shapes. The deep, narrow section offers considerable resistance to bending in the plane of measurement without excessive weight.

Fig. 4.3 Straight edges (length L, with support feet)

Straight edges with wide working edges are used for testing large areas of surfaces with large intermediate gaps or recesses. An estimate of the straightness of an edge or the flatness of a surface is very often made by placing a true straight edge in contact with it and viewing it against a light background. A surface can also be tested by means of straight edges by applying a light coat of Prussian blue to the working edges and then drawing them across the surface under test. The traces of marking compound are rubbed in this way onto the tested surface, and the irregularities of the surface are coated in spots of different densities: high spots are painted more densely and low spots are only partly painted. (This scraping process is repeated until a uniform distribution of spots over the whole surface is obtained.)
IS: 2200 recommends two grades, viz., Grade A and Grade B. Grade A is used for inspection purposes [error permitted (2 + 10L) μ] and Grade B for general workshop purposes [error permitted (5 + 20L) μ]. The acceptable natural deflection due to weight is 10 μ/m. The side faces of straight edges should be parallel and straight. Different types of straight edges are shown in Fig. 4.4.
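The two grade formulas can be evaluated directly. A small sketch, assuming (as is usual for straight edges) that L is the length in metres; the standard itself should be consulted for the exact convention:

```python
def permitted_straightness_error_um(length_m: float, grade: str) -> float:
    """Permitted straightness error per IS: 2200.
    Grade A (inspection): (2 + 10L) um; Grade B (workshop): (5 + 20L) um."""
    if grade.upper() == "A":
        return 2.0 + 10.0 * length_m
    if grade.upper() == "B":
        return 5.0 + 20.0 * length_m
    raise ValueError("grade must be 'A' or 'B'")
```

Under this assumption, a 2-m Grade A straight edge would be permitted 22 µm of error, and a 2-m Grade B one 45 µm.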

3. Laser Measurement System for Straightness Measurement  Straightness measurements highlight any bending component or overall misalignment in the guideways of a machine. This could be the result of wear in the guideways, an accident which may have damaged them in some way, or poor machine foundations that cause a bowing effect in the whole machine. The straightness error has a direct effect on the positioning and contouring accuracy of a machine. The set-up of the components used in this measurement comprises
• Straightness beam-splitter
• Straightness reflector
as shown in Fig. 4.5 (Plate 4).
For the measurement set-up, the straightness reflector is mounted in a fixed position on the table, and the straightness beam-splitter is mounted in the spindle. If straightness measurements are taken on two axes, it is possible to assess parallelism. It is also possible to measure squareness errors between these axes.

4.3 FLATNESS MEASUREMENT

Flatness is simply the minimum distance between two planes that covers all irregularities of the surface under study. In other words, determining flatness means determining the best-fit plane between two reference planes, i.e., one above and one below the plane of the surface under consideration. Flatness, a qualitative term, can be quantified by determining the distance 'd'. Refer to Fig. 4.6.
Flatness is the deviation of the surface from the best-fitting plane, i.e., the macro-surface topography. It can be defined as an absolute total value; for example, a 50-mm diameter disc may be required to be flat to 0.003 mm (i.e., 3 microns). However, it is more frequently specified as a deviation per unit length; i.e., the disc above would be specified to be flat to 0.0006 mm per cm. Flatness can also be defined in terms of wavelengths of light (see measurement of flatness).
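The two ways of specifying flatness are related by a simple division. A sketch reproducing the disc example from the text (the function name is ours):

```python
def flatness_per_cm(total_flatness_mm: float, span_mm: float) -> float:
    """Convert an absolute flatness value over a given span into a
    deviation per centimetre of that span."""
    return total_flatness_mm / (span_mm / 10.0)
```

For the 50-mm disc flat to 0.003 mm, this gives 0.0006 mm per cm, as stated above.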
According to IS: 2063–1962, a surface is deemed to be flat within a range of measurement when
the variation of perpendicular distance of its points from a geometrical plane (to be tested, it should be
exterior to the surface under study) parallel to the general trajectory of the plane to be tested remains

Fig. 4.4 Different types of straight edges (wide-edge straight edges, angle straight edges, and toolmaker's straight edges)



Table 4.1 Specifications as per IS 3512-66 for toolmaker's straight edges

Size (mm)     100  150  200  300  500  1000  1200
Accuracy (μ)    1    1    1    2    2     3     3

Fig. 4.6 Flatness measurement (best-fit plane between two reference planes separated by distance d)

below a given value. The geometrical plane may be represented either by means of a surface plane or by
a family of straight lines obtained by the displacement of a straight edge or spirit level or a light beam.
Flatness testing is possible by comparing the surface under study with an accurate surface. On many roundness systems, it is possible to measure flatness. This is done by rotating the gauge so that the stylus deflection is in a vertical direction, and applies equally to both upper and lower surfaces. All spindle movements and data-collection methods are the same as in roundness mode, so the filtering and harmonic techniques of analysis are the same as those for roundness. Flatness can be analyzed by quantifying deviations from a least-squares reference plane, i.e., a plane for which the areas above and below are equal and are kept to a minimum separation. Flatness is calculated as the distance from the highest peak to the deepest valley normal to the reference plane. The geometrical tolerance of flatness is shown in Fig. 4.7.

Fig. 4.7 Geometrical tolerance of flatness (tolerance of 0,2 applied to the possible surface)

Flatness can also be analyzed by a minimum zone calculation, defined as two parallel planes that
totally enclose the data and are kept to a minimum separation. The flatness error can be defined as the
separation of the two planes.
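The least-squares reference plane described above can be computed directly: fit z = a + bx + cy to the measured points and take the peak-to-valley range of the residuals. A self-contained sketch in pure Python (the point data in any usage are hypothetical, and the vertical residuals approximate the true normal distance for small tilts):

```python
def fit_plane(points):
    """Least-squares plane z = a + b*x + c*y through (x, y, z) points,
    solved via the 3x3 normal equations (pure Python, no libraries)."""
    n = float(len(points))
    sx = sum(x for x, _, _ in points)
    sy = sum(y for _, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points)
    syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points)
    syz = sum(y * z for _, y, z in points)
    m = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]  # normal matrix
    v = [sz, sxz, syz]
    for i in range(3):  # forward elimination (no pivoting; fine for well-spread data)
        for j in range(i + 1, 3):
            f = m[j][i] / m[i][i]
            for k in range(3):
                m[j][k] -= f * m[i][k]
            v[j] -= f * v[i]
    c = v[2] / m[2][2]  # back substitution
    b = (v[1] - m[1][2] * c) / m[1][1]
    a = (v[0] - m[0][1] * b - m[0][2] * c) / m[0][0]
    return a, b, c

def flatness_peak_to_valley(points):
    """Highest peak to deepest valley relative to the least-squares plane."""
    a, b, c = fit_plane(points)
    residuals = [z - (a + b * x + c * y) for x, y, z in points]
    return max(residuals) - min(residuals)
```

Points lying exactly on a plane give a peak-to-valley value of essentially zero; any real surface data give a positive flatness value.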

4.3.1 Methods of Flatness Measurement


1. Beam Comparator Used for Flatness Testing  A flat plane used as a reference in most metrological activities is referred to as a surface plate. An instrument called a beam comparator checks the general degree of flatness. It works on the principle of comparative measurement: the flatness of the surface under consideration is compared with that of a master plate (of the same size or larger). For this comparative testing it is not essential that the reference master plate itself be absolutely true, but its error should be known. Figure 4.8 shows the beam-comparator set-up.

Fig. 4.8 Beam comparator

It consists of two outer legs spaced to accommodate the maximum dimension of the surface under test. It is first placed on the master plate and then on the surface being checked, and the reading is taken from the indicator for each comparison. Any difference between the two readings directly indicates the error in flatness of the plate surface under test over the span considered. An alternative method is to use a precision-level instrument or an auto-collimator.

2. Flatness Measurement by Interferometry  Small variations of less than one or two microns are measured using interference fringes produced between the surface and an optical flat illuminated by monochromatic light. (Monochromatic light is used because the fringes then have more contrast and are more sharply defined.) Like Newton's rings, the fringes may be regarded as contours of equal distance from the surface of the flat; the separation between adjacent fringes of the same colour represents a height difference of half a wavelength of the light used (Fig. 4.9). The optical-flat method has the disadvantage that the surfaces of the flat and the specimen must be in close contact, leading to scratching of both. The interferometer shows the fringes using a non-contact method, in which the sample is separated by several millimetres from the optical reference flat. The fringes are produced by a telescope/eye-safe laser system and are viewed through the telescope eyepiece. They can also be photographed or displayed on a CCTV system. Samples can be measured whilst they remain in position on a precision-polishing jig. The fringes follow the direction of the arrows when the optical flat is pressed into closer contact with the surface of the sample.

Fig. 4.9 Interferometry and fringe patterns (viewing set-up with a diffuse monochromatic light source, optical flat, reference surface and sample; typical resulting fringe patterns for convex, concave and saddle-shaped surfaces)
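Since adjacent fringes of the same colour differ in height by half a wavelength, counting fringes converts directly into a height difference. A sketch; the He-Ne wavelength of 0.6328 µm used as the default is an assumed example, and the wavelength of the actual source should be substituted:

```python
def height_from_fringes(num_fringes: float, wavelength_um: float = 0.6328) -> float:
    """Height difference in um represented by a number of fringes:
    each fringe corresponds to half a wavelength of the light used."""
    return num_fringes * wavelength_um / 2.0
```

For example, four fringes seen with 0.6-µm light correspond to a height difference of 1.2 µm.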
For many interferometry situations, the interferometer mainframe and the optics accessories may all sit on one vibration-isolation table, with the measurement beam oriented horizontally. However, in many other cases, the set-up illustration will show the interferometer in a vertical orientation, either upward- or downward-looking. This popular set-up, shown in Fig. 4.10, is ergonomic, allows test pieces to be changed very quickly, and uses less space.
Flatness-measurement interferometry set-ups such as this are used for the metrology of the surface flatness of plane elements such as mirrors, prisms, and windows up to 150 mm. The test object must be held so that the surface under test can be aligned in two axes of tilt.
The transmission flat, which should be of known flatness and shape, serves to shape the beam to its own shape and provides a reference wavefront, which is compared with the returning light reflected from the test object. Each spatial point in the combined beams is evaluated for the variation between the wavefront of the transmission flat and that of the test object. These differences are expressed as interference between the two beams.

Fig. 4.10 Set-up of interferometer (MiniFiz interferometer with phase shifter and transmission flat on a right-angle base; test object held in a 3-jaw chuck on a tip/tilt mount)

The test object must be held so that the surface under test can be aligned in two axes of tilt. Using the two-axis mount controls (or 'tip/tilt'), adjust the tilt to optimize the number of fringes. When aligned, the interferometer monitor will display black, grey and white bands ('fringes'), as shown in Fig. 4.11, which represents a concave surface.

Fig. 4.11 Fringe pattern

If the instrument, namely the MiniFIZ, includes a zoom capability, zoom in or out from the test piece to make the object as large as possible in the test view without clipping the image. This adjustment optimizes the lateral resolution of the measurement, essentially ensuring the largest number of data sampling points.
ADE Phase Shift also recommends using a phase-shifting MiniFIZ, which, combined with the power of a computer and surface-analysis software, provides greater height detail, point by point, in the data set. Flatness can be estimated by eye if the user is experienced and trained; but precision measurements of the highest order require phase-shifting the interference fringes.

3. Flatness Measurement Using Laser Measurement System  This measurement is performed to check the accuracy of CMM tables and all types of surface plates. It determines whether any significant errors of form exist and, in turn, quantifies them. If these errors are significant for the application of the flat surface, then remedial work, such as further lapping, may be required. The set-up of the specific components used in this measurement comprises
• Base (50 mm)
• Base (100 mm)
• Base (150 mm)
• Flatness mirrors
Angular-measurement optics are also required to attach to the top of the flatness bases. These are available separately and are shown in the angular-measurement section. The angular retro-reflector is mounted on one of three lengths of flatness foot-spacing base. The size of the base used depends on the size of the surface to be tested and the required number of points to be taken. The angular beam-splitter is mounted on the flatness mirror base. (See Fig. 4.12, Plate 5.)
Before making any measurements, a 'map' of the measurement lines should be marked out on the surface. The length of each line should be an integer multiple of the foot-spacing base selected. There are two standard methods of conducting flatness measurements:

a. Moody Method  in which measurement is restricted to eight prescribed lines.

b. Grid Method  in which any number of lines may be taken in two orthogonal directions across the surface.

4. Flatness Measurement Using Electro-mechanical Gauges  Large variations of several microns can be measured using conventional electromechanical gauges, preferably of the non-contact type for polished surfaces. The ULTRA TEC Precision Gauge Micromount UM1245 holds a gauge of this type and can be used to measure samples mounted on a precision jig. Refer to Fig. 4.13.

Fig. 4.13 ULTRA TEC Precision Gauge Micromount UM1245 (electromechanical or pneumatic gauge held in the Micromount above a wafer on a jig-mounting plate with conditioning ring)

4.3.2 Surface Plate


The surface plate is a very important supplementary instrument used in most metrological activities.
Its top plane surface is primarily a true and level plane. For establishing geometrical relationships, the
flat surface of a surface plate is used as a reference datum plane. In other words, it forms the practical
basis of engineering measurement. It acts as a master for checking the characteristics of a work sur-
face, viz., flatness. Metrological instruments and the jobs are kept on it for carrying out measurements.
It is manufactured with different materials, viz., cast iron, granite, or glass block. It is mounted firmly
(with its flat surface facing upwards) on a stand having leveling screws at the bottom of all four legs
of the stand.

Cast-Iron Surface Plates  These are rough machined and then seasoned or aged for a suitable period. Heat treatment (annealing at up to 500°C for about three hours) is then applied to the seasoned plates to relieve internal stresses. The rough-finished surface is scraped suitably until a fairly uniform spotting of the marker is obtained all over the surface, followed by a finishing process such as snowflaking. The accuracy of such a surface plate is ±0.002 to ±0.005 mm for a surface-plate diagonal of 150 mm. CI surface plates are available in two grades:

Grade-I Maximum departure is of 5 microns over an area of 300 mm × 300 mm of plate.

Grade-II Maximum departure is of 20 microns over an area of 300 mm × 300 mm of plate.

Granite Surface Plates  These have advantages over CI surface plates: they have more rigidity for the same depth and do not corrode. They provide a high modulus of rigidity and are moisture-free. Metallic objects can easily slide over their surface, and they are also economical in use. Sizes are available from 400 × 250 × 50 mm to 2000 × 1000 × 250 mm.

Glass Surface Plates They are also commercially available. These are comparatively light in
weight and free from burr and corrosion. Accuracy varies in the range 0.004 to 0.008 mm. They are
available in sizes of 150 × 150 mm to 600 × 800 mm.

4.4 PARALLELISM

Parallelism is one of the important geometrical relationships used to assess the qualitative aspect of a work/job geometry. Two entities (line, plane) are said to be parallel to each other when the perpendicular distance measured between them anywhere on the surfaces under test, and in at least two directions, does not exceed an agreed value over a specified length. Parallelism defines the angle between two surfaces of a sample. It can be specified as a thickness difference per unit length or as an angular deviation; e.g., a thickness difference of 1 micron per cm is equivalent to about 20 seconds of arc, or an angle of 100 microradians.
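The equivalence quoted above (1 µm per cm is about 20 seconds of arc, or 100 microradians) is easy to verify with a small-angle calculation; a sketch:

```python
import math

def parallelism_angle(thickness_diff_um: float, span_cm: float):
    """Angle between two surfaces given a thickness difference over a span,
    returned as (microradians, seconds of arc); small-angle approximation."""
    angle_rad = (thickness_diff_um * 1e-6) / (span_cm * 1e-2)
    return angle_rad * 1e6, math.degrees(angle_rad) * 3600.0
```

For a thickness difference of 1 µm over 1 cm this gives 100 µrad and about 20.6 arc seconds, which the text rounds to 20.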

4.4.1 Methods of Parallelism Measurement


1. Using Dial Indicator and Test Mandrel For checking parallelism between two axes
or between two planes, dial gauges are used in conjunction with test mandrels. This arrangement also
checks parallel motion between two bodies.

i. Parallelism of Two Planes The distance between two planes (surfaces) at any position
should not deviate beyond a minimum value agreed between the manufacturer and the user.

ii. Parallelism of Two Axes (of Two Cylinders)  The maximum deviation between the axes of the two cylinders at any point may be determined by gently rocking the dial indicator in a direction perpendicular to the axis.

Fig. 4.14 Parallelism of two planes (dial indicator on a support with a flat face, checking plane A against reference plane B)

iii. Parallelism of an Axis to a Plane (Reference)  An instrument is moved along the plane for a distance over which parallelism is to be checked. If the readings taken at a number of points do not exceed a limiting value, the axis can be said to be parallel to the plane.

Fig. 4.15 Parallelism between two axes

Fig. 4.16 Parallelism of an axis to a plane

iv. Parallelism of an Axis to the Intersection of Two Planes  The set-up shown in Fig. 4.17 is used to check parallelism between an axis and the intersection of two planes.

v. Parallelism of Two Straight Lines, Each Formed by the Intersection of Two Planes  To check parallelism between two perpendicular planes (specifically where the distance between the two lines is small), the set-up shown in Fig. 4.18 is used. Where the distance is large, V-blocks covered by a straight edge are used and a spirit level makes the check.

2. Using Electro-mechanical Gauges  Large deviations from parallelism can be measured mechanically; i.e., 10 microns per cm is equivalent to 1 milliradian, or approximately 3.5 minutes of arc. The sample is supported on a three-ball plane with the measuring device above one ball, as shown in Fig. 4.19.
Rotation of the sample about the axis at right angles to the three-ball plane allows differences in height to
be measured. The sample surfaces must, of course, be flat to a finer limit than the out-of-parallelism.

3. Using an Autocollimator Smaller values of parallelism can be measured using the autocollimator, which allows differences as small as a few seconds of arc to be measured on polished surfaces. The autocollimator consists of a reflecting telescope with a calibrated cross-wire eyepiece, as shown in Fig. 4.20. Using an accurately parallel reference disc, a three-ball plane under the telescope is set precisely at right angles to the optical axis. The reference disc is then replaced by the sample.

If the surfaces of the sample are not parallel, the reflected cross-wire image from its upper surface will be displaced when viewed in the eyepiece. Samples can be assessed in position on the precision polishing jig and the out-of-parallelism corrected using the micrometer tilt screws on the precision jig.

Fig. 4.17 Parallelism of an axis to the intersection of two planes

Fig. 4.18 Parallelism of two straight lines, each formed by the intersection of two planes

Fig. 4.19 Electromechanical gauge



Fig. 4.20 Autocollimator

4.5 SQUARENESS MEASUREMENT

Angular measurement requires no absolute standard, as a circle can be divided into any number of
equal parts. However, there is a demand for instruments capable of accurate angular measurement
and checking. For example, in a column and knee-type milling machine, the cross slide must move at
exactly 90° to the spindle axis in order to produce an exactly flat surface during the face-milling
operation. In a number of cases, the checking of right angles is of prime importance while measuring
and/or checking geometrical parameters of the work. For example, the sliding member of a height
gauge (carrying the scriber) must be square to the locating surfaces in order to avoid errors in
measurement. Two entities (two lines, two planes, or a line and a plane) are said to be square to each
other if their deviation from a right angle does not exceed an agreed value over a specified length.
The reference square may be a right-angle level, a selected plane or line, or an optical square.
Permissible errors are specified as errors relating to right angles (in ± microns or millimetres) for a
given length. For determining this error, another part of the machine under test is considered as
reference, along with the specification of the direction of error. Squareness measurement
determines the out-of-squareness of two nominally orthogonal axes by comparing their straightness

Fig. 4.21 Some of the representations of squareness

values. Squareness errors could be the result of poor installation, wear in machine guideways, an acci-
dent that may have caused damage, poor machine foundations or a misaligned home position sensor
on gantry machines. Squareness errors can have a significant effect on the positioning accuracy and
contouring ability of a machine. Figure 4.21 gives some of the representations of squareness.

4.5.1 Optical Square


It determines the out-of-squareness of two nominally orthogonal axes by comparing their straightness
slope values, which are referenced via the optical square. When one axis is vertical, straightness optics
is also required, along with the straightness accessory kit (which consists of one optical square and
one bracket, with clamp screws, for the adjustable turning mirror). The optical square provides better
accuracy than other systems due to the premium-grade optics used (±0.5 arc second). The geometry of
the patented straightness retro-reflector gives non-overlapping output and return laser beams and
reduced angular sensitivity, making alignment far easier than with other systems.

4.5.2 Methods of Squareness Measurement


1. Indicator Method This method is used to assess the ability of the grinding process to
grind opposite faces of a block accurately parallel. The testing procedure consists of checking

Fig. 4.22 Optical square

the parallelism of the faces AC and BD (refer Fig. 4.23). Then the squareness of these faces
with the face CD is checked. The instrument consists of a framework with a flat base on
which a knife-edge carrying an indicating unit is mounted; in Fig. 4.23, a dial-gauge indicator
is shown.

Fig. 4.23 Square block (indicator method)

It is arranged on an accurately horizontal surface, i.e., a surface plate of inspection grade, in such a way
that the knife-edge is placed in contact with an approximately vertical surface, and the dial-gauge height
is adjusted to make contact near the top of the side of the block. The knife-edge is pushed and slightly
pressed against a side of the block, say AC, and the reading on the indicator is noted. Now face BD
is brought into contact with the instrument set-up. The difference between the two readings is twice the
error in squareness over the distance between the knife-edge and the dial.
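The arithmetic of this reversal check is simple: turning the block round doubles the effect of any out-of-squareness, so half the difference between the two readings gives the error over the knife-edge-to-dial distance. A minimal sketch (the function name and units are ours):

```python
def squareness_error(reading_ac_mm, reading_bd_mm, gauge_height_mm):
    """Squareness error from the two indicator readings.

    Reversing the block doubles the squareness effect, so the error
    over the gauge height is half the difference between readings.
    Returns (error_mm, error_per_unit_length).
    """
    error_mm = abs(reading_ac_mm - reading_bd_mm) / 2.0
    return error_mm, error_mm / gauge_height_mm

# Example: readings differ by 0.004 mm over a 100 mm gauge height
err, slope = squareness_error(0.010, 0.014, 100.0)
print(round(err, 6))    # 0.002 mm
print(round(slope, 9))  # 2e-05 mm per mm
```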

2. Using NPL Tester Figure 4.24 illustrates the NPL square tester. It consists of a tilting frame
mounted on a knife edge or roller supported at the end of an arm by the micrometer head. The frame

Fig. 4.24 NPL Square tester

carries a vertical straight edge with two parallel sides. This instrument is used to test an engineer's
square. For the test, it is kept on the surface plate. The angle of the straight edge with respect to the
surface plate can be changed using the micrometer: movement of the micrometer drum tilts the entire
frame and, in turn, the measuring face of the straight edge. The square under test is placed against the
surface of the straight edge. To get contact along the total length of the straight edge, the micrometer
height is adjusted. If the same reading is obtained on both sides of the straight edge, the blade is truly
square. If the two readings are not the same, half the difference between the two readings gives the
error in squareness.

3. Checking of Squareness of Axis of Rotation with a Given Plane The squareness
relationship of a rotating axis with respect to a given plane can be determined by the set-up shown in
Fig. 4.25. A dial indicator is mounted on an arm attached to the spindle, and the plunger of the dial
gauge is adjusted parallel to the axis of rotation of the spindle. When the spindle revolves, the free end
of the plunger therefore sweeps out a plane perpendicular to the axis of rotation. Now, the plunger
of the dial gauge is made to touch the plane under inspection and the spindle is revolved slowly.
Readings are noted at various positions. The variation in the readings represents the deviation from
parallelism between the plane under inspection and the plane swept by the free end of the plunger,
and hence the deviation from squareness of the axis of rotation of the spindle with the plane under test.

4. Square Master This is an ideal instrument for standard rooms and machine shops involving
single-axis measurement. Measurements of squareness, linear height, centre distance, diameters, and
steps are possible with this instrument. An optional linear scale for vertical measurement is also
available.

Fig. 4.25 Checking squareness of an axis of rotation with the given plane

Fig. 4.26 Square master

4.5.3 The Squareness Testing of the Machine Tool

The squareness of the machine under test is determined by taking a straightness measurement on each
of the two nominally orthogonal axes of interest using a common optical reference. The optical
reference is typically the straightness reflector, which remains in a fixed location and is neither moved
nor adjusted between the two straightness measurements.

Fig. 4.27 Machine tool

The optical square provides a means of turning the measurement path through a nominal 90° and
allows the two straightness measurements to be taken without disturbing the optical reference. The
general procedure is to set up the optics as shown in Fig. 4.28 (a) with the interferometer as the moving
component. Once a straightness measurement has been taken on this first axis, the interferometer is
repositioned, as shown in Fig. 4.28 (b), to enable the straightness of the second axis to be measured.
When the straightness measurements on both axes have been completed, the software is able to
calculate the squareness value. Figure 4.28 illustrates

Fig. 4.28 Operation principles: (a) first-axis measurement; (b) second-axis measurement

the checking of squareness between two horizontal axes. However, it is also possible to check the
squareness between the horizontal and vertical axis with the addition of a special retroreflector and
turning mirror.
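The squareness calculation from the two straightness measurements can be sketched as follows: fit a least-squares line to each axis's straightness data and combine the two slopes with the nominal 90° turn of the optical square. This is an illustrative reconstruction, not the actual laser-software algorithm, and the sign convention is an assumption:

```python
import numpy as np

def squareness_from_straightness(pos1, dev1, pos2, dev2):
    """Out-of-squareness of two nominally orthogonal axes.

    pos*: positions along each axis; dev*: straightness deviations
    (same length units). Each slope is the least-squares straight-line
    fit; the out-of-squareness is the combination of the two slopes,
    returned in microradians (sign convention is illustrative).
    """
    slope1 = np.polyfit(pos1, dev1, 1)[0]
    slope2 = np.polyfit(pos2, dev2, 1)[0]
    return (slope1 + slope2) * 1e6  # microradians

x = np.linspace(0, 500, 11)   # positions in mm
axis1 = 20e-6 * x             # straightness data tilted by 20 urad
axis2 = 30e-6 * x             # straightness data tilted by 30 urad
sq = squareness_from_straightness(x, axis1, x, axis2)
print(round(sq, 2))  # 50.0
```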
When very high accuracy is required in measuring squareness, even higher than that obtainable with a
laser transmitter, a method can be used in which the laser transmitter is indexed through 180°. The
method is suitable for measuring squareness relative to two points on a reference plane, or for
measuring plumb, using the vials on the laser transmitter as reference.

4.6 ROUNDNESS MEASUREMENT

Measuring differences in diameter is not sufficient to measure roundness. For example, a 50-pence
coin, whose edge is a curve of constant width, has a constant diameter when measured across its
centre, but is clearly not round. To measure any component for roundness, we require some form of
datum.
In the case of a cylinder, cone or sphere, roundness is a condition of a surface of revolution where all
points of the surface intersected by a plane perpendicular to the common axis (in the case of a
cylinder) or passing through a common centre (in the case of a sphere) are equidistant from the axis
(or centre).

Fig. 4.29 Roundness measurement

Roundness is usually assessed by rotational techniques, measuring radial deviations from a rotating
datum axis; this axis remains fixed and becomes the main reference for all measurements. The output
from the gauge can be represented as a polar profile or graph, and although this gives a convenient
pictorial representation, deriving actual numbers from it can be time-consuming and subjective. We,
therefore, need some

means of processing the information to give us accurate and repeatable answers. As we are trying to
assess departures from true circularity and require a reference from which to measure, it makes sense
to fit a circle to the profile and relate all our calculations to it. This reference is called a reference
circle. The four types of reference circles used in the measurement of roundness are as follows:

4.6.1 Types of Reference Circles

1. Least Squares Reference Circle (LSRC) The least-squares reference circle is a circle
for which the sum of the areas inside the circle equals the sum of the areas outside it, these areas
being kept to a minimum.

Fig. 4.30 LSRC

The out-of-roundness value is the difference between the maximum and minimum radial departures
from the reference circle centre (in Fig. 4.30, it is P + V, the peak and valley departures). This is a very
convenient reference circle to derive, as it is mathematically precise.
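For nearly round profiles, the least-squares centre has a well-known closed form: with radial data r(θ) sampled at N equal angles, the centre offsets are approximately a = (2/N)·Σ r·cos θ and b = (2/N)·Σ r·sin θ, and the radius is the mean of r. A minimal sketch, with our own function name, of deriving the LSRC and the out-of-roundness (P + V) from it:

```python
import math

def lsrc(radii):
    """Least-squares reference circle from equally spaced radial data.

    Uses the standard linearized (limacon) approximation, valid when
    deviations are small compared with the radius. Returns the centre
    offsets (a, b), mean radius R, and the out-of-roundness P + V.
    """
    n = len(radii)
    thetas = [2 * math.pi * i / n for i in range(n)]
    a = 2.0 / n * sum(r * math.cos(t) for r, t in zip(radii, thetas))
    b = 2.0 / n * sum(r * math.sin(t) for r, t in zip(radii, thetas))
    big_r = sum(radii) / n
    # Radial departures from the fitted reference circle
    devs = [r - (big_r + a * math.cos(t) + b * math.sin(t))
            for r, t in zip(radii, thetas)]
    return a, b, big_r, max(devs) - min(devs)

# A 3-lobed profile: 0.002 mm lobing on a 25 mm nominal radius
data = [25 + 0.002 * math.cos(3 * 2 * math.pi * i / 360)
        for i in range(360)]
a, b, R, ront = lsrc(data)
print(round(ront, 4))  # 0.004
```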

2. Minimum Zone Circle (MZC) The MZC is defined as two concentric circles positioned to
just enclose the measured profile such that their radial separation is a minimum. The roundness value
is then given as this radial separation (RONt).

Fig. 4.31 MZC

3. Minimum Circumscribed Circle (MCC) This is also known as the ring-gauge reference
circle and is the smallest circle that totally encloses the profile. Out-of-roundness is quantified as the
largest deviation from this circle (RONt).

Fig. 4.32 MCC

4. Maximum Inscribed Circle (MIC) The maximum inscribed circle, sometimes referred to
as the plug-gauge circle, is the largest circle that is totally enclosed by the profile. Errors are quantified
as the maximum radial deviation (RONt) away from this reference circle.
There are two common ways of measuring roundness. One method involves rotating the part while
keeping the measuring transducer fixed; the other involves keeping the component fixed while rotating
the measuring transducer.

Fig. 4.33 MIC

a. Component Rotation Figure 4.34 (a) shows a typical rotating-component system. Here,
the component is rotated on a highly accurate spindle that provides the reference for the circular
datum. In Fig. 4.34 (b), the axis of the component is aligned with the axis of the spindle using a
centering and leveling table. A transducer is then used to measure radial variations of the component
with respect to the spindle axis.

Fig. 4.34 Component rotation

The output of the gauge or transducer consists of three added components:

i. Instrument error
ii. Component set-up error
iii. Component form error

By using high-precision mechanics and stable electronics, instrument error is made too small to be
significant. Component set-up error is minimized firstly by accurate centering and leveling, and then
the residual error is removed by electronic or software means. Form error is the area of interest and,
once the first two types of error are excluded, it can be highly magnified and used to derive a measure
of the out-of-roundness.

b. Rotating Stylus An alternative method is to rotate the stylus while keeping the component
stationary. This is usually performed on small, high-precision components but is also useful for
measuring large, non-circular components; for example, measurement of a cylinder bore using this
method does not require rotation of the complete engine block. This type of measuring system tends
to be more accurate due to the constant loading on the spindle, but is limited by the reach of the
stylus and spindle.

4.6.2 Precautions while Measuring Roundness of Surfaces

a. Securing the Workpiece to the Measuring Instrument Table One of the most
crucial factors when making any sort of measurement is the stability of the component during
measurement. On roundness systems, there are various ways to clamp the component with some form
of chuck or vice. However, all of these clamping methods require stability on their mounting face.

For example, if a person were to sit on a stool having four legs, there is a strong chance that the stool
would rock, whereas a stool with three points of support will not rock. Therefore, wherever possible,
parts for measurement should be held on a three-point location.

Fig. 4.35 Rotating stylus
Many roundness tables have ‘V’ grooves
cut into the surface. These grooves can serve a number of purposes. A fixture can be designed that has
three feet and in this case, the component is placed on a fixture that has three spherical balls on its base.
When placed on the table, the three spherical balls will give a three-point location to prevent rocking
but will also rotate in the grooves to prevent lateral movement. This type of fixture is usually suitable
for large or medium components where stylus force does not affect stability.
For smaller components and for components that do not have suitable bases, some form of
clamping may be required. Clamping should be done with minimum force to prevent distortion of the

Fig. 4.36 Roundness table



component. The clamp should also have a three-point location wherever possible. For components
that are very small and fragile, care must be taken when clamping, and it is often necessary to consider
a reduction in stylus force to prevent measurement errors.

b. Stylus must be Central to the Workpiece The centre of the stylus tip and the centre
of the component ideally should be in line with the measuring direction of the stylus. Any errors in the
alignment of the component centre to the stylus tip centre will cause cosine errors.
If we look at the drawing in Fig. 4.37 (a), we can see that the stylus tip is in line with the component
centre. In Fig. 4.37 (b) there is a cresting error causing a cosine error. Cosine errors cause a number
of problems: the stylus is presumed to be measuring at the 0° position on the table, whereas the actual
angular position is a few degrees off centre. This causes problems when calculating the eccentricity
position and the amplitude of the deviations of the profile. For large components, small cresting errors
have a small effect. As Fig. 4.37 (c) shows, for components with smaller diameters the cosine errors
are significantly larger; therefore, for components with small diameters, good cresting is extremely
critical.
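The effect of cresting can be quantified: if the stylus is offset a distance d from the true centre line of a component of radius R, the measured radius is shortened by approximately R·(1 − cos(asin(d/R))). A hedged sketch (our own construction) showing why small diameters are more sensitive:

```python
import math

def cresting_error(radius_mm, offset_mm):
    """Radial error caused by a stylus 'cresting' offset from the
    component centre line.

    The stylus contacts the surface at an angle asin(d/R) off the
    intended measuring direction, so the measured radius is short
    by R * (1 - cos(angle)), equivalently R - sqrt(R^2 - d^2).
    """
    angle = math.asin(offset_mm / radius_mm)
    return radius_mm * (1 - math.cos(angle))

# The same 0.1 mm cresting offset on a large and a small component
print(cresting_error(50.0, 0.1))  # large part: very small error
print(cresting_error(2.0, 0.1))   # small part: much larger error
```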

Fig. 4.37 Position of stylus tip

c. Maintaining the Required Stylus Force This depends on the component. Wherever
possible, the lowest stylus force should be used without causing any detriment to the measurement. Too
light a stylus force may cause the stylus tip to bounce and leave the surface, especially on surfaces with
large form errors or surfaces with holes or other interruptions. Too large a stylus force may damage the
component, or cause a movement during measurement or cause a ringing phenomenon, which appears
as a high-frequency noise on the measurement data.

d. Need to Centre and Level the Component Centering and leveling is critical to the
measurement process. Any large eccentricities will affect results. However, centering and leveling is
not always easy or practical especially when trying to centre and level with manual devices. Although
mathematics can be used to remove some of the effects of eccentricity, it is always best to centre and
level as accurately as possible. In general, the more off-centre the component, the greater the residual
eccentricity error even after mathematical correction.

e. Cleaning the Workpiece Most roundness systems measure extremely small deviations and
any dirt on the workpiece will show as deviations and affect the results. In all cases, it is important to
clean the workpiece before any measurement is completed. Below is an example (refer Fig. 4.38) of a
component that has been measured without being cleaned.

Fig. 4.38 Uncleaned component

There are various methods of cleaning—some are not as effective as others. Ultrasonic cleaning is
good except that the component will be warm and needs a normalizing time. Even then finger marks
must still be removed using very fine tissue paper, such as lens tissue, which is lint free.

f. Preventing Stylus Damage A stylus stop attachment can be used. This usually consists
of some form of mechanical device that prevents the stylus from reaching its full range of movement
in the negative direction (i.e., down a hole). However, this is purely a mechanical device that prevents
damage to the stylus. Some deviation will still show on the results where the stylus falls in and out of
the hole and is ‘resting’ on the stop.

g. Removing the Residual Effects Caused by the Stylus Dropping Down
Holes Some means of software manipulation is required here. There are many methods of using
software to remove these residual errors. Some of these methods are automatic and capture the
unwanted data by detection of the holes. Another possible method is by manual means where the user
selects the area for analysis. There is a limit to how much data can be removed for analysis. If there
is a large amount of data removed, then calculations for reference circles and their centres become
unstable. For example, if a measurement were made for roundness on a part and only 10° of data was
used for analysis then the calculation for the centre of that analysis would be unstable. If the data were
spread out over 360° but was still only 10° of data when added together, this would be more stable.

h. Requirement of Long Stylus On some types of components such as deep internal bores,
it may be necessary to use a longer stylus in order to reach the measurement area. Using long styli, fac-
tors such as stylus force may need adjustment to allow for the extra leverage and weight of the stylus.

Increasing the stylus length will also decrease the resolution of the results. This is not always a problem
but may be on higher precision surfaces. On some systems, it is possible to increase the reach of the
gauge connected to the stylus rather than increase the length of the stylus. These are sometimes known
as gauge extension tubes.

i. Assessing Harmonics A harmonic is a repeated undulation in 360°. So in Fig. 4.39, a third
harmonic has three undulations of equal wavelength in 360°. Any surface can be broken down into its
individual harmonic elements. Below is an example of a third harmonic that has been caused by over-
tightening of the machine tool chuck. UPR (Undulations Per Revolution) is a way to assess the same,
for example, a part that has a three-lobed shape consists of three undulations in one revolution.

Fig. 4.39 Undulations of a third harmonic

The ability to analyze harmonics is very useful in order to predict a component’s function or to con-
trol the process by which the component is manufactured. If there is data missing, it becomes difficult
to determine the harmonic content of the surface. However, there are methods of calculating harmon-
ics on interrupted surfaces but they are not widely used.
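The harmonic (UPR) content of a measured profile can be sketched with a discrete Fourier transform; the amplitude at each UPR shows which lobing is present (a three-lobed part shows up at 3 UPR). This is an illustrative sketch with our own scaling convention:

```python
import numpy as np

def upr_amplitudes(deviations, max_upr=15):
    """Amplitude of each undulation-per-revolution (UPR) component.

    deviations: radial deviations sampled at equal angles over one
    full revolution. Returns amplitudes for 1..max_upr UPR, scaled so
    that a pure cosine of amplitude A reports A.
    """
    n = len(deviations)
    spectrum = np.fft.rfft(deviations)
    return {k: 2 * abs(spectrum[k]) / n for k in range(1, max_upr + 1)}

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
profile = 0.003 * np.cos(3 * theta)  # 3-lobed, 0.003 mm lobing
amps = upr_amplitudes(profile)
print(round(amps[3], 4))  # 0.003
```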

4.6.3 Roundness Measurement on Interrupted Surfaces


It is possible to measure roundness on interrupted surfaces. There are two problems to overcome when
measuring roundness on a surface that has holes or gaps in the surface:
i. Firstly, the stylus will fall down the holes if they are quite large compared to the stylus tip radius.
This will cause damage to the stylus and will be detrimental to the results.
ii. Secondly, even if there is no damage to the stylus, the results will show deviations where the
stylus drops into the hole.

4.6.4 NPL Roundness Measuring Instrument


NPL provides a high-accuracy service for measuring the roundness of spheres and hemispheres
up to 100 mm in diameter. This service, which is primarily intended for the measurement of glass
hemispheres used to calibrate roundness-measuring instruments, is based on a Talyrond-73 instrument
that was specially developed in collaboration between NPL and Taylor Hobson, shown in Fig. 4.40. The
key features of the new instrument’s design are a spindle with a highly reproducible rotation and a novel
multi-step error-separation technique, which is used to separate the spindle error from the component
roundness error. These features make it possible to measure departures from roundness with an uncer-
tainty of ±0.000 005 mm at a confidence level of 95%.

Fig. 4.40 Roundness-measuring instrument

The fundamental basis of the instrument’s design is to use a spindle with a highly reproducible
rotation and then use a novel error-separation technique to reduce significantly the errors associated
with the lack of perfection of the spindle geometry. The instrument used to make the measurements is
capable of collecting 2000 data points per revolution.
In operation, the component to be measured is placed on a rotary stage and data is collected at
several orientations of the stage. The Fourier-series representation of each measured trace is deter-
mined. A mathematical model, which relates the Fourier representations of the component errors and
the spindle errors to those of the traces, is then solved. The resulting Fourier representation of the
component error is used to determine the roundness of the component and to provide values of the
component error at points around the circumference.
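The multi-step error-separation idea can be sketched as follows: the part is measured at N equally spaced orientations on the rotary stage; after rotating each trace back into the part's frame and averaging, the reproducible spindle error largely cancels, leaving the component error (except for harmonics at multiples of N). This is illustrative only — the actual NPL model solves for Fourier representations, and the function below is our own simplification:

```python
import numpy as np

def multistep_part_error(traces):
    """Separate component error from spindle error by averaging.

    traces[k] is the profile measured with the part rotated by
    k * 360/N degrees; each trace = part(shifted) + spindle.
    Rolling each trace back aligns the part signal, so averaging
    suppresses spindle harmonics that are not multiples of N.
    """
    n_steps = len(traces)
    samples = len(traces[0])
    step = samples // n_steps
    aligned = [np.roll(t, -k * step) for k, t in enumerate(traces)]
    return np.mean(aligned, axis=0)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
part = 0.5 * np.cos(3 * theta)       # true component error
spindle = 0.2 * np.sin(2 * theta)    # reproducible spindle error
traces = [np.roll(part, k * 45) + spindle for k in range(8)]
recovered = multistep_part_error(traces)
print(np.max(np.abs(recovered - part)) < 1e-12)  # True
```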

4.6.5 Case Study—Piston Diameter Tester


Description The basic instrument consists of a base plate, which carries a serrated, hardened and
ground reference table, and a vertical column, which holds one 'C-Frame' assembly. The additional

C-Frames are extra. The C-Frames are made to float on leaf springs and are self-aligning. They carry a screwed
ball point on one side and a dial gauge on the other.
The distance between the ball point and the contact
point of the dial gauge can be adjusted with a master
shown in Fig. 4.41. The serrated reference table carries a pair of hardened, ground and lapped
stoppers placed 90° apart. Before inspecting the piston for diameters at various heights, the instrument
should be set with a piston-master. The component piston is pushed towards the stoppers on the
reference table and rests against them. The C-Frames align themselves and the dials on the frames
indicate the diameter at particular heights. The nut behind the vertical column is released and suitable
slip gauges are inserted vertically between them to adjust the heights of the C-Frames.

Fig. 4.41 Piston diameter tester (Courtesy, Kudale Instruments Pvt Ltd., Pune)

Applications This instrument finds its utility at the piston customer's end for grading the pistons.
This can be done by checking the diameters at various heights. This instrument may be used in
conjunction with dial gauges or pneumatic gauges.

4.7 CYLINDRICITY

Cylindricity values are becoming more important in the measurement of components, particularly as
an aid to improving the efficiency and cost-effectiveness of systems; for example, in automotive fuel
injection, the need for greater economy demands greater precision in components. To describe
cylindricity, we require a minimum of two roundness planes, which form a cylinder. However, in the
majority of cases this would not be enough information, and it is often necessary to increase the
number of measured planes. The number of planes depends on the component and the application.
There are many ways of defining the cylindricity of a component.

The best way to describe the cylindricity of a component is by the minimum-zone method of analysis.
This can be described as the radial separation of two coaxial cylinders fitted to the total measured
surface under test such that their radial difference is a minimum. For the purposes of inspection, a
tolerance may be applied to the cylindricity analysis; in the above case it may be written as: the surface
of the component is required to lie between two coaxial cylindrical surfaces having a radial separation of the specified
tolerance. Refer Fig. 4.43.

4.7.1 Reference Cylinder


A reference cylinder is a true cylinder, which is fitted to the analyzed data in order to measure the devia-
tions from it. There are a number of ways of assessing out-of-roundness using a number of types of

Fig. 4.42 Representation of cylindricity

Fig. 4.43 Cylindricity

reference circles. All reference circles are used to establish the centre of the component. Roundness
is then established as the radial deviations from the component centre. There are four internationally
recognized reference cylinders. These are the Least Squares, Minimum Zone, Maximum Inscribed and
Minimum Circumscribed cylinders.

a. Least Squares The least squares cylinder is constructed from the average radial departure of
all the measured data from the least-squares axis.

b. Minimum Zone The minimum-zone cylinder can be described as the total separation of two
concentric cylinders, which totally enclose the data and are kept to a minimum separation.

c. Minimum Circumscribed The minimum circumscribed cylinder is a cylinder of minimum
radius that totally encloses the data.

d. Maximum Inscribed The maximum inscribed cylinder is the largest cylinder that is enclosed
by the data.

Fig. 4.44 LS Fig. 4.45 MZ Fig. 4.46 MC Fig. 4.47 MI

Fig. 4.48 Least-squares cylinder

4.7.2 Cylinder Parallelism

Cylinder parallelism is a measurement of the taper of the cylinder and is given as the parallelism of
two least-squares lines constructed through the vertical sides of the profile, usually taken as the
maximum value.

The following are examples of 'runout' (Fig. 4.49), which may arise from machining the part on a
machine tool (for example, a lathe or drilling machine) whose spindle runs in poor bearings, or from
deflection of the workpiece as the tool is brought to bear on it. A shaft ground between centres may
show runout due to poor alignment of the centres or deflection of the shaft.
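The taper described above can be estimated numerically: fit a straight line to radius versus height; the slope gives the tangent of the taper half-angle. This is our own illustrative construction, not a standards formula:

```python
import numpy as np

def cylinder_taper(heights_mm, radii_mm):
    """Estimate taper from least-squares radii measured at several
    heights along a cylinder.

    Fits radius = slope * height + c by least squares; slope is the
    change of radius per unit length, i.e., the tangent of the taper
    half-angle. Returns (slope, half_angle_microradians).
    """
    slope = np.polyfit(heights_mm, radii_mm, 1)[0]
    return slope, np.arctan(slope) * 1e6

z = [0.0, 20.0, 40.0, 60.0]
r = [10.000, 10.001, 10.002, 10.003]  # radius grows 1 um per 20 mm
slope, half_angle = cylinder_taper(z, r)
print(round(half_angle, 1))  # 50.0 microradians
```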

Fig. 4.49 Examples of runout

4.8 COAXIALITY

Coaxiality is the relationship of one axis to another. There are two recognized methods of calculating
coaxiality.

i. ISO defines coaxiality as the diameter of a cylinder that is coaxial with the datum axis and
will just enclose the axis of the cylinder referred to for coaxiality evaluation.
ii. The DIN standard defines coaxiality as the diameter of a cylinder of defined length, with its
axis coaxial to the datum axis, that will totally enclose the centroids of the planes forming the
axis of the cylinder under evaluation.

Fig. 4.50 Coaxiality (ISO and DIN definitions)

4.9 ECCENTRICITY AND CONCENTRICITY

Eccentricity is the term used to describe the position of the centre of a profile relative to some datum
point. It is a vector quantity in that it has magnitude and direction. The magnitude of the eccentricity is
expressed simply as the distance between the datum point and the profile centre; the direction is expressed
as an angle from the datum point to the profile centre. Concentricity is twice the eccentricity and
is the diameter of a circle traced by the component centre orbiting about the datum axis.
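The vector relationship can be sketched directly; the datum and centre coordinates below are hypothetical:

```python
import math

# Eccentricity as a vector from the datum point to the profile centre;
# concentricity is twice its magnitude, as stated in the text.

def eccentricity(datum, centre):
    dx, dy = centre[0] - datum[0], centre[1] - datum[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Hypothetical coordinates (mm)
mag, ang = eccentricity(datum=(0.0, 0.0), centre=(0.030, 0.040))
print(round(mag, 3), round(ang, 1), round(2 * mag, 3))  # 0.05 53.1 0.1
```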

4.10 INDUSTRIAL APPLICATIONS

1. Form-tester On-site measuring instruments for assessing form and location deviations as per
DIN ISO 1101 (e.g., roundness errors) are indispensable today for rapidly determining and eliminating
manufacturing errors, thereby reducing rework and rejects. Mahr meets this challenge with its
easy-to-operate and flexible MMQ10 form-measuring station shown in Fig. 4.51, a high-performance,
high-quality measuring station offered at a comparatively low price.

Fig. 4.51 Form-tester


(Courtesy: Mahr GmbH, Esslingen)
[Features • Compact with an integrated evaluation computer and printer • Mobile, low weight, and small dimensions • Rapid workpiece alignment with computer support and clever mechanics • Universal and reliable • Suited for shop-floor application; compressed air supply not required • High loading capacity]

• Universal form and positional tolerance check system for roundness measurements on the shop-
floor and in the measuring room
• Evaluation either with the FORM-PC or with an integrated evaluation PC
• FORM-PC as powerful evaluation system running under Windows 98 or Windows NT
• Comfortable software for the evaluation of deviation of form and position as per DIN ISO 1101:
roundness, roundness within segments, radial and axial runout, concentricity, coaxiality, flatness,
straightness, parallelism, and perpendicularity.

2. Form-tester (Piston Profile Tester)

Description The instrument shown in Fig. 4.52 ( Plate 6) consists of long slotted CI base, which
carries a hardened vertical column. The vertical column carries a lead screw and a nut, which in turn
traverses up and down with a floating C-Frame, as the hand wheel is rotated by hand. The floating
C-Frame carries a fixed ball point on one side and a sliding ball point with a dial gauge on the other.
The diameter of the piston is checked in between these two points. The taper over the entire length
of a piston is checked as the C-Frame traverses up and down. The piston is mounted on the register
diameter by pushing it onto a hardened, ground and lapped seat, which carries a circular disc, graduated
in angles, that rotates around a vertical shaft. The piston is rotated by hand and the ovality is noted on
the dial gauge in the C-Frame. The angular difference between the major and minor axes is noted on the
disc against a cursor line.

Application This instrument is useful for checking the piston ovality (d − m), the piston major
diameter (d), the piston minor diameter (m), the taper over the total length of the piston (l), and the
angular difference between the major and minor axes.

3. Roundcyl 500 It has been designed to meet the manufacturer’s requirements for speed, high
accuracy, simple operation and at a price that can be justified. It is a rugged instrument which can be
used in the laboratory as well as on the shop floor.
It has a measuring capacity that can accommodate the majority of the components needed to be
analyzed for geometric form. It meets the stringent demands of quality assurance in the global environ-
ment. Using this instrument, cylindricity can be measured by collecting the data of 3 to 10 levels and
then plotting the graph. Roundcyl-500 uses an IBM-compatible computer with standard peripherals.
The user has an option to use his own hardware, provided it meets the specification criteria. Communi-
cation between the computer and the operator is via a simple drop-down menu. Figure 4.53(a) (Plate 6)
shows a view of the Roundcyl-500, and Fig. 4.53(b) (Plate 6) shows its user-friendly menu bar.
Some of the measured profiles are also shown. Once a component is measured, results can further be
analyzed by changing filters or magnification, or by eliminating the centering correction. Results can be stored on a
PC hard disk for reanalyzing at a later date. Different measuring sequences can be saved on a hard disk
enabling the Roundcyl-500 to be used in semi-automatic mode.
Its measuring capacity is a maximum diameter of 500 mm and a maximum workpiece height of 25 mm. Its
vertical and horizontal travels are 320 mm and 250 mm respectively. For measurement, the instrument uses

a lever-type gauge head with minimized-friction probe movement, having a measurement range of ±300 μm
with a standard stylus. For measurement, contact with the workpiece surface is made with a 2-mm diameter
steel ball with gauging pressure of approximately 0.16 N. The rotating spindle has an air bearing and is
mounted directly on a granite plate. With the use of this instrument, the geometrical parameters measured
are cylindricity, roundness, concentricity, coaxiality, circularity, flatness, squareness, and parallelism.
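The roundness evaluation such an instrument performs can be sketched with the standard least-squares-circle (LSC) approximation, valid for small deviations; the readings below are hypothetical and the code is illustrative, not the instrument's own algorithm:

```python
import math

# Roundness error against the least-squares circle (LSC) from equally
# spaced polar readings. For small deviations, the LSC centre is
# a = 2*mean(r*cos t), b = 2*mean(r*sin t), with mean radius R = mean(r).

def lsc_roundness(r):
    n = len(r)
    t = [2 * math.pi * i / n for i in range(n)]
    a = 2 * sum(ri * math.cos(ti) for ri, ti in zip(r, t)) / n
    b = 2 * sum(ri * math.sin(ti) for ri, ti in zip(r, t)) / n
    R = sum(r) / n
    dev = [ri - (R + a * math.cos(ti) + b * math.sin(ti))
           for ri, ti in zip(r, t)]
    return max(dev) - min(dev)

# Hypothetical radial readings (mm) at 8 equally spaced angles
r = [25.0020, 25.0015, 25.0000, 24.9990, 24.9985, 24.9995, 25.0005, 25.0018]
print(round(lsc_roundness(r), 4))
```

A useful sanity check of the approximation: a perfectly round but eccentric part (pure once-per-revolution variation) gives a roundness error of essentially zero.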

4. Squareness/Transfer Gauge (Plate Mate) It is a versatile, easy-to-use squareness/


transfer gauge able to handle a wide range of jobs. The dual upright posts ensure centreline alignment
as you change the height of the indicator. The indicator holder glides easily up and down the dual posts
needing only one hand. The fine-adjustment knob located on the base of the tool allows precise control
for use with 0.0001 indicators. The radial foot used for squareness checking is adjustable vertically for
checking parts with heels or for avoiding other part features such as holes and radii.



Fig. 4.54 Squareness/Transfer gauge

It accommodates vertical and horizontal style indicators with either a 5/32 diameter or a dovetail
adaptation. The indicator holder can be interchanged from side to side on the posts for transfer uses, and
can be inverted on the posts for taller parts. It is manufactured from hardened tool steel for durability.

Review Questions

1. List the methods of straightness measurement. Discuss straightness measurement using a spirit
level.
2. Explain the concept of a reference plane.
3. Explain the following terms:
(a) Straightness (b) Flatness (c) Squareness (d) Roundness
4. State the importance of geometric tolerance of manufacturing components.
5. Describe the following methods of checking parallelism:
a. Parallelism of two planes
b. Parallelism of two axes to a plane
c. Parallelism of two axes

6. Describe the various methods for checking squareness of machined surfaces.


7. Describe the optical test for squareness with the help of a neat sketch.
8. Define roundness and state the causes of out-of-roundness. Discuss the commonly used methods
of measurement of roundness.
9. Describe the roundness-checking method by using roundness measuring machine.
10. Define the terms: (a) Eccentricity ( b) Concentricity (c) Coaxiality (d) Reference cylinder
11. What are the precautions to be taken while measuring roundness of surfaces?
12. Explain the importance of reference planes in engineering measurement.
13. Explain the procedure of use of straight edges to check straightness.
14. Using a linear reading adjustable spirit level, describe the suitable method of determining the paral-
lelism of two bearing surfaces as shown in the following figure:

1m

2m

15. Review the methods used in testing of flatness.


16. Identify and explain the method used to assess the ability of a grinding process to accurately grind
parallel opposite faces of a block.
17. Explain the laser measurement system used for checking/measuring geometrical features.
5 Introduction to Metrology of Machine Tools

‘Machine tool metrology is necessary to ensure that a machine tool is capable of producing
products with desired accuracy and precision…’
D Y Kulkarni, Inteltek Co. Ltd., Pune

TESTS FOR MACHINE TOOLS

The modern industry uses a large number of machine tools for producing various components with
varying degrees of precision. The quality of manufactured products, apart from depending on the skills
of operators, also depends largely upon the accuracy of the machine tools being used to produce them.
The quality of a machine tool depends on the rigidity and stiffness of the machine, the fitment of the
parts and their alignment with each other, along with the accuracy and quality of supporting devices.
The stiffness and rigidity values are finalized by the designer during prototype testing and need not be
reconsidered during the commissioning of a machine tool at the user's place. In addition to manufacturing
accuracy, the working accuracy of a machine tool is influenced by the geometry of cutting tools, the
material properties of the cutting tools and the workpiece, parameters of cutting conditions like speed,
feed and depth of cut, work-holding and clamping devices, the skill of the operator, the working
environment and like parameters.

Dr G Schlesinger was a pioneer in designing machine tool alignment tests. He was a great believer
in the importance of international standardization. Before the Second World War, he was an active
member of the committee ISO/TC39. Schlesinger's classic tests were intended to cover those portions
of manually operated machines where a skilled operator measures the workpiece during the operation
and is able to eliminate function effects such as deformations due to weights, clamping forces or thermal
influences, and dynamic displacement errors of a machine and its related components.

Machine tools are very sensitive to impact or shock; even heavy castings may not be rigid enough to
withstand stresses caused by a fall during transportation, resulting in deformations and possibly cracks,
rendering the entire machine tool useless. In general, machine tool tests must be carried out at the user's
place and not only before transportation. Machine tools are then carefully aligned during installation.
According to Dr G Schlesinger, the steps to be followed for the execution of an acceptance test are as
follows:

1) Decision regarding a suitable location for the machine tool
2) Layout of a proper foundation plan
3) Preparing the foundation, followed by curing
4) Lifting and erecting the machine tool on the foundation
5) Leveling the machine tool before starting the test
6) Connecting and grouting the foundation bolts
7) Carrying out second-leveling after setting of the foundation bolts
8) Checking final leveling before testing and commissioning

The continuously increasing demand for highly accurate machine components has led to considerable
research towards the means by which the geometric accuracies of a machine can be improved and
maintained. To ensure that a machine tool is capable of manufacturing products with the desired
accuracy, certain tests are required to be performed on it. Machine tools are tested at different stages
such as during manufacturing, assembling, installation and overhauling, as per the accuracy test chart,
in order to check their conformance to the desired specification levels. In general, these tests are
classified on a broad basis as practical (performance) tests and geometric (alignment) tests.

5.1 GEOMETRICAL (ALIGNMENT) TESTS

Geometric accuracy largely influences the product quality and precision to be maintained during the
service life of a machine tool. The distinct field of metrology, primarily concerned with geometric
tests (alignment) of machine tools under static and dynamic conditions, is defined as machine tool
metrology. Geometric tests are carried out to check the grade of manufacturing accuracy describ-
ing the degree of accuracy with which a machine tool has been assembled. Alignment tests check the
relationship between various elements such as forms and positions of machine-tool parts and displace-
ment relative to one another, when the machine tool is unloaded. Various geometrical checks generally
carried out on machine tools are as follows.
i. Work tables and slideways for flatness
ii. Guideways for straightness
iii. Columns, uprights and base plates for deviation from the vertical and horizontal planes
iv. True running and alignment of shafts and spindles relative to other axes and surfaces
v. Spindles for correct location and their accuracy of rotation
vi. Ensuring the accuracy of rotation, which involves checking eccentricity, out-of-roundness, periodic
axial slip, and camming
vii. Parallelism, equidistance and alignment of slideways, and axes of various moving parts with respect
to the reference plane
viii. Checking of lead screws, indexing devices and other subassemblies for specific errors

5.1.1 Equipment Required for Alignment Tests


For an alignment test, any type of equipment may be used as long as the specified measurement can
be carried out with the required degree of accuracy. However, the following types of equipment are
generally used to carry out alignment tests.

1. Dial Gauges These are mostly used for alignment tests. The dial gauges used should have a
measuring accuracy in the order of 0.01 mm. The initial plunger pressure should vary between 40 to
100 grams, and for very fine measurements a pressure as low as 20 grams is desirable. Too low a spring
pressure on the plunger is a source of error in swingover measurements: in the upper position the spring
pressure and plunger weight act in the same direction, while in the lower position they act in opposite
directions. The dial gauge is fixed to a robust and stiff base (e.g., a magnetic base) and bars to
avoid displacements due to shock or vibration.

2. Test Mandrels These are used for checking the true running of the spindle. Test mandrels
deliver quality checking such as straightness and roundness during the acceptance test. There are two
types of test mandrels, namely, a) mandrel with a cylindrical measuring surface and taper shank that
can be inserted into the taper bore of the main spindle, and b) cylindrical mandrels that can be held
between centres. Test mandrels are hardened, stress-relieved and ground to ensure accuracy in testing.
The deflection caused by the weight of the mandrel is known as ‘natural sag’, which cannot be
overlooked. Sag occurs when the mandrel is fixed between centres and is more marked when it is
supported at one end only by the taper shank, while the outer end is free to overhang. To keep the sag
within permissible limits, mandrels with a taper shank are limited in length to between 100 and 500 mm.
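The magnitude of natural sag can be estimated from beam theory. The sketch below treats a mandrel held only by its taper shank as a cantilever deflecting under self-weight (δ = wL⁴/8EI); the steel properties and dimensions are assumed values for illustration:

```python
import math

# Natural sag of a mandrel supported only at the taper shank, modelled as a
# cantilever deflecting under its own weight: delta = w*L^4 / (8*E*I),
# with w = weight per unit length and I = pi*d^4/64 for a solid shaft.
# Assumed material: steel, E = 200 GPa, density = 7850 kg/m^3.

def mandrel_sag_mm(d_mm, L_mm, E=200e9, rho=7850.0, g=9.81):
    d, L = d_mm / 1000.0, L_mm / 1000.0
    w = rho * g * math.pi * d ** 2 / 4   # self-weight per unit length, N/m
    I = math.pi * d ** 4 / 64            # second moment of area, m^4
    return w * L ** 4 / (8 * E * I) * 1000.0  # sag in mm

# A 40-mm diameter mandrel overhanging 300 mm from the spindle taper
print(f"{mandrel_sag_mm(40, 300):.4f} mm")  # about 0.004 mm
```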

3. Spirit Levels Spirit levels are used in the form of bubble tubes mounted on cast-iron bases.
Horizontal and frame levels are the two types of spirit levels used for alignment tests. Spirit levels
used for high-precision measurements have a tolerance of 0.02 mm to 0.04 mm per 1 m, and
a sensitivity of about 0.03 mm to 0.05 mm per 1 m for each division. A bubble movement of one
division corresponds to a change in slope of 6 to 12 seconds.
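The correspondence between the mm-per-metre sensitivity and the quoted seconds per division can be checked directly:

```python
import math

# Convert a spirit-level sensitivity quoted in mm per metre to arc-seconds.
def mm_per_m_to_arcsec(s):
    return math.degrees(math.atan(s / 1000.0)) * 3600.0

# 0.03 to 0.05 mm per 1 m per division, as quoted above
print(round(mm_per_m_to_arcsec(0.03), 1))  # 6.2
print(round(mm_per_m_to_arcsec(0.05), 1))  # 10.3
```

These values fall within the 6 to 12 seconds quoted for a bubble movement of one division.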

4. Straight Edges and Squares Straight edges are made up of heavy, well ribbed cast iron or
steel and are made free of internal stresses. Their bearing surfaces are as wide as possible. The error at
the top of a standard square should be less than ±0.01 mm. A steel square is a precision tool used for
engraving the lines and also for comparing the squareness of two surfaces with each other.

5. Optical Alignment Telescope It is used to indicate the errors of alignment in vertical as


well as horizontal planes of the optical axis.

6. Waviness-Meter It is used for recording and examining the surface waviness with a magni-
fication of 50:1.

7. Autocollimator This can be used for checking deflections of long beds in horizontal, vertical
or an inclined plane, owing to its sensitivity in measuring.
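A common use of the autocollimator is a step-by-step straightness survey: the reflector is moved along the bed in increments equal to its base length, each angular reading is converted to a rise, and the cumulative rises form the profile. A minimal sketch with hypothetical readings (the helper below is illustrative):

```python
import math

# Straightness survey: each autocollimator reading (arc-seconds) over a
# reflector base of length b gives a rise b*tan(alpha); cumulative rises
# form the profile, assessed here against the end-to-end reference line.

def straightness_profile(arcsec_readings, base_mm):
    rises = [base_mm * math.tan(math.radians(a / 3600.0))
             for a in arcsec_readings]
    heights = [0.0]
    for rise in rises:
        heights.append(heights[-1] + rise)
    n = len(heights) - 1
    # deviation of each station from the line joining the end points
    return [h - heights[-1] * i / n for i, h in enumerate(heights)]

# Hypothetical readings (arc-seconds) with a 100-mm reflector base
dev = straightness_profile([2.0, 3.0, -1.0, -2.0, 1.0], base_mm=100.0)
print([round(d, 4) for d in dev])  # deviations in mm
```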

5.2 PERFORMANCE TEST (PRACTICAL TEST)

The sequence in which the alignment/geometrical tests are given is related to the subassemblies of a
machine and does not define the practical order of testing. In order to make checking or mounting of

instruments easier, tests are carried out in any convenient sequence. When inspecting a machine, it is
necessary to carry out all the tests described below, except for alignment test, which may be omitted in
mutual agreement between the buyer and the manufacturer.
Alignment tests alone are inadequate for machine testing as they do not account for variations in the
rigidity of machine-tool components, the quality of their manufacture and assembly, or the influence of
the machine-fixture-cutting tool-workpiece system rigidity on the accuracy of machining. The performance
test consists of checking the accuracy of a finished component under dynamic loading; it is carried out
to know whether the machine tool is capable of producing parts within the specified limits or not.
These tests should be carried out after the primary idle running of the machine tool with essential parts
of the machine having a stabilized working temperature. Moreover, these performance tests are carried
out only with the finishing cuts and not with roughing cuts, which are liable to generate appreciable cut-
ting forces. The manufacturer specifies the details of test pieces, cutting and test conditions.
Now let us consider the Indian Machine Tool Manufacturers' Association's standard IMTMAS:
5-1988, which describes both geometrical and practical tests for CNC turning centres with a horizontal
spindle up to and including a 1250-mm turning diameter, with corresponding permissible deviations with
reference to IS: 2063-1962, Code for Testing Machine Tools. (For conducting a performance test, the
specimens to be manufactured are also standardized; one such standard specimen is shown in Fig. 5.1.)
When establishing the tolerance for a measuring range different from that indicated in IS:
2063-1962, it is taken into consideration that the minimum tolerance is 0.002 mm for any proportional
value, and the calculated value is rounded off to the nearest 0.001 mm. However, the least count of
all measuring instruments need not be finer than 0.001 mm. The testing instruments are of an approved
type and are to be calibrated at a recognized temperature, conforming to the relevant published Indian
Standards. Wherever alternative methods of testing are suggested, the choice of method is left to the
manufacturer.
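The proportionality rule can be expressed as a short calculation; the numbers below are illustrative, not taken from the standard:

```python
# Scale a tolerance from its standard measuring range to a different range,
# clamp to the 0.002 mm minimum, and round to the nearest 0.001 mm,
# following the proportionality rule described above.

def scaled_tolerance(std_tol, std_range, actual_range):
    t = max(std_tol * actual_range / std_range, 0.002)
    return round(t, 3)

# e.g., a 0.02 mm tolerance specified over 300 mm, applied over 150 mm
print(scaled_tolerance(0.02, 300.0, 150.0))  # 0.01
print(scaled_tolerance(0.02, 300.0, 30.0))   # 0.002 (minimum applies)
```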

Fig. 5.1 A sample standard specimen for conducting a performance test



The notations employed in the deviation columns are as follows:

• 000/000 for deviation of perpendicularity; the values are expressed as ratios.
• ‘000 for any length of 000’ for deviation of straightness and parallelism; this expression is used for the
local permissible deviation, the measuring length being obligatory.
• 000 for deviation of straightness and parallelism; this expression is used to recommend a measuring
length, but when the proportionality rule comes into operation, the measuring length may differ from
that indicated.

5.3 MACHINE-TOOL TESTING

5.3.1 Alignment Testing of Lathe


Table 5.1 Specifications of alignment testing of lathe

Sl. Measuring Permissible


No. Test Item Figure Instruments Error (mm)
1. Leveling of Precision level or 0.01 to 0.02
machines (Straight- any other optical
ness of slideway—
carriage)
(a) Longitudinal direc- (a)
tion—straightness
of slideways in
vertical plane
(b) In transverse
direction (b)

2. Straightness of car- Dial gauge and 0.015 to 0.02


riage movement in test mandrel or
horizontal plane or straight edges
possibly in a plane (a) with parallel faces,
defined by the axis of between centres
centres and tool point
( Whenever test ( b) is
carried out, test (a) is
not necessary)
(b)

3. Parallelism of b Dial gauge 0.02 to 0.04


tailstock movement a
to the carriage
movement
(a) In horizontal Constant
plane, and ( b) in
vertical plane
4. Parallelism of spindle L Dial gauge and 0.05 to 0.02
b
axis to the carriage a
test mandrel
movement
(a) in horizontal plane,
and ( b) in vertical
plane

5. Difference in the height Dial gauge and 0.03


between headstock and test mandrel
tailstock

6. Parallelism of longitu- Dial gauge and 0.04/300


dinal movement of tool test mandrel free end of
slide to the spindle axis the mandrel
inclined up
0.03

7. Run-out of spindle Dial gauge 0.01


nose—centering sleeve
or cone

8. True running of the Dial gauge and a) 0.01


taper bore of the a b test mandrel b) 0.02 for
spindle (a) near to the L = 300
spindle nose, and (b) at L
a distance L

9. Squareness of the a Dial gauge and 0.02


transverse movement flat ground disc or
of the cross-slide to the straight edge.
spindle axis


10. Axial slip Dial gauge 0.015

11. Accuracy of the pitch, L Dial gauge and 0.015 to 0.04


generated by lead screw height bars
( Note: this test is to
be carried out only if
the customer requires a
certified lead screw.)

5.3.2 Alignment Testing of Column and Knee Type of Milling Machine

Table 5.2 Specifications of alignment testing of column and knee type of milling machine

Sl. Measuring Permissible


No. Test Item Figure Instruments Error (mm)
1. Table-top surface paral- Dial indicator 0.02
lel with centre line of with magnetic
spindle ( Position the base for firm grip
table at the centre of the
longitudinal movement
direction. Insert the
test bar into the spindle
hole. Read the indicator
at two places on the bar.
The largest difference is
the test value.)
2. To check the parallelism Dial indicator i. 0.02 for
of the table-top surface with magnetic 5000 mm
with longitudinal move- base for firm grip ii. 0.01 for
ment of the table (Fix 1000 mm
dial indicator on the
spindle or overarm.
Let the indicator point
touch the top surface.
Note the reading while
moving the table over
all its length. The larg-
est difference is the test
value.)
3. Testing of work-table Dial indicator 0.01
flatness in longitudinal with magnetic
and cross direction base for firm grip
( place the work table at
the middle centre of all
movement directions.
Place the dial indicator
on the surface of the
table.)
4. Checking of spindle Dial indicator 0.06 to 0.1
periphery run-out with magnetic
( place the indicator base for firm grip
point at the periphery
of the spindle. Note the
reading while rotating
the spindle. The largest
difference is the test
value.)
5. Testing of spindle end Dial indicator 0.2
face runout ( place the with magnetic
dial indicator and touch base for firm grip
the edge of the spindle
and face. Note the indi-
cator while turning the
spindle. The largest dif-
ference is the test value.)
6. Alignment of arbor Dial indicator 0.02 to 0.03
support with the spindle with magnetic
( insert the test bar into base for firm grip
the arbor support hole.
Fix the indicator on
the spindle and allow
its point to touch the
bottom. Half of the
largest difference of the
reading in the spindle
in revolution is a test
value.)

5.3.3 Alignment Testing of Radial Drilling Machine

Table 5.3 Specifications of alignment testing of radial drilling machine

Sl. Measuring Permissible


No. Test Item Figure Instruments Error (mm)
1. Squareness of spindle Mid Dial indicator 0.01 to 0.1
axis to the base plate. position with magnetic
Arm and drilling head base for firm grip
locked before taking
measurement. Check
with the arm succes-
sively in its
1. Upper position
2. Mid-position
3. Lower position

2. Squareness of verti- Mid Dial indicator (a) 0.05


cal movement of the position with magnetic (b) 0.05
spindle in the base plate. base for firm grip
(a) In a plate parallel to
the plane of symmetry
of the machine
(b) In a plane perpen-
dicular to the plane
of symmetry of the
machine
[ Lock the arm and
drilling head.]

3. Leveling of base plate Mid position Dial indicator 0.025 to 0.03


with magnetic
base for firm grip
4. Flatness of base plate Spirit level 0.04 to 0.1
A B

A B

A B

D D D

C C C

5.3.4 Testing of Computer Numerically Controlled Turning Centres


A Computer Numerically Controlled Turning Centre is defined as a multifunctional CNC turning
machine that includes the capability of milling and drilling by the addition of a power-driven


Fig. 5.2 Computer Numerically Controlled Turning Centres


1. Headstock, 2. Work spindle/chuck, 3. Bed, 4. Carriage, 5. Turret slide, 6. Turret, 7. Tailstock, 8. Tailstock
spindle sleeve, 9. Non-rotating tool, 10. Power-driven tool

tool. It also has a work-holding spindle which can be oriented and driven discretely and/or as a feed
axis. The machine size ranges (turning diameter, i.e., the maximum diameter that can be turned over the
bed) are up to 160 mm, 160 mm to 315 mm, 315 mm to 630 mm, and 630 mm to 1250 mm. While preparing
this standard, IMTMAS considered assistance from the UK proposal ISO TC 39/SC2 (Secr. 346) N-754,
JIS B 6330 and JIS B 6331, ISO 1708, and ISO 6155 Part I.
Table 5.4 Specifications of a CNC turning centre

Sl. Measuring Permissible deviations


No. Figure Object Instruments for turning diameters
( I) Geometrical Tests: all diameters are in mm
BED
1. Leveling of car- Precision levels DC ≤ 500
riage slide ways a)
a) In longitudi- i) 0.015 (Convex)
nal direction 500 < DC ≤ 1000
(a)
b) In transverse ii) 0.02 (Convex—
direction local tolerance 0.008
for any length of 250)
(b) 0.03 Convex—local
tolerance of 0.01 for
DC – Distance between Centers
any length of 250.
b) 0.04/1000

(a) (b)

Carriage
2.

(a)

Wire

Deviation

(b)
3. b

L- Constant

Headstock Spindle


4.

a
F

5.

6.

7.

a b

8. a b

9.

10.

11. 50

Alternate

12. A B
b

a
13. a

b
50

14. A B

Alternate

15.

16. b

Alternate

17.

18.

19.

Rotating Tool Spindle (Axial and Radial)


20.
a

b
b a

21.

22. a

b
Bed
23. a

Cross slide

24.

Axial Tool

25.

Radial Tool

(II) Practical Tests: all dimensions are in mm.


P1 ∅ d2

∅ d1
∅ D3
∅ D5
∅ D6

∅ D4

∅ D1

∅ D2

40 110
155

P2 10 max

∅ 10
min
∅ D2

10
max

P3 ∅D

P4 L

Review Questions

1. What is the meaning of alignment test?


2. State the alignment test of a milling machine.
3. Write short notes on
a. Alignment test of lathe machine
b. Alignment test of radial drilling machine
c. Acceptance tests for machine tools
4. Explain how an autocollimator can be used for straightness measurement.
5. Explain how the straightness of a lathe bed may be checked by using a spirit level.

6. Describe the set-up for testing the following in case of a horizontal milling machine.
a. Work-table surface parallel with the longitudinal movement
b. True running of the axis of rotation of the arbor
7. Explain the procedure with a neat sketch to check the alignment of both centres of a lathe machine
in a vertical plane.
8. Explain the principle of alignment, as applied to measuring instruments and machine tools.
9. State the geometrical checks made on machine tools before acceptance.
10. Distinguish between ‘alignment test’ and ‘performance test’.
11. Name the various instruments required for performing the alignment tests on machine tools.
12. Name the various alignment tests to be performed on the following machines. Describe any two of
them in detail using appropriate sketches.
a. Lathe
b. Drilling Machine
6 Limits, Fits and Tolerances (Limit Gauge and its Design)

‘Limits, Fits, and Tolerances’—key terms… a base of Quality Control…


Timke N S (Director, Creative Tool India Ltd., Pune)

INTRODUCING GAUGES

An exact size can't be obtained in practice repeatedly. It is therefore logical to consider the variations
in the dimensions of the part as being acceptable, if its size is known to lie between a maximum and a
minimum limit. This difference between the size limits is called tolerance. These variations are permitted
for unavoidable imperfections in manufacturing, but it is seen that they do not affect the functional
requirements of the part under consideration. This is done intentionally to reduce the manufacturing
cost.

Under certain conditions, the limits imposed on an assembly may be so close that, to ensure random
selection, the close limits imposed on the individual details would lead to an expensive method of
manufacturing. A practical alternative to this problem is to make individual parts to meet wider
tolerances, and then to separate them into categories according to their actual sizes. An assembly is then
made from the selected categories; this process is known as selective assembly. It is ideally required
where the objective is to make a ‘shaft’ and ‘hole’ with a definite fit and not within a permissible range
of limits. This fit is known as ‘selective fit’, usually used to avoid extreme tightness and looseness. For
the purpose of an assembly of machine parts, the main types of fits are the clearance fit, transition fit
and interference fit. IS: 2709 gives suitable guidelines for selecting various types of fits for intended
applications. The Newall system was probably the first system in Great Britain that attempted to
standardize the system of limits and fits. In India, we follow IS: 919-1963 for the system of limits and
fits.

A gauge is an inspection tool without a scale, and is the direct or reverse physical replica of the object
dimension to be measured. To avoid any dispute between the manufacturer and the purchaser, IS:
3455-1971 gives guidelines for selecting the types of gauges for specific applications. An advantage of
using gauges for cylindrical work is that the GO ring gauge may detect errors that may not be detected
by the GO gap gauge, such as lobing and raised imperfections. As per W Taylor, the GO gauge should
check the size dimension along with its related (geometrical) parameters.

6.1 INTRODUCTION

The proper functioning of a manufactured product for a designed life depends upon its correct size rela-
tionship between the various components of the assembly. This means that components must fit with
each other in the required fashion. (For example, if the shaft is to slide in a hole, there must be enough
clearance between the shaft and the hole to allow the oil film to be maintained for lubrication.) If the
clearance between two parts is too small, it may lead to seizure of the components; and if the clearance is
too large, there would be vibration and rapid wear, ultimately leading to failure. To achieve the required
conditions, the components must be produced with exact dimensions specified at the design stage in
part drawing. But, every production process involves mainly three elements, viz., man, machine and
materials (tool and job material). Each of these has some natural (inherent) variations, which are due to
chance causes and are difficult to trace and control, as well as some unnatural variations which are due
to assignable causes and can be systematically traced and controlled. Hence, it is very difficult to pro-
duce extremely similar or identical (sized) components. Thus, it can be concluded that due to inevitable
inaccuracies of manufacturing methods, it is not possible to produce parts to specified dimensions but
they can be manufactured economically to a size that lies between two limits. The terms shaft and hole refer to external and internal dimensions respectively. By specifying an exact size for one and varying the other, we could, in principle, obtain the desired fitment between the shaft and the hole; practically, it is impossible to do so. Hence, the degree of tightness or looseness between the two mating parts, which is called fit, is generally specified.

6.2 CONCEPT OF INTERCHANGEABILITY

The concept of mass production originated with the automobile industry. MODEL-T of Ford Motors
was the first machine to be mass-produced. The concept of interchangeability was introduced first in
the United States. But in the early days, it was aimed at quick and easy replacement of damaged parts by
attaining greater precision in manufacture and not at achieving cheap products in large quantities. Till the 1940s, every component was manufactured in-house. After the 1940s, however, the automobile companies started outsourcing for carrying out roughing operations. Slowly and gradually, the outsourcing
moved on from roughing components to finished components and from finished components to fin-
ished assemblies. The automobile industry started asking suppliers to plan for the design, development
and manufacture of products to be used in producing cars and trucks.
In mass production, the repetitive production of products and their components entirely depends
upon interchangeability. When any component, chosen at random, assembles properly with any mating component while satisfying the functionality aspect of the assembly/product, this condition is known as interchangeability. In other words, it is a condition which exists when two or more items possess such functional and physical characteristics so as to be equivalent in performance and durability;
and are capable of being exchanged one for the other without alteration of the items themselves, or
of adjoining items, except for adjustment, and without selection for fit and performance. As per ISO-
IEC, interchangeability is the ability of one product, process or service to be used in place of another
to fulfill the same requirements.
128 Metrology and Measurement

This condition that exists between devices or systems that exhibit equivalent functionality,
interface features and performance to allow one to be exchanged for another, without alteration,
and achieve the same operational service is called interchangeability. Moreover, we could say, it is
an alternative term for compatibility. And hence it requires the uniformity of the size of compo-
nents produced, which ensures interchangeability. The manufacturing time is reduced and parts, if
needed, may be replaced without any difficulty. For example, if we buy a spark plug for a scooter from the market, we find that it fits readily into the threaded hole in the cylinder head of the scooter. We just need to specify the size of the spark plug to the shopkeeper. The
Standardization is necessary for interchangeable parts and is important for economic reasons. Some
examples are shown in Fig. 6.1.
In mass production, since the parts need to be produced in minimum time, certain variations are
allowed in the sizes of parts. Shaft and hole sizes are specified, along with the acceptable variation in each size. This allows deviation from the nominal size in such a way that any shaft will mate with any hole and function correctly for the designed life of the assembly. But the manufacturing system must have the
ability to interchange the system components with minimum effect on the system accuracy. And inter-
changeability ensures the universal exchange of a mechanism or assembly. Another parallel terminol-
ogy, ‘exchangeability’ is the quality of being capable of exchange or interchange.
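The guarantee that any shaft will mate with any hole reduces to a worst-case check: the largest permissible shaft must still clear the smallest permissible hole. A minimal sketch; the function name and the limit values are illustrative assumptions, not taken from the text:

```python
def is_interchangeable(hole_min, hole_max, shaft_min, shaft_max):
    """Worst-case check: ANY shaft within its limits mates with ANY hole
    within its limits only if the biggest shaft clears the smallest hole.
    (hole_max and shaft_min are kept in the signature for symmetry.)"""
    return hole_min - shaft_max > 0

# Shaft limits 29.96-30.00 mm in a hole of 30.05-30.09 mm:
# worst-case clearance = 30.05 - 30.00 = 0.05 mm, so parts interchange.
print(is_interchangeable(30.05, 30.09, 29.96, 30.00))  # True
```

If the shaft's upper limit crosses the hole's lower limit, random assembly can no longer be guaranteed and either the tolerances must be tightened or selective assembly (Section 6.3) must be used.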

Fig. 6.1 Examples of interchangeability: a rolled-ball screw assembly, a stud, a roller bearing assembly, and a drill chuck assembly

Using interchangeability, the production of mating parts can be carried out at different places by different operators, which reduces assembly time considerably along with reducing the skill requirements at work. Proper division of labour can be done. One important advantage is that replacement of worn-out or defective parts, and repair, becomes very easy.

6.3 SELECTIVE ASSEMBLY

A product’s performance is often influenced by the clearance or, in some cases, by the preload of its
mating parts. Achieving consistent and correct clearances and preloads can be a challenge for assem-
blers. Tight tolerances often increase assembly costs because labour expenses and the scrap rate go up.
The tighter the tolerances, the more difficult and costly the component parts are to assemble. Keeping
costs down while maintaining tight assembly tolerances can be made easier by a process called selective
assembly, or match gauging.
The term selective assembly describes any technique used when components are assembled from sub-
components such that the final assembly satisfies higher tolerance specifications than those used to
make its subcomponents. The use of selective assembly is inconsistent with the notion of interchange-
able parts, and the technique is rarely used at this time. However, certain new technologies call for
assemblies to be produced to a level of precision that is difficult to reach using standard high-volume
machining practices.
To match gauge for selective assembly, one group of components is measured and sorted into groups by dimension, prior to the assembly process. This is done for both mating parts. One or more components are then measured and matched with a presorted part to obtain an optimal fit to complete the assembly. It results in complete protection against defective assemblies and reduces the matching cost. Consider the case of bearing assembly on a shaft (shown in Fig. 6.2) done by the selective assembly method. Pick and measure a shaft. If it is a bit big, pick a big bearing to get the right clearance. If it is a bit small, pick a small bearing. For this to work over a long stretch, there must be about the same number of big shafts as big bearings, and the same for small ones.

Fig. 6.2 Bearing assembly on a shaft (the clearance is important)
By focusing on the fit between mating parts, rather than the
absolute size of each component, looser component tolerances
can be allowed. This reduces assembly costs without sacrificing product performance. In addi-
tion, parts that fall outside their print tolerance may still be useable if a mating part for it can be
found or manufactured, thus reducing scrap.
Consider the example of the assembly of a shaft with a hole. Let the hole size be 25 ± 0.02 mm and the clearance required for assembly be 0.14 mm on the diameter. Let the tolerance on the hole and shaft each equal 0.04 mm. Then, the hole-diameter range (25 ± 0.02 mm) and the shaft-diameter range (24.86 ± 0.02 mm) could be used. By sorting and grading, the shafts and holes can be economically and selectively assembled with the clearance of 0.14 mm in combinations given as follows.

Hole-diameter and shaft-diameter pairs (respectively) are

24.98 and 24.84, or 25.00 and 24.86, or 25.02 and 24.88, etc.
Not all products are candidates for selective assembly. When tolerances are broad or clearances are
not critical to the function of the final assembly, selective assembly isn’t necessary. Selective assembly
works best when the clearance or preload tolerance between parts is tight. Selective assembly is also a
good strategy when a large number of components must be stacked together to form the assembly, as
with an automobile transmission system. In that instance, holding tolerances tight enough for random
assembly while maintaining the correct clearance or preload would be impractical.
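The sort-and-match procedure described above can be sketched in a few lines: grade each measured part into a size band, then pair each hole with a shaft from the band that yields the target clearance. The band width, nominal sizes and measured values below are illustrative assumptions chosen to mirror the 0.14-mm example, not a prescribed procedure:

```python
from collections import defaultdict

def grade(size, nominal, band=0.02):
    """Index of the size band (of width `band`) a measured part falls into."""
    return round((size - nominal) / band)

def match_pairs(holes, shafts, hole_nom=25.00, shaft_nom=24.86, band=0.02):
    """Pair each hole with a shaft from the matching band, so that every
    pair gets roughly the same clearance (here hole_nom - shaft_nom = 0.14 mm)."""
    bins = defaultdict(list)
    for s in shafts:
        bins[grade(s, shaft_nom, band)].append(s)
    pairs = []
    for h in holes:
        g = grade(h, hole_nom, band)   # same band index -> same clearance
        if bins[g]:
            pairs.append((h, bins[g].pop()))
    return pairs

pairs = match_pairs([24.98, 25.00, 25.02], [24.84, 24.86, 24.88])
print(pairs)  # [(24.98, 24.84), (25.00, 24.86), (25.02, 24.88)]
```

Note the practical caveat from the text: this only works over a long run if the size distributions of shafts and bearings are similar, so that each band has mates available.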

6.4 SYSTEM’S TERMINOLOGIES

The ISO system of tolerances and fits can be applied to tolerances and deviations of smooth parts and to fits created by their coupling. It is used particularly for cylindrical parts with round sections. Tolerances and deviations in this standard can also be applied to smooth parts of other sections. Similarly, the system can be used for coupling (fits) of cylindrical parts and for fits with parts having two parallel surfaces (e.g., fits of keys in grooves).
The primary aim of any general system of standard fits and limits is to give guidance to the user for
selecting basic fundamental clearances and interferences for a given application; and for a fit, to deter-
mine tolerances and deviations of parts under consideration according to the standard ISO 286:1988.
This standard is identical with the European standard EN 20286:1993 and defines an internationally
recognized system of tolerances, deviations and fits. The standard ISO 286 is used as an international
standard for linear dimension tolerances and has been accepted in most industrially developed coun-
tries in identical or modified wording as a national standard ( JIS B 0401, DIN ISO 286, BS EN 20286,
CSN EN 20286, etc.). In India, we follow Indian Standards (i.e., IS: 919). This standard specifies the 18
grades of fundamental tolerances, which are the guidelines for accuracy of manufacturing. The Bureau
of Indian Standards (BIS) recommends a hole-basis system and the use of a shaft-basis (unilateral or
bilateral) system is also included. This standard uses terms for describing a system of limits and fits.
These terminologies can be well explained using the conventional diagram shown in Fig. 6.3.

1. Shaft The term ‘shaft’, used in this standard has a wide meaning and serves for specification of
all outer elements of the part, including those elements, which do not have cylindrical shapes.

2. Hole The term ‘hole’ can be used for specification of all inner elements regardless of their
shape.

3. When an assembly is made of two parts, one is known as the male (outer element of the part) surface and the other as the female (inner element of the part) surface. The male surface is referred to as the 'shaft' and the female surface as the 'hole'.

4. Basic Size The basic size or nominal size is the standard size for the part and is the same for both the hole and its shaft. This is the size which is obtained by calculation for strength.

Fig. 6.3 Conventional diagram, showing the basic size, the zero line (line of zero deviation), the upper and lower deviations, the tolerance zones of the hole and shaft with their maximum and minimum diameters, and a clearance fit

5. Actual Size Actual size is the dimension as measured on a manufactured part. As already men-
tioned, the actual size will never be equal to the basic size and it is sufficient if it is within predetermined
limits.

6. Limits of Size These are the maximum and minimum permissible sizes of the part (extreme
permissible sizes of the feature of the part).

7. Maximum Limit The maximum limit or high limit is the maximum size permitted for the part.

8. Minimum Limit The minimum limit or low limit is the minimum size permitted for the part.

9. Zero Line In a graphical representation of limits and fits, the zero line is the straight line to which the deviations are referred. It is the line of zero deviation and represents the basic size. When the zero line is drawn horizontally, positive deviations are shown above and negative deviations below this line.

10. Deviation It is the algebraic difference between a size (actual, limit of size, etc.) and the cor-
responding basic size.

11. Upper Deviation It is designated as ES (for a hole) and es (for a shaft). It is the algebraic difference between the maximum limit of size and the corresponding basic size. When the maximum limit of size is greater than the basic size, it is a positive quantity, and when the maximum limit of size is less than the basic size, it is a negative quantity.

12. Lower Deviation It is designated as EI (for a hole) and ei (for a shaft). It is the algebraic difference between the minimum limit of size and the corresponding basic size. When the minimum limit of size is greater than the basic size, it is a positive quantity, and when the minimum limit of size is less than the basic size, it is a negative quantity.

13. Fundamental Deviations (FD) This is the deviation, either upper or the lower deviation,
which is the nearest one to the zero line for either a hole or a shaft. It fixes the position of the tolerance
zone in relation to the zero line (refer Fig. 6.4).

14. Actual Deviation It is the algebraic difference between an actual size and the corresponding
basic size.

15. Mean Deviation It is the arithmetical mean between the upper and lower deviation.

1. Upper deviation = max. limit of size − basic size
2. Lower deviation = min. limit of size − basic size
3. Tolerance = max. limit of size − min. limit of size
             = upper deviation − lower deviation
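These three relations can be checked numerically. A minimal sketch; the 30-mm shaft limits used here are illustrative values, not from the definitions themselves:

```python
def deviations(basic, max_limit, min_limit):
    """Return (upper deviation, lower deviation, tolerance) per the
    relations above; values rounded to 3 decimals for readability."""
    upper = max_limit - basic          # relation 1
    lower = min_limit - basic          # relation 2
    tol = max_limit - min_limit        # relation 3, first form
    # the two forms of relation 3 must agree
    assert abs(tol - (upper - lower)) < 1e-9
    return round(upper, 3), round(lower, 3), round(tol, 3)

# Shaft of basic size 30.00 mm with limits 30.04 mm / 29.96 mm
print(deviations(30.00, 30.04, 29.96))  # (0.04, -0.04, 0.08)
```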

Fig. 6.4 Deviations and tolerance, showing the zero line, the basic size, the maximum and minimum limits of size, the upper and lower deviations, and the tolerance zone



16. Tolerance It is the difference between the upper limit and the lower limit of a dimension. It
is also the maximum permissible variation in a dimension.

17. Tolerance Zone It is a function of basic size. It is defined by its magnitude and by its posi-
tion in relation to the zero line. It is the zone bounded by the two limits of size of a part in the graphical
presentation of tolerance.

18. Tolerance Grade It is the degree of accuracy of manufacturing. It is designated by the letters IT (standing for International Tolerance) followed by a number, i.e., IT01, IT0, IT1, and so on up to IT16; the larger the number, the larger the tolerance.

19. Tolerance Class This term is used for a combination of fundamental deviation and toler-
ance grade.

20. Allowance It is an intentional difference between the maximum material limits of mating parts.
For a shaft, the maximum material limit will be its high limit and for a hole, it will be its low limit.

21. Fits The relationship existing between two parts, shaft and hole, which are to be assembled,
with respect to the difference in their sizes is called fit.
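For tolerance grades IT5 to IT16 (item 18 above), the ISO 286 system derives each tolerance from a standard tolerance unit i (in micrometres), i = 0.45·D^(1/3) + 0.001·D, where D (in mm) is the geometric mean of the standard diameter step; each grade is then a fixed multiple of i (e.g., IT6 = 10i, IT7 = 16i). The sketch below abbreviates the diameter-step table to a few steps and omits grades below IT5, which use different formulae:

```python
import math

# Multiples of the tolerance unit i for grades IT5..IT16 (ISO 286)
IT_MULTIPLES = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64,
                11: 100, 12: 160, 13: 250, 14: 400, 15: 640, 16: 1000}

# A few standard diameter steps in mm (table abbreviated)
STEPS = [(1, 3), (3, 6), (6, 10), (10, 18), (18, 30), (30, 50), (50, 80)]

def it_tolerance_um(diameter_mm, grade):
    """Standard tolerance in micrometres for grades IT5-IT16."""
    lo, hi = next(s for s in STEPS if s[0] < diameter_mm <= s[1])
    d = math.sqrt(lo * hi)                  # geometric mean of the step
    i = 0.45 * d ** (1 / 3) + 0.001 * d     # tolerance unit in micrometres
    return round(IT_MULTIPLES[grade] * i)

print(it_tolerance_um(25, 7))  # 21 (micrometres, the published IT7 value for 18-30 mm)
```

This reproduces the tabulated values, e.g., 21 µm for a 25-mm diameter at IT7 and 13 µm at IT6, which is why tolerance tables list the same grade value for every size within a diameter step.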

6.5 LIMITS AND TOLERANCES

In the earlier part of the nineteenth century, the majority of components were individually mated together, their dimensions being adjusted (machined) until the required assembly fit was obtained. This trial-and-error type of assembly method demands an operator's skill, so the quality and quantity of the output depend upon the operator. In today's context of a mass-production environment, interchangeability

Fig. 6.5 Tolerance and allowance, showing the high and low limits (maximum and minimum sizes) of the mating parts, their tolerances, and the allowance between them



and continuous assembly of many complex components could not exist under such a system. Modern
production engineering is based on a system of limits, fits and tolerances.

6.5.1 Limits
In a mass-production environment and in case of outsourcing, different operators on different similar
machines and at different locations produce subassemblies. So according to K J Hume “It is never possible
to make anything exactly to a given size of dimensions”. And producing a perfect size is not only difficult, but is
also a costly affair. Hence, to make the production economical some permissible variation in dimension
has to be allowed to account for variability. Thus, dimensions of manufactured parts, only made to lie
between two extreme dimensional specifications, are called maximum and minimum limits. The maxi-
mum limit is the largest size and the minimum limit is the smallest size permitted for that dimension.

6.5.2 Tolerance
The inevitable human failings and machine limitations prevent achieving ideal production conditions. Hence, a purposefully permissible variation in size or dimension, called tolerance (refer Fig. 6.6), is to be considered while producing a part dimension. The difference between the upper and lower margins for variation of workmanship is called the tolerance zone. To understand the tolerance zone, one must know the term basic size. Basic size is the dimension worked out from purely design considerations. Thus, generally, basic dimensions are first specified, and then the value (of tolerance) is indicated as to how much variation in the basic size can be tolerated without affecting the functioning of the assembly.

Fig. 6.6 Tolerance, showing the tolerance zones of the hole and shaft between their maximum and minimum diameters

Tolerance can be specified on both the mating elements, i.e., on the shaft and/or on the hole. For example, a shaft of 30-mm basic size along with a tolerance value of 0.04 may be written as 30 ± 0.04. Therefore, the maximum permissible size (upper limit) is 30.04 mm and the minimum permissible size (lower limit) is 29.96 mm. Then the value of the tolerance zone is (upper limit − lower limit) = 0.08 mm.
The practical meaning of the word tolerance is that the worker is not expected to produce a part
with exact specified size, but that a definite small size error (variation) is permitted. Thus, tolerance is
the amount by which the job is allowed to deviate from the dimensional accuracy without affecting
its functional aspect when assembled with its mating part and put into actual service. If high performance is the criterion for designing the assembly, then functional requirements will be the dominating factor in deciding the tolerance value. But why, in some cases, are close tolerances specified for a specific job? This question may be answered by giving reasons like an inexperienced designer, a craze for precision, fear of interference, a change in company or vendor standards, or simply the practice of using tight tolerances. An experienced designer, however, first refers to the information available with him: the technical specifications of the machine tools used for producing the part, the material used, and the accuracy of the measuring instrument used to inspect the produced part. He then establishes realistic and optimum values of tolerances. The effect of working tolerance on the cost of production is shown in Fig. 6.7. It is very clear that production cost and tolerance have an inversely proportional relationship. This is because, as closer tolerances are specified, we have to use very high-precision machines and tools, trained and highly skilled operators, highly precise and accurate testing and inspection devices, and close supervision and control.

Fig. 6.7 Effect of working tolerance on production cost
Tolerances can be specified by two systems:

1. Unilateral Tolerances System In this type of system, the part dimension is allowed to
vary on one side of the basic size, i.e., either below or above it (refer Fig. 6.8). This system is preferred
in an interchangeable manufacturing environment. This is because it is easy and simple to determine
deviations. This system helps standardize the GO gauge end. This type of tolerancing method is
helpful for the operator, as he has to machine the upper limit of the shaft and the lower limit of the
hole knowing fully well that still some margin is left for machining before the part is rejected.
Examples of unilateral systems
1) 30 +0.02/+0.01,  2) 30 +0.02/+0.00,
3) 30 +0.00/−0.01,  4) 30 +0.00/−0.02

2. Bilateral Tolerances System In this system, the dimension of the part is allowed to vary
in both the directions of the basic size. So, limits of the tolerances lie on either side of the basic size.
Using this system, as tolerances are varied, the type of fit gets varied. When a machine is set for a basic
size of the part then for mass production, the part tolerances are specified by the bilateral system.
Examples of bilateral systems
1) 30 +0.02/−0.01,  2) 30 ± 0.02

Fig. 6.8 Unilateral tolerance system, with the tolerance zones lying wholly on one side of the basic size

6.5.3 Maximum and Minimum Metal Limits


Consider a tolerance specified for a shaft along with a basic dimension, given as 30 ± 0.04 mm. Hence, the upper dimension limit will be 30.04 mm and the lower dimension limit will be 29.96 mm. Then, the Maximum Metal Limit (MML) for the shaft is 30.04 mm, as this limit corresponds to the maximum allowable amount of metal. And the Least (minimum) Metal Limit (LML) of the shaft dimension is 29.96 mm, as it gives the minimum allowable amount of metal. Similar terminologies are used for a hole. Figure 6.9 explains the concept clearly.
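The rule flips between a shaft and a hole: more metal means a bigger shaft but a smaller hole. A small sketch of this rule; the hole limits used in the example call are an illustrative assumption:

```python
def metal_limits(low, high, feature):
    """Return (MML, LML) for a 'shaft' (external) or 'hole' (internal)
    feature, given its low and high limits of size."""
    if feature == "shaft":
        return high, low   # the biggest shaft carries the most metal
    if feature == "hole":
        return low, high   # the smallest hole leaves the most metal
    raise ValueError(feature)

print(metal_limits(29.96, 30.04, "shaft"))  # (30.04, 29.96)
print(metal_limits(30.00, 30.08, "hole"))   # (30.0, 30.08)
```

The MML (also called the maximum material condition) is the limit that gauging concentrates on, since it is the condition in which a shaft is most likely to jam in its hole.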

Fig. 6.9 Maximum and minimum metal limits: the minimum size of the hole and the maximum size of the shaft are the maximum material conditions



Figure 6.10 shows logical ways to meet assembly tolerances. This diagram is called the 'logic tree of tolerancing'. It maps the means available under deterministic coordination, statistical coordination, and no coordination.

6.6 FITS

The variations in the dimensions of a shaft or a hole can be tolerated within desired limits so as to arrange for any desired fit. A fit is the relationship between two mating parts, viz., shaft and hole. This relationship is nothing but the algebraic difference between their sizes. It can be defined as 'the relationship existing between two parts, shaft and hole, which are to be assembled, with respect to the difference in their sizes before assembly'. It is also the degree of tightness or looseness between the two mating parts. Depending on the mutual position of the tolerance zones of the coupled parts, three types of fits can be distinguished:

Fig. 6.10 Logic tree of tolerancing. Assembly tolerances may be met (i) by deterministic coordination: fitting/adjustment, selective assembly or simultaneous machining, with 100% (functional-build) inspection; without such coordination these assemblies would be too hard or uneconomical to build, and the parts are not interchangeable; (ii) with no coordination: worst-case tolerancing, build to print (net build) with part tolerance = assembly tolerance/N; assembly tolerances are then met 100% of the time and parts are interchangeable; (iii) by statistical coordination: statistical tolerancing, build to print with part tolerance = assembly tolerance/√N if the process distributions are normal, using SPC and Cpk on parts to keep the process mean at nominal (sample inspection where Cpk > 1, 100% inspection where Cpk < 1); failure to use SPC leads to an undocumented mean shift, errors grow with N, and parts are interchangeable almost all the time.



A. Clearance Fit It is a fit that always enables a clearance between the hole and shaft in the coupling.
The lower limit size of the hole is greater or at least equal to the upper limit size of the shaft.

B. Transition Fit It is a fit where (depending on the actual sizes of the hole and shaft) both
clearance and interference may occur in the coupling. Tolerance zones of the hole and shaft partly or
completely interfere.

C. Interference Fit It is a fit that always ensures some interference between the hole and shaft
in the coupling. The upper limit size of the hole is smaller or at least equal to the lower limit size of the
shaft.
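The three definitions above reduce to a direct comparison of limit sizes. A minimal sketch; the function name and the example limit values are illustrative:

```python
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Classify a hole/shaft pairing per the definitions above."""
    if hole_min >= shaft_max:
        return "clearance"      # every hole clears every shaft
    if hole_max <= shaft_min:
        return "interference"   # every shaft is bigger than every hole
    return "transition"         # the tolerance zones overlap

print(classify_fit(49.90, 50.10, 49.65, 49.85))  # clearance
print(classify_fit(49.65, 49.85, 49.90, 50.10))  # interference
print(classify_fit(49.90, 50.10, 50.00, 50.20))  # transition
```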

Fig. 6.11 Types of fits: (a) clearance fit, (b) interference fit, (c) transition fit, showing the relative positions of the hole and shaft tolerance zones

Properties and fields of use of preferred fits are described in the following overview. When
selecting a fit, it is often necessary to take into account not only constructional and technological but
also economic aspects. Selection of a suitable fit is important, particularly in view of those measuring
instruments, gauges and tools which are implemented in production. Therefore, while selecting a fit,
proven plant practices may be followed.

6.6.1 Clearance Fit


When the difference between the sizes of the hole and shaft, before assembly, is positive, the fit is called a clearance fit. In other words, in this type of fit, the largest permissible shaft diameter is smaller than the smallest permissible diameter of the hole.
For example,
1. Maximum size of hole—50.1 mm; Maximum size of shaft— 49.85 mm
2. Minimum size of hole— 49.9 mm; Minimum size of shaft— 49.65 mm
Fits with guaranteed clearance are designed for movable couplings of parts (pivots, running and
sliding fits of shafts, guiding bushings, sliding gears and clutch disks, pistons of hydraulic machines, etc.).

The parts can be easily slid one into the other and turned. The tolerance of the coupled parts and fit
clearance increases with increasing class of the fit.

Minimum Clearance In case of a clearance fit, it is the difference between the minimum size of
the hole and the maximum size of the shaft.

Maximum Clearance In case of a clearance fit, it is the difference between the maximum size of
the hole and minimum size of the shaft.
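Applied to the example given earlier in this section (hole 49.90-50.10 mm, shaft 49.65-49.85 mm), the two definitions evaluate directly; a small sketch, with rounding added only for readability:

```python
def clearances(hole_min, hole_max, shaft_min, shaft_max):
    """(minimum clearance, maximum clearance) for a clearance fit."""
    min_c = hole_min - shaft_max   # tightest possible pairing
    max_c = hole_max - shaft_min   # loosest possible pairing
    return round(min_c, 3), round(max_c, 3)

print(clearances(49.90, 50.10, 49.65, 49.85))  # (0.05, 0.45)
```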

1. Slide Clearance Fits (RC) When the mating parts are required to move slowly but in regu-
lar fashion in relation to each other, e.g., in the sliding change gears in the quick change gear box of a
machine tool, tailstock movement of a lathe, and feed movement of the spindle in case of a drilling
machine, sliding fits are employed. In this type of fit, the clearances kept are very small and may reduce
to zero. But, for slow and non-linear type of motion, e.g., motion between lathe and dividing head or
the movement between piston and slide valves, an ‘easy slide fit ’ is used. In this type of clearance fit, a
small clearance is guaranteed.

2. Running Clearance Fits (RC) When just sufficient clearance for an intended purpose (e.g., lubrication) is to be maintained between two mating parts which generally run at lower/moderate speeds, e.g., gear-box bearings, shafts carrying pulleys, etc., a close running fit is employed.
Medium-running fits are used to compensate the mounting errors. For this type, a considerable
clearance is maintained, e.g., in the shaft of a centrifugal pump. In case of considerable amount of
working temperature variations and/or high-speed rotary assembly, loose running fits are employed.
The following are grades of clearance fits recommended for specific requirements:

RC 1 Close sliding fits with negligible clearances for precise guiding of shafts with high require-
ments for fit accuracy. No noticeable clearance after assembly. This type is not designed for free
run.

RC 2 Sliding fits with small clearances for precise guiding of shafts with high requirements for fit
precision. This type is not designed for free run; in case of greater sizes, a seizure of the parts may oc-
cur even at low temperatures.

RC 3 Precision-running fits with small clearances with increased requirements for fit precision. De-
signed for precision machines running at low speeds and low bearing pressures. Not suitable where
noticeable temperature differences occur.

RC 4 Close running fits with smaller clearances with higher requirements for fit precision. Designed
for precise machines with moderate circumferential speeds and bearing pressures.

RC 5, RC 6 Medium-running fits with greater clearances with common requirements for fit preci-
sion. Designed for machines running at higher speeds and considerable bearing pressures.

RC 7 Free running fits without any special requirements for precise guiding of shafts. Suitable for
great temperature variations.

RC 8, RC 9 Loose running fits with great clearances and parts having great tolerances. Fits exposed
to effects of corrosion, contamination by dust and thermal or mechanical deformations.

6.6.2 Interference Fit


When the difference between the sizes of the hole and shaft before assembly is negative then the fit is
called interference fit.
For example,
Maximum size of hole— 49.85 mm; Maximum size of shaft—50.1 mm
Minimum size of hole— 49.65 mm; Minimum size of shaft— 49.9 mm

Minimum Interference In case of an interference fit, it is the arithmetical difference between the maximum size of the hole and the minimum size of the shaft before assembly.

Maximum Interference In case of an interference fit, it is the arithmetical difference between the minimum size of the hole and the maximum size of the shaft before assembly.
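For the interference-fit example above (hole 49.65-49.85 mm, shaft 49.90-50.10 mm), the two quantities work out as follows; a small sketch, with rounding added only for readability:

```python
def interferences(hole_min, hole_max, shaft_min, shaft_max):
    """(minimum interference, maximum interference) for an interference fit."""
    min_i = shaft_min - hole_max   # lightest possible press
    max_i = shaft_max - hole_min   # heaviest possible press
    return round(min_i, 3), round(max_i, 3)

print(interferences(49.65, 49.85, 49.90, 50.10))  # (0.05, 0.45)
```

The maximum interference governs the press force and the stresses in the hub, while the minimum interference governs the torque the coupling can be relied upon to transmit.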
Interference fits are rigid (fixed) fits based on the principle of constant elastic pre-stressing of
connected parts using interference in their contact area. Outer loading is transferred by friction between
the shaft and hole created in the fit during assembly. The friction is caused by inner normal forces
created as a result of elastic deformations of connected parts.
Interference fits are suitable for transfer of both large torques and axial forces in rarely
disassembled couplings of the shaft and hub. These fits enable high reliability of transfer of even
high loads; including alternating loads or loads with impacts. They are typically used for fastening
geared wheels, pulleys, bearings, flywheels, turbine rotors and electromotors onto their shafts,
with gear rings pressed onto wheel bodies, and arms and journals pressed onto crankshafts.
Press on, in general, means inserting a shaft of larger diameter into a hub opening, which is smaller.
After the parts have been connected (pressed-on), the shaft diameter decreases and the hub opening
increases, in the process of which both parts settle on the common diameter. Pressure in the contact
area is then evenly distributed, shown in Fig. 6.12. The interference d, given by the difference between
the assembly-shaft diameter and hub-opening diameter, is a characteristic feature and a basic quantity
of interference fit. The value of contact pressure, as well as loading capacity and strength of the fit,
depends on the interference size.
With respect to the fact that it is not practically possible to manufacture contact area diameters
of connected parts with absolute accuracy, the manufacturing (assembly) interference is a vague and
accidental value. Its size is defined by two tabular values of marginal interferences, which are given by
the selected fit (by allowed manufacturing tolerances of connected parts). Interference fits are then

Fig. 6.12 Even pressure distribution in contact area

designed and checked on the basis of these marginal assembly interferences. There are two basic ways
of solving assembly process in case of interference fits:

1. Longitudinal Press (Force Fit)[FN] A longitudinal press is the forcible pushing of the
shaft into the hub under a press, or using mechanical or hydraulic jigs in the case of smaller parts. When
using a longitudinal press, the surface unevenness of the connected parts is partially stripped and smoothed.
This results in a reduction of the original assembly interference and thus of the assembly loading
capacity. The amount of mounting smoothing of the surface depends on the treatment of the
connected parts' edge surfaces, the speed of pressing and, mainly, the roughness of the connected
parts. The press speed should not exceed 2 mm/s. To prevent seizing, steel parts are usually greased.
It is also necessary to grease the contact areas of large couplings with large interference, where
extremely high press forces are required. Parts made from different materials may be dry-pressed. Greasing
the contact areas eases the press process; on the other hand, it leads to a decrease in the friction
coefficient and the coupling loading capacity. From the technological point of view, a longitudinal press is
relatively simple and undemanding, but it gives a lower assembly loading capacity and reliability than
a transverse press.

2. Transverse Press (Shrink Fit)[FN] A transverse press is the non-forcible insertion of
the part after previously heating the hole (dilatation) or cooling the shaft (contraction). In the case of a
shrink fit, the effective interference also decreases to a certain level due to 'subsidence'; this decrease,
however, is significantly smaller than in the case of a longitudinal press. The value of subsidence depends
on the roughness of the connected areas. The loading capacity of shrink-fit couplings is approximately
1.5 times higher than that of force fits. The choice between heating and cooling depends on the dimensions
of the parts and the technical possibilities. During hole heating, care must be taken not to exceed the
temperature at which structural changes occur in the material (for steels, approximately 200°C to 400°C).
Heating of outer parts is usually done in an oil bath (up to 150°C) or in a gas or electric
furnace. Parts with small diameters have to be heated to a much higher temperature than large ones.
Cooling of shafts is usually used for smaller couplings, employing carbon dioxide (−70°C) or liquefied
air (−190°C). For couplings with large assembly interferences, a combination of both methods
may be used. Shrink fitting is unsuitable for parts made of heat-treated steels and in the case of a
heated part fitted on a hardened one; in such cases, it is necessary to cool the inner part or force-fit the
coupling. The following grades of interference fits are recommended for specific requirements:

FN 1 Light-drive fits with small interferences, designed for thin sections, long fits, or fits with cast-iron external members

FN 2 Medium-drive fits with medium interferences, designed for ordinary steel parts or fits with high-grade cast-iron external members

FN 3 Heavy-drive fits with great interferences, designed for heavier steel parts

FN 4, FN 5 Force fits with maximum interferences, designed for highly loaded couplings
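The heating estimate for a transverse press can be sketched numerically. The following is a minimal illustration, not part of any standard quoted here; the expansion coefficient for steel and the rule-of-thumb assembly clearance of 0.001 mm per mm of diameter are assumed values.

```python
# Illustrative sketch (assumed values, not from IS: 919): estimating the hub
# temperature needed for a shrink fit, using the linear thermal expansion
# relation  delta_d = alpha * d * delta_T.

ALPHA_STEEL = 11.5e-6  # per degC, typical linear expansion coefficient for steel (assumed)

def hub_heating_temperature(d_mm, interference_mm, ambient_c=20.0):
    """Hub temperature (degC) at which it slides freely over the shaft."""
    clearance_mm = 0.001 * d_mm              # assumed assembly clearance (rule of thumb)
    expansion_needed = interference_mm + clearance_mm
    delta_t = expansion_needed / (ALPHA_STEEL * d_mm)
    return ambient_c + delta_t

# A 60-mm shaft with 0.04 mm interference needs the hub at roughly 165 degC,
# safely below the ~200 degC limit for structural changes mentioned above.
t = hub_heating_temperature(60.0, 0.04)
```

A check like this makes it easy to see when heating alone is insufficient and cooling of the shaft (or a combination of both) must be used instead.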

6.6.3 Transition Fit


A fit which may provide either a clearance or an interference is called a transition fit. Here, the tolerance
zones of the hole and the shaft overlap, so the fit lies between a clearance and an interference fit.
Transition fits are designed for demountable, immovable couplings where precise location of the
coupled parts is the main requirement; the parts must be fixed mechanically to prevent one moving
against the other during assembly.

1. Push Fit (LT) Consider examples like change gears and slip bushings, whose sub-components
are disassembled during machine operation. These require a small clearance, for which a push fit is
suitably employed.

2. Wringing Fit (LT) In the case of reusable/repairable parts, the sub-parts must be replaceable
without any difficulty. In these cases, assembly is done employing a wringing fit. The following grades
of transition fits are recommended for specific requirements:

LT 1, LT 2 Tight fits with small clearances or negligible interferences (easily detachable fits of hubs
of gears, pulleys and bushings, retaining rings, bearing bushings, etc.). The parts can be assembled or
disassembled manually.

LT 3, LT 4 Similar fits with small clearances or interferences (demountable fits of hubs of gears and
pulleys, manual wheels, clutches, brake disks, etc.). The parts can be coupled or disassembled without
any great force by using a rubber mallet.

LT 5, LT 6 Fixed fits with negligible clearances or small interferences (fixed plugs, driven bushings,
armatures of electric motors on shafts, gear rims, flushed bolts, etc.). These can be used for the assembly
of parts using low pressing forces.

6.7 SYSTEM OF FIT

Although parts can, in general, be coupled using any combination of hole and shaft tolerance zones,
only two methods of coupling holes and shafts are recommended, for constructional, technological and
economic reasons.

6.7.1 Hole-basis System


The desired clearances and interferences in the fit are achieved by combining various shaft tolerance
zones with the hole tolerance zone 'H'. In this system, shown in Fig. 6.13(a), the lower deviation of the
hole is always zero, so the lower limit of the hole coincides with the basic size. The hole-basis system is
preferred from the manufacturing point of view, because it is more convenient and economical to produce
holes with standard tools, e.g., drills, reamers, broaches, etc. (whose sizes are not adjustable). It also
requires far less storage space than stocking a separate hole-producing tool for every shaft size.
On the other hand, the shaft size can be varied comparatively easily about the basic size by means of
turning and/or grinding operations, and gauging of shafts can be done conveniently and quickly
with adjustable gap gauges.

6.7.2 Shaft-basis System


The desired clearances and interferences in the fit are achieved by combining various hole tolerance
zones with the shaft tolerance zone 'h'. In this system, the upper deviation of the shaft is always zero,
so the upper limit of the shaft coincides with the basic size. The system, shown in Fig. 6.13(b), is not
suitable for mass production, as it is inconvenient, time-consuming and costly to produce shafts of exact size.
It also requires a large amount of capital and storage space for the tools used to produce holes
of different sizes, and it is not convenient or easy to inspect a produced hole and make it fit a
standard-sized shaft.

Fig. 6.13(a) Hole-basis system

Fig. 6.13(b) Shaft-basis system

6.8 INDIAN STANDARDS SPECIFICATIONS AND APPLICATION

As discussed in the earlier article, in India we have the IS: 919 recommendation for limits and fits in
engineering. This standard is largely based on the British Standard BS: 1916-1953. It was first published
in 1963 and has been modified several times, the last revision being in 1990. In the Indian Standards,
the total range of sizes up to 3150 mm is covered in two parts: sizes up to 500 mm are covered in
IS: 919, and sizes above 500 mm and up to 3150 mm are covered in IS: 2101. However, several
recommendations of ISO: 286 are yet to be adopted. All these standards make use of two entities of the
standard limits, fits and tolerances terminology—standard tolerances and fundamental deviations.

6.8.1 Tolerances Grades and Fundamental Deviation


The tolerance of a size is defined as the difference between the upper and lower limit dimensions of
the part. When choosing a suitable dimension, it is necessary to also take into account the used method
of machining of the part in the production process. In order to meet the requirements of various
production methods for accuracy of the product, the Indian Standard, in line with the IS: 919 system,
implements 18 grades of accuracy (tolerances). Each of the tolerances of this system is marked IT with
the attached grade of accuracy (IT01, IT0, IT1 ... IT16). But, ISO: 286: 1988 specifies 20 grades of
tolerances (i.e., from IT01 to IT18).
The class of work required and the type of machine tool used govern the selection of the grade of
tolerance, while the type of fit to be obtained depends upon the magnitudes of the fundamental deviations,
since the qualitative criterion for selecting a fit involves the deviations (in absolute values) of the limit
values of the clearance or interference of the designed fit from the desired values. IS: 919 recommends
25 types of fundamental deviations, whereas ISO: 286: 1988 recommends 28. The relationship between
basic size, tolerance and fundamental deviation is represented diagrammatically in Figs 6.14 and 6.15(a)
and (b). In the general arrangement of the system, for any basic size there are 25 different holes. The
fundamental deviations are indicated by letter symbols for shafts and holes.
The 25 holes are designated by the capital letters A, B, C, D, E, F, G, H, J, JS, K, M, N, P, R, S, T, U, V,
X, Y, Z, ZA, ZB, ZC (the letters I, L, O, Q and W are omitted to avoid confusion), and the shafts by the
corresponding lowercase letters a, b, c, d, e, f, g, h, j, js, k, m, n, p, r, s, t, u, v, x, y, z, za, zb, zc. As per IS
recommendations, each of the 25 holes has a choice of 18 tolerance grades, as discussed earlier.
Similarly, for any given size there are 25 different shafts designated

Table 6.1 Field of use of individual tolerances of the ISO system

IT01 to IT6 For production of gauges and measuring instruments


IT5 to IT12 For fits in precision and general engineering
IT11 to IT16 For production of semi-products
IT16 to IT18 For structures
IT11 to IT18 For specification of limit deviations of non-tolerated dimensions

Table 6.2 Machining processes associated with ISO IT tolerance grades

(The original chart plots each process against the band of grades, between IT2 and IT16, that it can achieve; the bars could not be reproduced here. The processes, listed from finest to coarsest attainable accuracy, are:)

Lapping, Honing, Super finishing, Cylindrical grinding, Diamond turning, Plane grinding, Broaching, Reaming, Boring, Turning, Sawing, Milling, Planing, Shaping, Extruding, Cold rolling, Drawing, Drilling, Die casting, Forging, Sand casting, Hot rolling, Flame cutting

Fig. 6.14 Basic size and its deviation (diagram showing the zero line at the basic size, the tolerance zone between the upper and lower deviations, and the maximum and minimum limits of size)

Fig. 6.15 Disposition of fundamental deviation and tolerance zone w.r.t. the zero line ((a) and (b) show the tolerance zone above and below the zero line respectively)

Fig. 6.16 Position of fundamental deviations (fundamental deviation in microns plotted against the basic size; holes A to ZC above and shafts a to zc below the zero line)

and each has 18 grades of tolerances. The arrangement showing the positions of the
fundamental deviations is given in Fig. 6.16.
Hole ‘A’ and shaft ‘a’ have the largest fundamental deviations, hole being positive and shaft being
negative. The fundamental deviations for both hole ‘H’ and shaft ‘h’ are zero. For the shafts ‘a’ to ‘g’,
the upper deviation is below the zero line, and for the shafts ‘j’ to ‘zc’, it is above the zero line. For the
holes ‘A’ to ‘G’, the lower deviation is above the zero line, and for the holes ‘J’ to ‘ZC’, it is below the
zero line. The shaft for which upper deviation is zero is called basic shaft (i.e., ‘h’) and the hole for which
lower deviation is zero is called basic hole (i.e., ‘H’).

6.8.2 Designation of Holes and Shafts


One should always keep in mind that the value of a fundamental deviation is a function of the
diameter step within which the basic size falls, not of the specific basic size itself. These ranges
(diameter steps up to 500 mm) have been specified in IS: 919 (please refer Table 6.3). Holes and shafts
can be described completely in the following manner:
1) Hole = 55 H7 means
55 = the basic size of the hole
H = the position of the hole w.r.t zero line. For this case it is on the zero line.

7 = the tolerance grade, i.e., IT7. By knowing this value, the limits for 55-mm size can be
found out.
2) Shaft = 60 m9 means
60 = the basic size of the shaft.
m = the position of the shaft w.r.t zero line. In this case, it is above the zero line.
9 = the tolerance grade, i.e., IT9. By knowing this value, the limits for 60-mm size can be
found out.
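Splitting a designation such as 55H7 or 60m9 into its parts can be sketched in code. This is a hypothetical helper for illustration only; as described above, capital letters denote holes and lowercase letters denote shafts.

```python
import re

def parse_designation(text):
    """Split a designation like '55H7' or '60 m9' into its three parts:
    basic size, fundamental-deviation letter(s) and IT tolerance grade."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([A-Za-z]{1,2})\s*(\d{1,2})\s*", text)
    if not m:
        raise ValueError(f"not a hole/shaft designation: {text!r}")
    size, letters, grade = m.groups()
    kind = "hole" if letters.isupper() else "shaft"  # capital letters denote holes
    return float(size), letters, int(grade), kind

# 55H7: basic size 55 mm, position H (on the zero line), tolerance grade IT7
print(parse_designation("55H7"))   # (55.0, 'H', 7, 'hole')
```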
For deciding the limits, we have to find the tolerance values, first for the hole and then for the shaft
(as the hole-basis system is followed), to suit the type of fit required in the application under
consideration. The calculation of the tolerance is done as follows:
The fundamental tolerance unit is denoted by i (in microns). It is used to express the IT grades
from IT5 to IT16, where the value of i in terms of the diameter D (in mm) is calculated as

i = 0.45 ∛D + 0.001D

The diameter D (in mm) is the geometric mean of the limits of the diameter step (please refer Table 6.3).
The tolerance is the same for all sizes falling within a given diameter step. These
steps are the recommendations of IS: 919.
The values of tolerances for tolerance grades IT5 to IT16 are given in Table 6.4.
For tolerance grades IT01, IT0 and IT1, the formulae (values in microns, D in mm) are
For IT01 = 0.3 + 0.008D
For IT0 = 0.5 + 0.012D
For IT1 = 0.8 + 0.02D

Table 6.3 Diameter steps (D is taken as the geometric mean of each step)

General Cases 0–3, 3–6, 6–10, 10–18, 18–30, 30–50, 50–80, 80–120, 120–180, 180–250,
(mm) 250–315, 315–400, 400–500

Special Cases 10–14, 14–18, 18–24, 24–30, 30–40, 40–50, 50–65, 65–80, 80–100,
(mm) 100–120, 120–140, 140–160, 160–180, 180–200, 200–225, 225–250,
250–280, 280–315, 315–355, 355–400, 400–450, 450–500

Table 6.4 Tolerance grades IT5 to IT16

Grade IT5 IT6 IT7 IT8 IT9 IT10 IT11 IT12 IT13 IT14 IT15 IT16
Tolerance 7i 10i 16i 25i 40i 64i 100i 160i 250i 400i 640i 1000i
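The tolerance-unit formula and the multipliers of Table 6.4 combine into a short calculation. The sketch below assumes the IS: 919 relations quoted above; the function names are illustrative.

```python
import math

def tolerance_unit_microns(d_mm):
    """Fundamental tolerance unit i = 0.45*cbrt(D) + 0.001*D (microns, D in mm)."""
    return 0.45 * d_mm ** (1 / 3) + 0.001 * d_mm

# Multipliers for grades IT5 to IT16, from Table 6.4
IT_MULTIPLIER = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64,
                 11: 100, 12: 160, 13: 250, 14: 400, 15: 640, 16: 1000}

def it_tolerance_microns(d_mm, grade):
    """Standard tolerance in microns for grades IT5 to IT16.
    d_mm should be the geometric mean of the diameter step (Table 6.3)."""
    return IT_MULTIPLIER[grade] * tolerance_unit_microns(d_mm)

# For the 50-65 mm step: D = sqrt(50*65) ~ 57.0 mm, i ~ 1.79 microns, IT7 ~ 28.6 microns
D = math.sqrt(50 * 65)
it7 = it_tolerance_microns(D, 7)
```

The same two functions are all that is needed for the worked example in Section 6.8.5.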

The values of tolerance grades IT2 to IT4 are scaled approximately geometrically between the values
of IT1 and IT5. The seven finest grades (IT01, IT0 and IT1 to IT5) are applicable only for sizes up to
500 mm, while the eleven coarser grades (IT6 to IT16) also cover sizes from 500 mm

Table 6.5 Fundamental deviations for shaft

Upper deviations (es) for shafts a to h (in microns, for D in mm):

Shaft a: es = −(265 + 1.3D) for D ≤ 120; es = −3.5D for D > 120
Shaft b: es = −(140 + 0.85D) for D ≤ 160; es = −1.8D for D > 160
Shaft c: es = −52D^0.2 for D ≤ 40; es = −(95 + 0.8D) for D > 40
Shaft d: es = −16D^0.44
Shaft e: es = −11D^0.41
Shaft f: es = −5.5D^0.41
Shaft g: es = −2.5D^0.34
Shaft h: es = 0

Lower deviations (ei) for shafts j to zc (in microns, for D in mm):

Shafts j5 to j8: no formula
Shafts k4 to k7: ei = +0.6 ∛D (ei = 0 for k of grades ≤ 3 and ≥ 8)
Shaft m: ei = +(IT7 − IT6)
Shaft n: ei = +5D^0.34
Shaft p: ei = +IT7 + (0 to 5)
Shaft r: ei = geometric mean of the values for p and s
Shaft s: ei = +(IT8 + 1 to 4) for D ≤ 50; ei = +IT7 + 0.4D for D > 50
Shaft t: ei = +IT7 + 0.63D
Shaft u: ei = +IT7 + D
Shaft v: ei = +IT7 + 1.25D
Shaft x: ei = +IT7 + 1.6D
Shaft y: ei = +IT7 + 2D
Shaft z: ei = +IT7 + 2.5D
Shaft za: ei = +IT8 + 3.15D
Shaft zb: ei = +IT9 + 4D
Shaft zc: ei = +IT10 + 5D

up to 3150 mm. The manufacturing processes that can produce accuracies expressed in terms of
IT grades have already been discussed (refer Table 6.2). The formulae for fundamental deviations of
shafts for sizes up to 500 mm are given in Table 6.5.

Table 6.6 Most commonly used shafts

Letter a b c d e f g h j js k m n p r s t u v x y z za zb zc
Symbol
Grade IT 01 and 0 + +
IT 1 + +
IT 2 + +
IT 3 + +
IT 4 + + + + + + + + + +
IT 5 + + + + + + + + + + + + + + + + +
IT 6 + + + + + + + + + + + + + + + + + + + +
IT 7 + + + + + + + + + + + + + + + + + + + + + +
IT 8 + + + + + + + + + + +
IT 9 + + + + + + + +
IT 10 + + +
IT 11 + + + + + +
IT 12 + +
IT 13 + +
IT 14 + +
IT 15 + +
IT 16 + +

Table 6.7 Most commonly used holes

Letter A B C D E F G H J JS K M N P R S T U V X Y Z ZA ZB ZC
Symbol
Grade IT 01 and 0 + +
IT 1 + +
IT 2 + +

Table 6.7 Cont’d

IT 3 + +
IT 4 + + + + + + + + + + +
IT 5 + + + + + + + + + + + + + + + + +
IT 6 + + + + + + + + + + + + + + + + + + + +
IT 7 + + + + + + + + + + + + + + + + + + + +
IT 8 + + + + + + + + + + + + + + + + +
IT 9 + + + + + + + + + + + +
IT 10 + + + + +
IT 11 + + + + + + +
IT 12 + +
IT 13 + +
IT 14 + +
IT 15 + +
IT 16 + +

The deviations for the holes are derived from the corresponding values for the shafts. The symbols
used for holes are the same as for shafts, i.e., grade number and letter (in capitals). Thus, the deviation
for a hole (ES) is equal in magnitude to the deviation for the shaft (es) of the same letter symbol, but of
opposite sign.
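The sign rule stated above can be expressed directly. This is a simplified sketch of the general rule only; the ISO system has exceptions for some letters and grades that are not modelled here.

```python
def hole_deviation_microns(shaft_deviation_microns):
    """General rule: the hole deviation equals the shaft deviation of the
    same letter symbol in magnitude, but with the opposite sign.
    (Exceptions for certain letters/grades are not modelled in this sketch.)"""
    return -shaft_deviation_microns

# A shaft 'f' near D = 57 mm has es ~ -28.86 microns,
# so the corresponding hole 'F' deviation is ~ +28.86 microns.
```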

6.8.3 Classification of Most Commonly Used Types of Fits


The list of recommended fits given in Table 6.8 is as per IS recommendations. The list given
below, however, is for information only and cannot be taken as a fixed listing; the fits actually
used may differ depending on the type and field of production, local standards, national usage
and, not least, industry practice. Properties and fields of use of selected fits are described in the
following overview. While selecting a fit, it is often necessary to take into account not only
constructional and technological considerations but also economic aspects. Selection of a
suitable fit is particularly important in view of the measuring instruments, gauges and tools
implemented in production. Therefore, the following proven industry practices may be considered
as guidance while selecting a fit.
Fields of use of selected fits (preferred fits are in bold):

Table 6.8 As per IS: 2709–1964

Class With Holes


Type of Remarks
of Fits H6 H7 H8 H11
Shaft a - - - a11 Loose clearance fit, not widely used

Shaft b - - - - --- do ---


Shaft c - c8 c9 c11 Slack running fit
Shaft d - d8 d9 d11 Loose running fit
Clearance Fit

Shaft e e7 e8 e8-e9 - Easy running fit


Shaft f f6 f7 f8 - Normal running fit
Close running fit or sliding fit, also spigot
Shaft g g6 g7 g7 -
and location fit

Precision sliding fit; also fine spigot fit


Shaft h h5 h6 h7-h8 h11
and location fit
Push fit for very accurate location with easy
Shaft j j5 j6 j7 -
assembly and dismantling
Transition Fit

Light keying fit (true transition fit) for keyed shaft,


Shaft k k5 k6 k7 -
non-running locked pin, etc.
Shaft m m5 m6 m7 - Medium keying fit
Heavy keying fit (for tight assembly of meeting
Shaft n n5 n6 n7 -
surface)
Light press fit with easy dismantling for non-ferrous
Shaft p p5 p6 - - parts; standard press fit with easy dismantling for
ferrous and non-ferrous parts assembly
Medium drive fit with easy dismantling for ferrous
Shaft r r5 r6 - - parts assembly; light drive fit with easy dismantling
for non-ferrous parts
Interference Fit

Heavy drive for ferrous parts, permanent or semi-


Shaft s s5 s6 s7 - permanent assembled press and for non-ferrous
parts.
Shaft t t5 t6 t7 - Force fit for ferrous parts for permanent assembly
Shaft u u5 u6 u7 - Heavy force fit or shrink fit
Shaft v,
- - - - Very large interference fit; not recommended for use
x,y and z

Clearance Fits H11/a11, H11/c11, H11/c9, H11/d11, A11/h11, C11/h11, D11/h11


Fits with great clearances with parts having great tolerances.

Use Pivots, latches, fits of parts exposed to corrosive effects, contamination with dust and thermal
or mechanical deformations
H9/c9, H9/d10, H9/d9, H8/d9, H8/d8, D10/h9, D9/h9, D9/h8
Running fits with greater clearances without any special requirements for accuracy of guiding
shafts.

Use Multiple fits of shafts of production and piston machines, parts rotating very rarely or only
swinging
H9/e9, H8/e8, H7/e7, E9/h9, E8/h8, E8/h7
Running fits with greater clearances without any special requirements for fit accuracy.

Use Fits of long shafts, e.g., in agricultural machines, bearings of pumps, fans and piston machines
H9/f8, H8/f8, H8/f 7, H7/f 7, F8/h7, F8/h6
Running fits with smaller clearances with general requirements for fit accuracy.

Use Main fits of machine tools. General fits of shafts, regulator bearings, machine tool spindles,
sliding rods
H8/g7, H7/g6, G7/h6
Running fits with very small clearances for accurate guiding of shafts. Without any noticeable
clearance after assembly.

Use Parts of machine tools, sliding gears and clutch disks, crankshaft journals, pistons of hydraulic
machines, rods sliding in bearings, grinding machine spindles
H11/h11, H11/h9
Slipping fits of parts with great tolerances. The parts can easily be slid one into the other and
turned.

Use Easily demountable parts, distance rings, parts of machines fixed to shafts using pins, bolts, rivets
or welds
H8/h9, H8/h8, H8/h7, H7/h6
Sliding fits with very small clearances for precise guiding and centering of parts. Mounting by slid-
ing on without use of any great force; after lubrication the parts can be turned and slid by hand.

Use Precise guiding of machines and preparations, exchangeable wheels, roller guides.

Transition Fits H8/j7, H7/js6, H7/j6, J7/h6


Tight fits with small clearances or negligible interference. The parts can be assembled or disas-
sembled manually.

Use Easily dismountable fits of hubs of gears, pulleys and bushings, retaining rings, frequently re-
moved bearing bushings
H8/k7, H7/k6, K8/h7, K7/h6
Similar fits with small clearances or small interferences. The parts can be assembled or disassembled
without great force using a rubber mallet.

Use Demountable fits of hubs of gears and pulleys, manual wheels, clutches, brake disks
H8/p7, H8/m7, H8/n7, H7/m6, H7/n6, M8/h6, N8/h7, N7/h6
Fixed fits with negligible clearances or small interferences. Mounting of fits using pressing and light
force.

Use Fixed plugs, driven bushings, armatures of electric motors on shafts, gear rims, flushed bolts

Interference Fits H8/r7, H7/p6, H7/r6, P7/h6, R7/h6


Pressed fits with guaranteed interference. Assembly of the parts can be carried out using cold pressing.

Use Hubs of clutch disks, bearing bushings


H8/s7, H8/t7, H7/s6, H7/t6, S7/h6, T7/h6
Pressed fits with medium interference. Assembly of parts using hot pressing. Assembly using cold
pressing only with use of large forces.

Use Permanent coupling of gears with shafts, bearing bushings


H8/u8, H8/u7, H8/x8, H7/u6, U8/h7, U7/h6
Pressed fits with big interferences; Assembly using pressing and great forces under different
temperatures of the parts.

Use Permanent couplings of gears with shafts, flanges

6.8.4 Deviation for the Sizes above 500 mm and up to 3150 mm


If the components to be assembled are of larger size, many practical difficulties arise in measuring
the large diameters accurately and achieving interchangeability. Hence, it is not practicable to
manufacture large parts to small tolerances. Therefore, for sizes above 500 mm and up to 3150 mm,
the tolerance unit specified in IS: 2101 for the various grades is
i (in microns) = 0.004D + 2.1

Fundamental deviations for the holes and shafts for diameters above 500 mm and up to 3150 mm
are given in Table 6.9.
Table 6.9 Fundamental deviations for shafts and holes (for D > 500 mm; deviations in microns, D in mm)

Shafts d, e, f, g (−ve) / Holes D, E, F, G (+ve): 16D^0.14, 11D^0.14, 5.5D^0.14 and 2.5D^0.14 respectively
Shaft h / Hole H: zero deviation
Shaft js (−ve) / Hole JS (+ve): 0.41 Tn
Shaft k / Hole K: 0
Shafts m, n, p (+ve) / Holes M, N, P (−ve): 0.24D + 12.6, 0.04D + 21 and 0.072D + 37.8 respectively
Shaft r / Hole R: geometric mean between the values for p and s (or P and S)
Shafts s, t, u (+ve) / Holes S, T, U (−ve): IT7 + 0.4D, IT7 + 0.63D and IT7 + D respectively

Hole Tolerance Zones The tolerance zone is the zone bounded by the upper and lower limit
dimensions of the part. As per the ISO system, though the general sets of basic deviations (A ... ZC)
and tolerance grades (IT1 ... IT18) can be combined to prescribe hole tolerance zones, in practice
only a limited range of tolerance zones is used. An overview of tolerance zones for general use is
given in Table 6.10. The tolerance zones not included in this table are considered special zones,
and their use is recommended only in technically well-grounded cases.

Shaft Tolerance Zones The tolerance zone is the zone bounded by the upper and lower limit
dimensions of the part; it is therefore determined by the amount of the tolerance and its position
relative to the basic size. As per the ISO system, though the general sets of basic deviations (a ... zc)
and tolerance grades (IT1 ... IT18) can be combined to prescribe shaft tolerance zones, in practice
only a limited range of tolerance zones is

Table 6.10 Hole tolerance zones for general use

A9 A10 A11 A12 A13


B8 B9 B10 B11 B12 B13
C8 C9 C10 C11 C12 C13
CD6 CD7 CD8 CD9 CD10
D6 D7 D8 D9 D10 D11 D12 D13
E5 E6 E7 E8 E9 E10
EF3 EF4 EF5 EF6 EF7 EF8 EF9 EF10
F3 F4 F5 F6 F7 F8 F9 F10
FG3 FG4 FG5 FG6 FG7 FG8 FG9 FG10
G3 G4 G5 G6 G7 G8 G9 G10
H1 H2 H3 H4 H5 H6 H7 H8 H9 H10 H11 H12 H13 H14 H15 H16 H17 H18
JS1 JS2 JS3 JS4 JS5 JS6 JS7 JS8 JS9 JS10 JS11 JS12 JS13 JS14 JS15 JS16 JS17 JS18
J6 J7 J8
K3 K4 K5 K6 K7 K8
M3 M4 M5 M6 M7 M8 M9 M10
N3 N4 N5 N6 N7 N8 N9 N10 N11
P3 P4 P5 P6 P7 P8 P9 P10
R3 R4 R5 R6 R7 R8 R9 R10
S3 S4 S5 S6 S7 S8 S9 S10
T5 T6 T7 T8
U5 U6 U7 U8 U9 U10
V5 V6 V7 V8
X5 X6 X7 X8 X9 X10
Y6 Y7 Y8 Y9 Y10
Z6 Z7 Z8 Z9 Z10 Z11
ZA6 ZA7 ZA8 ZA9 ZA10 ZA11
ZB7 ZB8 ZB9 ZB10 ZB11
ZC7 ZC8 ZC9 ZC10 ZC11

used. An overview of tolerance zones for general use can be found in Table 6.11. The tolerance zones
not included in this table are considered special zones and their use is recommended only in technically
well-grounded cases.
Prescribed hole tolerance zones for routine use (for basic sizes up to 3150 mm)

Note: Tolerance zones with thin print are specified only for basic sizes up to 500 mm.
Hint: For hole tolerances, tolerance zones H7, H8, H9 and H11 are preferably used.

Table 6.11 Shaft tolerance zone for general use

a9 a10 a11 a12 a13


b9 b10 b11 b12 b13
c8 c9 c10 c11 c12
cd5 cd6 cd7 cd8 cd9 cd10
d5 d6 d7 d8 d9 d10 d11 d12 d13
e5 e6 e7 e8 e9 e10
ef3 ef4 ef5 ef6 ef7 ef8 ef9 ef10
f3 f4 f5 f6 f7 f8 f9 f10
fg3 fg4 fg5 fg6 fg7 fg8 fg9 fg10
g3 g4 g5 g6 g7 g8 g9 g10
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18
js1 js2 js3 js4 js5 js6 js7 js8 js9 js10 js11 js12 js13 js14 js15 js16 js17 js18
j5 j6 j7
k3 k4 k5 k6 k7 k8 k9 k10 k11 k12 k13
m3 m4 m5 m6 m7 m8 m9
n3 n4 n5 n6 n7 n8 n9
p3 p4 p5 p6 p7 p8 p9 p10
r3 r4 r5 r6 r7 r8 r9 r10
s3 s4 s5 s6 s7 s8 s9 s10
t5 t6 t7 t8
u5 u6 u7 u8 u9
v5 v6 v7 v8
x5 x6 x7 x8 x9 x10
y6 y7 y8 y9 y10
z6 z7 z8 z9 z10 z11
za6 za7 za8 za9 za10 za11
zb7 zb8 zb9 zb10 zb11
zc7 zc8 zc9 zc10 zc11

Prescribed shaft-tolerance zones for routine use (for basic sizes up to 3150 mm)
Note: Tolerance zones with thin print are specified only for basic sizes up to 500 mm.
Hint: For shaft tolerances, tolerance zones h6, h7, h9 and h11 are preferably used.

6.8.5 Illustration for Determining Type of Fit


Determine the type of fit of 55 H7/f8:

1. Determine the value of D = √(50 × 65) = 57.008 mm (since 55 mm falls in the 50–65 mm
diameter step).

2. Determine the value of i = 0.45 × ∛57.008 + 0.001 × (57.008) = 1.789 microns.
3. Now consider first for Hole H7,
Value of the tolerance IT7 (from Table 6.4) = 16(i) = 16 × 1.789 microns = 0.028 mm
As the H-hole lies on the zero line (refer Fig. 6.16), its fundamental deviation, the lower deviation,
is zero.
Basic size = 55 mm
∴ Lower limit of hole = Basic size + Fundamental deviation = 55 mm

∴ Upper limit = Lower limit + Tolerance

i.e., 55 mm + 0.028 mm = 55.028 mm

Hence, the hole size varies between 55.000 mm and 55.028 mm.
4. Now consider for shaft 55f8,

Fig. 6.17 For hole ‘H’ (tolerance zone above the zero line: lower limit on the zero line, upper limit = lower limit + tolerance)

Fig. 6.18 For shaft ‘f’ (tolerance zone below the zero line: upper limit = basic size + upper deviation, lower limit = upper limit − tolerance)

Value of the tolerance IT8 (from Table 6.4) = 25(i) = 25 × 1.789 microns = 0.0447 mm
As the f-shaft lies below the zero line (refer Fig. 6.16), its fundamental deviation is the upper
deviation. The formula for the fundamental deviation from Table 6.5 is −5.5D^0.41.
∴ −5.5D^0.41 = −5.5 (57.008)^0.41 = −28.86 microns ≈ −0.0288 mm

∴ Upper limit of shaft = Basic size + Fundamental deviation

= 55 mm + (−0.0288) = 54.9712 mm
And, lower limit of shaft = Upper limit of shaft − Tolerance IT8
= 54.9712 − 0.0447 = 54.9265 mm
Hence, the shaft size varies between 54.9265 mm and 54.9712 mm.
5. To check the type of fit, we calculate
Maximum clearance = Upper limit of hole − Lower limit of shaft = 55.028 − 54.9265 = 0.1015 mm [∴ clearance exists]
Minimum clearance = Lower limit of hole − Upper limit of shaft = 55.000 − 54.9712 = 0.0288 mm [∴ clearance exists]
6. Since even the minimum clearance is positive, the 55 H7/f8 assembly results in a ‘clearance fit’.
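The six steps above can be reproduced end to end in a short script. This is a sketch using only the formulas quoted in this section (tolerance unit, Table 6.4 multipliers and the f-shaft deviation from Table 6.5); the exact decimals differ slightly from the rounded hand calculation.

```python
import math

def fit_55H7_f8():
    """Re-compute the 55 H7/f8 example; return (max, min) clearance in mm."""
    D = math.sqrt(50 * 65)                # geometric mean of the 50-65 mm step
    i = 0.45 * D ** (1 / 3) + 0.001 * D   # fundamental tolerance unit, microns
    it7, it8 = 16 * i, 25 * i             # Table 6.4 multipliers, microns
    hole_lo = 55.0                        # H hole: fundamental deviation is zero
    hole_hi = hole_lo + it7 / 1000
    es = -5.5 * D ** 0.41 / 1000          # f shaft: upper deviation (Table 6.5), mm
    shaft_hi = 55.0 + es
    shaft_lo = shaft_hi - it8 / 1000
    max_clearance = hole_hi - shaft_lo
    min_clearance = hole_lo - shaft_hi
    return max_clearance, min_clearance

# Both clearances come out positive (~0.102 mm and ~0.029 mm),
# confirming that 55 H7/f8 is a clearance fit.
```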

6.8.6 Conversion of Hole-Basis Fit into Equivalent Shaft-Basis Fit


In most of the applications, hole-basis fits are recommended but the designer has the freedom to
use the shaft-basis fit system in his design. The IS system has provided this conversion procedure for
dimension sizes up to and including 500 mm. Equivalent fits on the hole-basis and shaft-basis system
as per IS: 2709 are given in Table 6.12.

6.9 GEOMETRICAL TOLERANCES

In most cases, it is necessary to specify the geometric features of a part/component, viz., straightness,
flatness, roundness, cylindricity, etc., along with the linear dimensions, since linear dimensions alone
are not sufficient. The phrase ‘geometrical features’ indicates that the geometrical tolerances (in the
context of dimensional accuracy) of the entities of a part are related to each other, and hence it is
accepted that they should be specified separately. The importance of each aspect of part/component
geometry was discussed thoroughly in the previous chapter. Tables 6.13, 6.14 and 6.15 illustrate the
geometrical tolerance symbols, and Table 6.16 explains the ways of representing them.
To understand the importance of specifying geometrical tolerance symbols on an engineering
drawing, consider Fig. 6.19, which shows the assembly of a shaft and a hole. To obtain the proper
assembly fit, specifying only the diameter values will not give the complete picture. A little
consideration will show that, apart from the diameter values, some more information is required,
i.e., information about geometrical tolerances. In the absence of this information, when the
mating parts are at maximum metal condition, the worst condition of the assembly of shaft and
hole occurs.

Table 6.12 Equivalent fits for the hole-basis and shaft-basis system
Clearance Transition Interference
Hole Basis Shaft Basis Hole Basis Shaft Basis Hole Basis Shaft Basis
H7-c8 C8-h8 H6-j5 J6-h5 H6-n5 N6-h5
H8-c9 C9-h8 H7-j6 J7-h6
H11-c11 C11-h11 H8-j7 J8-h7 H6-p5 P6-h5
H7-p6 P7-h6
H7-d8 D8-h7 H6-k5 K6-h5
H8-d9 D9-h8 H7-k6 K7-h6 H6-r5 R6-h5
H11-d11 D11-h11 H8-k7 K8-h7 H7-r6 R7-h6
H6-e7 E7-h6 H6-m5 M6-h5 H6-s5 S6-h5
H7-e8 E8-h8 H7-m6 M7-h6 H7-s6 S7-h6
H6-f6 F6-h6 H8-m7 M8-h7 H8-s7 S8-h7
H7-f7 F7-h7 H7-n6 N7-h6 H7-t6 T7-h6
H8-f8 F8-h8 H8-n7 N8-h7 H7-t6 T7-h6
H8-t7 T8-h7
H6-g5 G6-h5 H8-p7 P8-h7
H7-g6 G7-h6 H6-u5 U6-h5
H8-g7 G8-h7 H8-r7 R8-h7 H7-u6 U7-h6
H8-u7 U8-h7

Fig. 6.19 Assembly of shaft (φ 20.58 mm) and hole



Table 6.13 Geometrical tolerance symbols

Broad Classification Geometric


(Kind of feature) Characteristic Symbol Definition
1 2 3 4
Individual features Flatness Condition of surface having all ele-
ments in one plane

A single surface, element, Straightness Condition where an element of a


or size feature which surface or an axis is the straight line
relates to a perfect
geometric counterpart of
itself as the desired form.
Roundness Condition on a surface of revolution
(Circularity) (cylinder, cone, sphere) where all
points of the surface intersected
by any plane (i) perpendicular to a
common axis (cylinder, cone) or (ii)
passing through a common centre
(sphere) are equidistant from the centre
Cylindricity Condition of a surface of a revolu-
tion in which all points of the surface
are equidistant from a common axis

Table 6.14 Geometrical tolerance symbols

Broad classification (kind of feature): Individual or related features. A single surface or element feature whose perfect geometrical profile is described, and which may or may not relate to a datum.

Geometric characteristic and definition:
Profile of a line: condition permitting a uniform amount of profile variation, either unilaterally or bilaterally, along a line element of a feature.
Profile of a surface: condition permitting a uniform amount of profile variation, either unilaterally or bilaterally, on a surface.

Table 6.15 Geometrical tolerance symbols

Broad classification (kind of feature): Related features. A single feature or element feature which relates to a datum, or datums, in form and attitude (orientation).

Geometric characteristic and definition:
Perpendicularity (squareness or normality): condition of a surface, axis, or line which is 90° from a datum plane or datum axis.
Angularity: condition of a surface, axis, or centre plane which is at a specified angle (other than 90°) from a datum plane or axis.
Parallelism: condition of a surface, line, or axis which is equidistant at all points from a datum plane or axis.
Circular runout: composite control of circular elements of a surface independently at any circular measuring position as the part is rotated through 360°.
Total runout: simultaneous composite control of all elements of a surface at all circular and profile measuring positions as the part is rotated through 360°.

6.10 LIMIT GAUGES AND DESIGN OF LIMIT GAUGES

The main requirement of interchangeable manufacture (considering the cost of manufacturing) is close adherence to the specified limits of size, not necessarily to the exact basic size, in order to fulfill functional requirements. A variation in size is therefore permitted, which results in economy; on the other hand, a system of control and inspection has to be employed. The problem of inspecting a specific dimension of a component in this environment can be solved by using limit gauges. Limit gauges are used to ensure that the size of the component being inspected lies within the specified limits; they are not meant for measuring the exact size.

6.10.1 Taylor’s Principles


In the United Kingdom, Richard Roberts (1789-1864), a machine-tool manufacturer, reportedly used a plug and collar gauge to inspect dimensions. In 1857, Joseph Whitworth demonstrated the use of internal and external gauges for a shaft-based limit system. In 1905, William Taylor explained the relationship between the two processes of checking a component, i.e., checking its specific dimensions and checking the different elements of a dimension, i.e., its geometric features. His concepts, known as Taylor's principles, are used in the design of limit gauges.

Table 6.16 Ways of representing geometrical tolerance symbols

Feature Control Symbol  The feature control symbol consists of a frame containing the geometric characteristic symbol, the tolerance value, material-condition modifiers and the datum references, for example

    | φ 0.24 (M) | A | B | C |

Read from left to right, the frame contains the geometric characteristic symbol, the diameter symbol φ, the tolerance value (0.24), the modifier (M) meaning 'at maximum material condition', and the datum references A, B and C.

Taylor states that the 'GO' gauge should check all the possible elements of a dimension at a time (roundness, size, location, etc.), while the 'NO GO' gauge should check only one element of the dimension at a time. Also, according to Taylor, the 'GO' and 'NO GO' gauges should be designed to check the maximum and minimum material limits respectively.
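The need for a full-form GO gauge can be illustrated with a small numerical sketch (the shaft limits, readings and bow value below are hypothetical): every cross-section of a bent shaft can lie within its diameter limits, and yet the shaft will not enter a true hole made to the low limit.

```python
# Sketch of Taylor's principle with assumed numbers: a two-point
# measurement (micrometer) accepts a bent shaft, because each local
# diameter is within limits, but a full-form GO ring gauge rejects it.
SHAFT_LO, SHAFT_HI = 24.98, 25.00        # specified shaft limits, mm

local_diameters = [24.99, 24.99, 24.99]  # two-point readings along the shaft
bow = 0.03                               # straightness error, mm

two_point_ok = all(SHAFT_LO <= d <= SHAFT_HI for d in local_diameters)

# The smallest cylinder that envelops the bent shaft (its mating size):
effective_diameter = max(local_diameters) + bow

go_ring_passes = effective_diameter <= SHAFT_HI   # full-form check
print(two_point_ok, go_ring_passes)  # True False: mic accepts, GO ring rejects
```

This is exactly why the GO gauge must be full-form: only a gauge shaped like the mating part sees the effective (mating) size.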

‘GO’ Limit This designation is applied to that one of the two size limits which corresponds to the maximum material limit, i.e., the upper limit of a shaft and the lower limit of a hole. The form of the ‘GO’ gauge should be such that it checks all the features of the component in one pass.

Fig. 6.20(a) Example of representation of features of geometric tolerances in an engineering drawing of a part

Fig. 6.20(b) Example of representation of features of geometric tolerances in an engineering drawing of a part

Fig. 6.21 Plug gauge. The ‘GO’ plug gauge is made to the minimum limit of the hole; the ‘NO GO’ plug gauge corresponds to the maximum limit.

‘NO GO’ Limit This designation is applied to that one of the two size limits which corresponds to the minimum material condition, i.e., the lower limit of a shaft and the upper limit of a hole.
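The two definitions reduce to a simple rule, sketched below with assumed limits of 25.000/25.021 mm for a hole and 24.980/24.993 mm for a shaft; these sizes are illustrative only, and gaugemakers' tolerance and wear allowance are ignored at this stage.

```python
# Nominal GO / NO GO gauge sizes (before gaugemakers' tolerance and
# wear allowance) for an assumed hole and shaft.
def plug_gauge(hole_lo, hole_hi):
    # GO plug = minimum (maximum-material) limit of the hole,
    # NO GO plug = maximum limit of the hole.
    return {"GO": hole_lo, "NO GO": hole_hi}

def snap_gauge(shaft_lo, shaft_hi):
    # GO snap = maximum (maximum-material) limit of the shaft,
    # NO GO snap = minimum limit of the shaft.
    return {"GO": shaft_hi, "NO GO": shaft_lo}

print(plug_gauge(25.000, 25.021))  # {'GO': 25.0, 'NO GO': 25.021}
print(snap_gauge(24.980, 24.993))  # {'GO': 24.993, 'NO GO': 24.98}
```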

Fig. 6.22 Snap gauge. The ‘GO’ snap gauge is made to the maximum limit of the shaft; the ‘NO GO’ snap gauge corresponds to the minimum limit.

Fig. 6.23 Provision of margin for wear on GO gauges (plug gauges). The NO GO gauge corresponds to the higher limit (H) of the hole and the GO gauge to the lower limit (L); a margin for wear, lying within the work tolerance in the direction of gauge wear, is provided on the GO gauge when the tolerance is over 0.0035 in.

6.10.2 Types of Limit Gauges


Fixed gauges are designed to assess a specific attribute by comparative gauging. Fixed gauges are available for quickly assessing approximate size in a GO/NO-GO manner. Specific fixed gauges are designed for checking attributes such as drill size, plate thickness, wire thickness, radius or fillet size, screw pitch, weld size and pipe or tube size. These gauges are available in English or metric configurations.

Fig. 6.24 Provision of margin for wear on GO gauges (gap and ring gauges). The GO (H) gauge corresponds to the higher limit and the NO GO (L) gauge to the lower limit of the shaft; a margin for wear, in the direction of gauge wear, is provided when the tolerance is over 0.0035 in.

Gauges in general are classified as non-dimensional gauges and dimensional gauges. Some of the most common types of fixed gauges are detailed below.

(A) Non-dimensional gauges are classified as follows:


a. On the basis of type
i. Standard Gauges
ii. Limit Gauges
b. On the basis of purpose
i. Workshop Gauges
ii. Inspection Gauges
iii. Reference/Master Gauge
c. On the basis of geometry of surface
i. Plug Gauge
ii. Snap/Ring Gauges
d. On the basis of its design
i. Single/double Limit Gauge
ii. Fixed/Adjustable Gauges
iii. Solid/Hollow Gauges
Apart from these categories of gauges, angle gauges provide a series of fixed angles for comparative assessment of the angle between two surfaces. Centre gauges are fixed gauges with a V-shaped notch for finding the centre of a part or bar with a round or square cross section. Drill gauges are fixed gauges with a series of precise holes used to gauge drill diameter size. Gear tooth gauges are fixed gauges used for determining the diametral pitch of involute gears.

Pipe and tube gauges have a fixed design to quickly assess pipe, tube, or hose features, such as outer diameter, inner diameter, taper, or tube bead. Radius, fillet, or ball gauges are used for comparatively determining the diameter or radius of a fillet, radius, or ball. Screw and thread pitch gauges are serrated to comparatively assess thread or screw pitch and type.
Taper gauges consist of a series of strips that reduce in width along the gauge length, and are used to gauge the size of a hole or slot in a part. Thickness gauges consist of a series of gauge stock fashioned to a precise thickness for gauging purposes. Taper and thickness gauges are often referred to as feeler gauges.
US standard gauges have a series of open, key-shaped holes and are used to gauge sheet or plate thick-
ness. Weld gauges are used for assessing weld fillet or bead size. Fixed wire gauges have a series of open
key-shaped holes and are used to gauge wire diameter size.
In addition to specific fixed gauge types, there are two less-specialized device groupings or materials that may be used for this type of comparative gauging. Gauge stock is a material that is fashioned to a precise thickness for gauging purposes, and is available in rolls or individual strips. Gauge sets and tool kits consist of several gauges and accessories packaged together, often in a case with adjusting tools. Tool kits sometimes contain alternate extensions, contact tips, holders, bases, and standards. Some of these types of gauges are discussed below.

1. Plug Gauges Plug and pin gauges are used for GO/NO-GO assessment of hole and slot dimensions or locations compared to specified tolerances. Dimensional standards are used for comparative gauging as well as for checking, calibrating or setting of gauges or other standards. Plug, pin, setting disc, annular plug, hex and spherical plug individual gauges or gauge sets fit into this category. Plug gauges are made to a variety of tolerance grades in metric and English dimensions for master, setting or working applications. Plugs are available in progressive or stepped, double-end or wire, plain (smooth, unthreaded), threaded, cylindrical and tapered forms to GO, NO-GO or nominal tolerances.

Fig. 6.25 (a) Diagram of single-ended plug gauge (b) Diagram of double-ended plug gauge (c) Double-ended plug gauge

Fig. 6.26 Specifications of dimensions on a double-ended limit plug gauge: a: GO side, b: NO-GO side, c: red marking, d: basic size, e: tolerance

2. Ring Gauges Ring gauges are used for GO/NO-GO assessment compared to the specified dimensional tolerances or attributes of pins, shafts, or threaded studs. Ring gauges are used for comparative gauging as well as for checking, calibrating or setting of gauges or other standards. Individual ring gauges or ring-gauge sets

are made to a variety of tolerance grades in metric and English dimensions for master, setting or working applications. Rings are available in plain (smooth, unthreaded), threaded, cylindrical and tapered forms to GO, NO-GO or nominal tolerances. There are three main types of ring gauges: GO, NO-GO, and master or setting ring gauges.

GO ring gauges provide a precision tool for production of comparative gauging based on a fixed limit. GO gauges consist of a fixed limit gauge with a gauging limit based on the plus or minus tolerances of the inspected part. A GO ring gauge’s dimensions are based on the maximum OD tolerance of the round bar or part being gauged. A GO plug gauge’s dimensions are based on the minimum ID tolerance of the hole or part being gauged. The GO plug (ID) gauge should be specified to a plus gaugemakers’ tolerance from the minimum part tolerance. The GO ring (OD) gauge should be specified to a minus gaugemakers’ tolerance from the maximum part tolerance.

Fig. 6.27 Use of a limit plug gauge: a: GO side, b: NO-GO side, c: red marking

NO-GO, or NOT-GO, gauges provide a

Fig. 6.28 (a) How to use a double-ended plug gauge (b) Plate plug gauge (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India.)

precision tool for production of comparative gauging based on a fixed limit. NO-GO gauges consist of
a fixed limit gauge with a gauging limit based on the minimum or maximum tolerances of the inspected
part. A NO-GO ring gauge’s dimensions are based on the minimum OD tolerance of the round bar or
part being gauged. The NO GO ring (OD) gauge should be specified to a plus gaugemakers’ tolerance
from the minimum part tolerance. Master and setting ring gauges include gauge blocks, master or set-
ting discs. Setting rings are types of master gauges used to calibrate or set micrometers, comparators, or

Fig. 6.29 Ring gauge (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India)

other gauging systems. Working gauges are used in the shop for dimensional inspection and are periodi-
cally checked against a master gauge.

3. Snap Gauges Snap gauges are used in production settings where specific diametrical or thickness measurements must be repeated frequently with precision and accuracy. Snap gauges are mechanical gauges (Fig. 6.30) that use comparison, i.e., the physical movement and displacement of a gauging element (e.g., spindle, slide, stem), to determine the dimensions of a part or feature. In this respect, snap gauges are similar to micrometers, calipers, indicators, plug gauges, and ring gauges. Snap gauges are available in fixed and variable forms. The variable forms often have a movable, top-sensitive contact attached to an indicator or comparator. The non-adjustable or fixed-limit forms typically have a set of sequential gaps for GO/NO-GO gauging of product thickness or diameter.
Fixed limit snap gauges [Fig. 6.30 (a), (b)] are factory-set or otherwise not adjustable by the user. A common example of this type of device is the AGD fixed-limit style snap gauge. These gauges are set to GO and NO-GO tolerances. A snap gauge’s GO contact dimensions are based on the maximum tolerance of the round bar, thickness or part feature being gauged. NO-GO contact dimensions are based on the minimum tolerance of the round bar, thickness, or part feature being gauged by the snap gauge.
Variable, or top-sensitive contact, snap gauges [Fig. 6.30 (c), (d)] use a variable contact point that moves up during part gauging. The moving contact point provides a GO to NO-GO gauging range. The top contact is normally connected to a dial indicator that provides visual indication of any diametrical or thickness variations.
There are a number of optional snap gauge features that can aid gauging speed or extend the measurement range of a particular snap gauge. These features include interchangeable anvils, locking, and back or part support. Snap gauges have replaceable anvils, contact points, styli, spindles, or other contacting tips or faces that allow many different items to be gauged easily. Back or part support involves a protrusion or stem located behind a part to hold or stop the part from moving past a certain point during gauging. Similarly, lockable devices have a slide or spindle on the gauge that can be locked in a fixed position. Both of these features can be used to speed up GO/NO-GO gauging. Figure 6.30 (e) shows the setting of the gap of the GO side of a snap gauge using slip gauges.

4. Air Gauges Air gauges use pneumatic pressure and flow to measure and sort dimensional
attributes. They provide a high degree of speed and accuracy in high-volume production environments.

Fig. 6.30 Snap gauges: (a), (b) fixed-limit types with GO and NOT GO gaps; (c) type with adjustable anvils; (d), (e) snap gauges in use (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India.)

Air metrology instruments shown in Fig. 6.31 can provide comparative or quantitative measurements
such as thickness, depth, internal diameter (ID), outer diameter (OD), bore, taper and roundness. Air
gauges and gauging systems may also use an indicator or amplifiers such as air columns combined with
air probes or gauges.
There are several types of air gauges. Air plugs are production-quality, functional gauges for
evaluating hole and slot dimensions or locations against specified tolerances. Air rings are also
production-quality, functional gauges, but are used for evaluating specified tolerances of the
dimensions or attributes of pins, shafts, or threaded studs. Air-gauging systems or stations are
large, complex units available in bench-top or floor-mounted configurations. These systems often
include several custom gauges for specific applications, as well as fixtures or other components
for holding or manipulating parts during inspection. Air probes, or gauge heads, are also used
in conjunction with other gauges, and connect to remote displays, readouts, or analog amplifi-
ers. Test indicators and comparators are instruments for comparative measurements where the
linear movement of a precision spindle is amplified and displayed on a dial or digital display. Dial
displays use a pointer or needle mounted in a graduated disc dial with a reference point of zero.
Digital displays present metrology data numerically or alphanumerically, and are often used with

air gauges that have data-output capabilities. Remote gauges are used on electronic or optical
gauges, probes, or gauge heads that lack an integral gauge.
Air gauges use changes in pressure or flow rates to measure dimensions and determine attributes.
Backpressure systems use master restrictor jets, as well as additional adjustable bleeds or restrictions to
measure pressure changes and adjust for changes in air tooling.
Flow systems use tubes or meters to measure flow rates through air jets, orifices, or nozzles. Back-
pressure systems have high sensitivity and versatility, but a lower range than flow systems. Flow system
gauges require larger volumes of air and nozzles, and are useful where larger measurement ranges are
required. Differential, balanced air, single master, or zero setting air-gauge systems are back pressure
systems with a third zero-setting restrictor.
Some air gauges are handheld and portable. Others are designed for use on a bench top or table, or
mount on floors or machines. Operators who use bench top, table-based, and floor-mounted air gauges
load parts and measure dimensions manually. Automatic gauges (Fig. 6.32), such as the inline gauges
on production lines, perform both functions automatically. In semi-automatic systems, operators load
parts manually and gauges measure automatically. Typically, machine-mounted gauges include test indicators, dial indicators, and/or micrometer heads.
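In practice, a comparative back-pressure air gauge is set with two masters (for example, setting rings at the low and high limits) and the part size is read by linear interpolation between them. The sketch below assumes the reading is linear in diameter between the two masters, which holds only within the gauge's linear operating range; the master sizes and readings are hypothetical.

```python
# Two-master setting of a back-pressure air gauge (linear-range sketch).
# The gauge reading (pressure or column height, arbitrary units) is
# assumed linear in diameter between the two setting masters.
def make_air_gauge(master_diams, master_readings):
    (d1, d2), (r1, r2) = master_diams, master_readings
    scale = (d2 - d1) / (r2 - r1)          # mm of diameter per unit reading
    return lambda reading: d1 + (reading - r1) * scale

# Assumed masters: a 24.990 mm ring reads 30.0, a 25.010 mm ring reads 70.0
gauge = make_air_gauge((24.990, 25.010), (30.0, 70.0))
print(round(gauge(50.0), 3))   # 25.0, a part reading midway between masters
```

This is why air gauging is comparative: the masters, not the gauge scale, carry the absolute size.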

The practical examples of air gauging shown in Fig. 6.33 are the following:
(1) Thickness or wall-thickness measurement with a Millipneu jet air probe
(2) Diameter measurement of cylindrical through bores with a Millipneu jet air plug gauge
(3) Diameter measurement of cylindrical blind bores with a Millipneu jet air plug gauge
(4) Diameter measurement of cylindrical through bores with a Millipneu ball contact air plug gauge
(5) Diameter measurement of cylindrical blind bores with a Millipneu lever contact plug gauge
(6) Diameter or thickness measurement with an adjustable Millipneu jet air caliper gauge
(7) Diameter measurement of cylindrical shafts with a Millipneu jet air ring gauge
(8) Straightness measurement of a cylindrical bore with a Millipneu special jet air plug gauge
(9) Mating measurement between bore and shaft with a Millipneu jet air plug gauge and jet air ring gauge
(10) Conicity measurement of an inner cone with a Millipneu taper jet air plug gauge, based on the differential measurement method
(11) Measurement of perpendicularity of a cylindrical bore to the end face with a Millipneu special jet air plug gauge, based on the differential measurement method
(12) Measurement of spacing between separate cylindrical bores with Millipneu jet air plug gauges, based on the differential measurement method
(13) Measurement of spacing between incomplete cylindrical bores with Millipneu jet air plug gauges, based on the differential measurement method
(14) Conicity, form and diameter measurement of an inner cone with a Millipneu taper jet air plug gauge
(15) Multiple internal and external measurements with measuring jets and Millipneu contact gauges in conjunction with a Millipneu seven-column gauge (refer Fig. 6.33)

Jet Air Plug Gauges: Millipneu Jet Air Plug Gauge Millipneu jet air plug gauges are used for testing cylindrical through bores or blind bores. The plug gauge bodies are equipped with two opposing measuring jets, which record the measured value without contact. This arrangement allows the diameter, the diametric roundness and the cylindricity of bores to be determined using a single jet air plug gauge. The diameter is measured immediately after the jet air plug gauge is introduced, while the

Fig. 6.31 Jet air plug gauges (Courtesy, Mahr GMBH, Esslingen)

Fig. 6.32 Automatic gauge system (Courtesy, Mahr GMBH, Esslingen)

diametric roundness deviation can be tested by rotation through 180° and the cylindricity by movement in a longitudinal direction. The measuring range of the jet air plug gauges is a maximum of 76 μm (0.003 in). Jet air plug gauges are supplied as standard in hardened or chrome-plated versions and, if required, with a shut-off valve in the handle.

Fig. 6.33 (1 to 15) Air gauges: practical examples (Courtesy, Mahr GMBH, Esslingen)

The long service life, particularly of the jet air gauges, which are matched
to Millipneu dial gauges, is due in part to the fact that the hardened measuring
jets are recessed relative to the generated surface of the measuring body and
are thus extensively protected against damage.

5. Bore Gauges and ID Gauges Bore gauges and ID gauges are designed for dimensional measurement or assessment of the internal diameter of components. Bore gauges and ID gauges are available that employ variable or fixed mechanical, electronic or pneumatic technologies (Fig. 6.34). Specialized bore gauges have the capability to measure the degree of roundness (lobes), taper, or internal steps, grooves or serrations. Mechanical gauges use comparison, i.e., the physical movement and displacement of a gauging element/sensing probe (e.g., spindle, slide, stem), to determine the dimensions of a part or feature (refer Fig. 6.35). Micrometers, calipers, indicators, plug gauges, ring gauges and snap gauges are examples of mechanical gauges. Electronic bore gauges use LVDT, capacitance, inductive or other electronic probes to sense the distance or displacement of a contact or stylus. Mechanical gauges such as micrometers, plug gauges, and snap gauges may employ an integral electronic probe in addition to the mechanical gauging elements.
Pneumatic bore gauges or gauging systems use the changes in flow or pressure in air nozzles or inlets
internally located in air plugs, probes, rings or snaps or other pneumatic gauges. Pneumatic compara-
tors, digital readouts, analog amplifiers, columns or flow meter/rotameter tubes are used to display air
gauging dimensional data.
Specific types of bore gauges and ID gauges include internal calipers, slot gauges, indicating bore
gauges, and 3-point bore gauges. Calipers use a precise slide movement for inside, outside, depth or
step measurements. While calipers do not typically provide the precision of micrometers, they provide
a versatile and broad range of measurement capabilities: inside (ID), outside (OD), depth, step, thickness and length. Shop-tool, spring-type or firm-joint calipers consist of two legs with a scissor action and are usually used for comparative gauging or measurement transfer, although some spring-type calipers have dial indicators.
Slot gauges are expanding collet-type gauges used for comparative measurement of small holes. The
gauge is expanded in the hole and then removed and measured with a micrometer or other external
(OD) gauge. Alternately, the hole or slot gauge can be set and used to check if a hole is above or below
a specific size or tolerance.
Indicating bore gauges are gauging devices for comparative measurements where the linear movement of a spindle or plunger is amplified and displayed on a dial, column or digital display. Typically, indicators have a lower discrimination (∼0.001" to 0.0001") and greater range (∼+/− 1.000" to +/− 0.050" total) compared to comparators. This level of precision is sufficient for measurement of precision-ground parts and for the calibration of other gauges. Three-point bore gauges have three contact points mounted on arms that expand out from a central point. Three-point bore gauges can detect lobing or out-of-roundness conditions, which is an advantage over two-point ID gauges. These gauges usually have dial or digital displays.
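The advantage of the three-point gauge can be shown numerically. An odd-lobed form of nominally constant width, modelled here as r(θ) = R + a·cos 3θ with assumed values of R and a, returns exactly the same two-point "diameter" at every orientation, so a two-point gauge sees a perfect circle while the radius actually varies by ±a.

```python
# A 3-lobed form r(t) = R + a*cos(3t) has constant two-point width
# r(t) + r(t + pi) = 2R, because cos(3t) and cos(3t + 3*pi) cancel.
# A two-point gauge therefore misses the lobing; a three-point (or
# rotational) check does not.
import math

R, a = 12.5, 0.02   # mean radius and lobing amplitude, mm (assumed values)
r = lambda t: R + a * math.cos(3 * t)

widths = [r(t) + r(t + math.pi)
          for t in (k * math.pi / 180 for k in range(0, 360, 5))]
radii = [r(k * math.pi / 180) for k in range(0, 360, 5)]

print(round(max(widths) - min(widths), 9))  # two-point gauge sees no variation
print(round(max(radii) - min(radii), 3))    # 0.04: true out-of-roundness = 2a
```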

Fig. 6.35 Bore gauges (sensing probe indicated)

6. Taper Limit Gauges Taper plug gauges are used for checking tapered holes, and taper ring gauges are used for checking tapered shafts. They are used to check the diameter at the bigger end and the change in diameter per unit length; they do not measure the taper angle itself.

Figure 6.36 shows (a) a taper ring gauge, and (b) taper plug gauges, which are generally defined by the distance the gauge enters the hole. Therefore, two lines on the taper surface of the gauge are used to denote the upper (in red) and lower (in blue) limits.

Checking of a taper becomes critical when the defining dimensions of a machine taper are the included angle and the diameter at a specific reference level. Machine tapers differ widely in size, angle, and other characteristics, depending on the function of their intended application; they may range, for example, from a small twist-drill shank to a heavy machine spindle nose. This limits the use of a single standard fixed-limit taper gauge. Within their intended range, however, taper limit gauges are a convenient and dependable means of taper inspection.

Fig. 6.36 (a) Taper ring gauges (b) Taper plug gauges

7. Thread Gauges Thread gauges are dimensional instruments for measuring thread size, pitch or other parameters. A variety of thread-gauging instruments and tools exist, such as measuring wires, tri-roll comparators, thread plug gauges, thread ring gauges and thread micrometers. The appropriate variable or fixed-limit gauge for an application should be selected based on the internal and external thread type, the specific thread designation (UNS, UNF, UNC, NPT, ACME, Buttress),

part tolerances, and gauging frequency (shop vs. high-volume production).

Thread gauges can be one of any number of types. These include plug, ring, 3-wire, micrometer, tri-roll comparator, measuring wire, screw thread insert (STI), and thread-gauging roll-thread gauges. Thread plug gauges provide GO/NO-GO assessment of hole and slot dimensions or locations compared to specified tolerances. Thread ring gauges provide GO/NO-GO assessment compared to specified tolerances of the dimensions or attributes of pins, shafts or threaded studs. Three-wire thread gauges use thread wires to gauge thread size, with one wire mounted in one holder and two wires mounted in a second holder. The holders are placed in the measuring gauge and brought into contact with the threads.

Fig. 6.37 Use of taper gauges

Fig. 6.38 Use of taper holes
Thread micrometers are micrometers for measuring threads. A tri-roll comparator is a specialized thread gauge employing three thread rolls and a digital or dial display. The thread-gauging rolls can be interchanged to measure different thread sizes. A measuring wire is a specialized wire manufactured to precise gauge sizes for measuring external threads. The wire is wrapped or placed in the thread cavity and then a measurement is made with a micrometer or other OD gauge. STI gauges, also referred to as helical coils or helicoils, are used where a screw thread insert will be used. STI gauges are widely applied in the automotive industry. Thread-gauging rolls are threaded rolls for use on roll-thread comparators.

Fig. 6.39 Flat taper gauges (GO side and NOT GO side) used for testing of tapers in accordance with the light-gap method
Different thread types, profiles, and geometries provide different functionalities. Thread designations
include UNC, UNF, UNEF, UN, M/MJ (metric), NPT, NPTF, NPSF, ANPT, BSPT, BSPP, ACME,
and buttress. Thread gauges measure the size or diameter of the feature being measured. English pitch
is the threads per inch that the gauge can measure. Metric pitch is the metric thread spacing that the
gauge can measure.
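The measuring-wire method mentioned above reduces, for a 60° (ISO metric) thread form, to the standard relation E = M − d_w(1 + 1/sin(α/2)) + (p/2)·cot(α/2), with the best wire size d_w = p/(2 cos(α/2)). The sketch below ignores rake and wire-compression corrections, and the micrometer reading used is an assumed value.

```python
# Effective (pitch) diameter from a measurement over three wires, for a
# 60-degree (ISO metric) thread.  Rake and wire-compression corrections
# are ignored in this sketch.
import math

def best_wire(pitch, angle_deg=60.0):
    # Best wire touches the flanks at the pitch line: d_w = p / (2 cos(a/2))
    return pitch / (2.0 * math.cos(math.radians(angle_deg) / 2.0))

def effective_diameter(M, wire_d, pitch, angle_deg=60.0):
    half = math.radians(angle_deg) / 2.0
    # E = M - d_w * (1 + 1/sin(a/2)) + (p/2) * cot(a/2)
    return M - wire_d * (1.0 + 1.0 / math.sin(half)) + (pitch / 2.0) / math.tan(half)

p = 1.5                      # pitch of an M10 x 1.5 thread, mm
w = best_wire(p)             # about 0.866 mm
M = 10.325                   # assumed micrometer reading over the wires, mm
print(round(effective_diameter(M, w, p), 3))  # 9.026, close to the nominal
                                              # pitch diameter of M10 x 1.5
```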

Fig. 6.40 Thread gauges: thread limit snap gauge, used for testing the flank diameter D2 with the NO-GO side of the limit gauge (a: GO, b: NO-GO)

Fig. 6.41
(Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India)

Common shapes or geometries measured include cylindrical and tapered or pipe shapes. A GO gauge
provides a precision tool for production of comparative gauging based on a fixed limit. GO gauges
consist of a fixed-limit gauge with a gauging limit based on the plus or minus tolerances of the inspected
part. NO-GO or NOT-GO gauges provide a precision tool for production of comparative gauging
based on a fixed limit. NO-GO gauges consist of a fixed-limit gauge with a gauging limit based on the
minimum or maximum tolerances of the inspected part. GO/NO-GO gauges are configured with a GO
gauge pin on one end and a NO-GO gauge pin on the opposite end of the handle. GO/NO-GO gauges
provide a precision tool for production of comparative gauging based on fixed limits. GO/NO-GO
gauges are manufactured in the form of stepped pins with the GO gauge surface and the NO-GO gauge
surface on the same side of the handle. The gauge can save time in gauging since the gauge does not have to be reversed for NO-GO gauging. Master gauge blocks, master or setting discs, and setting rings
are types of master gauges used to calibrate or set micrometers, comparators, or other gauging systems.
Fixed limit or step gauges are specialized thread plug gauges for gauging taper pipe threads. Notches or
external steps indicate maximum and minimum allowable tolerances. Tolerance classes for thread gauges
include Class XX, Class X, Class Y, Class Z, Class ZZ and thread Class W.
Measurement units for thread gauges can be either English or metric. Some gauges are configured
to measure both. The display on the gauge can be non-graduated meaning that the gauge has no dis-
play, dial or analog, digital display, column or bar graph display, remote display, direct reading scale, or
vernier scale.
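The gaugemakers' tolerance classes listed above assign a manufacturing tolerance to the gauge itself by size range. The values below are the commonly quoted ones for the smallest ASME size range (0.010 to 0.825 in) and must be confirmed against ASME B89.1.5 before use; the plus/minus direction follows the rule stated earlier for GO plug and GO ring gauges.

```python
# Gaugemakers' tolerance by class for plain cylindrical gauges, for the
# smallest ASME size range (0.010-0.825 in).  Values in inches are the
# commonly quoted ones; confirm against ASME B89.1.5 before use.
GAGE_TOL = {"XX": 0.00002, "X": 0.00004, "Y": 0.00007,
            "Z": 0.0001, "ZZ": 0.0002}

def gauge_limits(nominal, cls, kind="GO_plug"):
    """GO plugs are toleranced plus, GO rings minus (into the work zone)."""
    t = GAGE_TOL[cls]
    if kind == "GO_plug":
        lo, hi = nominal, nominal + t     # plus tolerance from the low limit
    elif kind == "GO_ring":
        lo, hi = nominal - t, nominal     # minus tolerance from the high limit
    else:
        raise ValueError(kind)
    return (round(lo, 5), round(hi, 5))

print(gauge_limits(0.5, "Z"))             # (0.5, 0.5001)
print(gauge_limits(0.5, "Z", "GO_ring"))  # (0.4999, 0.5)
```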

8. Splined Gauges These are made from blanks whose design varies according to the size range to be accommodated. Splined gauges are available as plug gauges (as shown in Fig. 6.42) or ring gauges, as required. The basic forms of splines are involute, serrated or straight-sided. Form selection depends upon the dimensions, the torque to be transmitted, manufacturing considerations and the type of fit.

9. Radius Gauge These gauges are used to inspect inside and outside radii on a part profile. With radius gauges, an unknown radius can be checked, but only against the limited set of sizes provided. While inspecting an unknown radius with these gauges, a trial-and-error procedure has to be followed. The size of the radius is marked on the surface of these gauges (refer Fig. 6.43).

10. Filler (Feeler) Gauge In the case of a machine assembly or finished-product assembly, the distance between two mating surfaces of subcomponents cannot be measured by conventional measuring instruments. To measure the dimension of such a gap, a stack of filler gauges of exactly the right size (refer Fig. 6.44) is built up to fit properly in the gap; the size of the stack then gives the dimension of the gap. The sizes of these fillers are marked on their surfaces, which helps in building up the required stack size.

Fig. 6.42 Splined gauge
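Building up such a stack is a greedy, largest-first selection, much like wringing slip gauges. A minimal sketch, assuming a hypothetical set of leaf thicknesses with one leaf of each size:

```python
# Build a feeler stack for a gap by greedy largest-first selection from
# an assumed set of leaf thicknesses (mm), one leaf of each size.
LEAVES = [1.00, 0.50, 0.40, 0.30, 0.20, 0.15, 0.10, 0.05]

def build_stack(gap_mm):
    stack, remaining = [], round(gap_mm, 2)
    for leaf in LEAVES:                   # try the largest leaves first
        if remaining >= leaf - 1e-9:      # small epsilon for float rounding
            stack.append(leaf)
            remaining = round(remaining - leaf, 2)
    return stack

print(build_stack(1.85))  # [1.0, 0.5, 0.3, 0.05]
```

Greedy selection works here because each leaf size is at least the sum of usable smaller steps at this resolution, as in commercial feeler sets.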

Fig. 6.43 Radius gauge, with blades for inspecting internal and external radii (Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India)

6.10.3 Dimensional Gauges


These provide quantitative measurements of a product’s or component’s attributes such as wall thick-
ness, depth, height, length, ID, OD, taper or bore. Dimensional gauges and instruments encompass
the following:
Limits, Fits and Tolerances 179

Fig. 6.44 Filler gauge


(Courtesy, Metrology Lab, Sinhgad C.O.E., Pune, India)

Air or pneumatic gauges, Bore and ID gauges, Calipers, Digital or electronic gauges, Custom or
OEM gauges, Depth gauges, Masters, setting gauges and other dimensional standards (gauge blocks,
end measuring rods, gauging balls), gauge head or probes, gauge sets or measuring tool kits, gauging
systems or stations, GO-NO GO, attribute or functional gauges (plugs, rings, snaps, flush-pins), height
gauges, indicators and comparators, Laser micrometers, Mechanical micrometers, Micrometer heads,
Thickness gauges, Thread or serration gauges, Specialty and other gauges—designed specifically for
gear, spring, runout, impeller, form or other special functions. The specific gauge best suited for an
application will depend on the part geometry, production volume, gauging conditions (inline vs offline
and environmental factors) and the dimensional tolerance requirements particular to the component
or design. Figures 6.45 (a) to (j) show some special-purpose dedicated (fixed and adjustable)
gauges and inspection templates.

6.10.4 Design of Limit Gauges


1. Guidelines for Gauge Design While designing gauges for specific applications, the fol-
lowing guidelines for gauge design are to be considered.

i. The form of GO gauges should exactly coincide with the form of the mating part.
ii. GO gauges should enable simultaneous checking of several dimensions; a GO gauge checks the
maximum metal condition and must pass over every acceptable part.
iii. NO GO gauges should enable checking of only one dimension at a time; a NO GO gauge checks the
minimum metal condition and must not pass over an acceptable part.
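For a simple diameter check, Taylor's principle above reduces to a pass/fail rule: accept only if the GO gauge enters and the NO-GO gauge does not. A minimal sketch (the function name and the 25.000–25.021 mm hole used here are illustrative, not from the text):

```python
def inspect_hole(diameter, go_size, no_go_size):
    """Accept a hole only if the GO plug enters (diameter >= GO size)
    and the NO-GO plug does not enter (diameter < NO-GO size)."""
    go_enters = diameter >= go_size
    no_go_enters = diameter >= no_go_size
    return go_enters and not no_go_enters

# Hole toleranced 25.000-25.021 mm, gauged with GO = 25.000 and NO-GO = 25.021
print(inspect_hole(25.010, 25.000, 25.021))  # True  (within limits)
print(inspect_hole(24.998, 25.000, 25.021))  # False (undersize: GO fails to enter)
print(inspect_hole(25.030, 25.000, 25.021))  # False (oversize: NO-GO enters)
```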

Fig. 6.45 Figures (a), (b), (c) and (d) are fixed gauges; (e) and (f) are special types of
adjustable gauges; (g), (h), (i) and (j) are dedicated inspection templates

2. Material Considerations for Gauges Gauges are inspection tools requiring a high degree
of wear resistance. Apart from this, a gauge is also required to ensure stability of its size and shape,
offer corrosion resistance, and have a low temperature coefficient. Therefore, gauges are made from
special types of alloys and by special processes. A few such materials are listed along with their
special properties in Table 6.17.

Table 6.17 Materials used for gauge with their special properties

Type of Material Special Properties

1. Chromium Plating Increased wear resistance (restoring the worn gauges to original size)
2. Flame-plated tungsten carbide Increasing the size of coating substantially increases wear life (where
frequency of usage is comparatively high)
3. Tungsten carbide Great stability, wear resistance. Controlled temperature environment
is required (used in case of extensive usage and against high abrasive
work surfaces).

4. Ceramic Greatest degree of wear resistance; more brittleness; high differential
coefficient of thermal expansion

3. Gauge Tolerance The expected function of the fixed gauges and the dimensions to be
measured are the variables that necessarily require a wide variety of gauge types. Gauges are used as
a tool to inspect dimensions (but they are not used to measure dimensions). Like any
other part/component, gauges must after all be manufactured by some process, which requires a
manufacturing tolerance. After knowing the maximum and minimum metal conditions of the job
dimension under inspection, the size of gauge tolerance on the gauge is allowed. This tolerance,
to anticipate the imperfection in the workmanship of the gauge-maker, is called gaugemaker’s tol-
erance. Technically, the gauge tolerance should be as small as possible, but it increases the manu-
facturing cost (refer Article 6.4.2). There is no universally accepted policy for the amount of gauge
tolerance to be considered while designing the size of the gauge. In industry, limit gauges are made
10 times more accurate than the tolerances to be controlled. In other words, limit gauges are usually
provided with the gauge tolerance of 1/10th of work tolerance. Tolerances on inspection gauges are
generally 5% of the work tolerance, and that on a reference or master gauge is generally 10% of the
gauge tolerance.
After determining the magnitude of gauge tolerance, to avoid the gauge in accepting defective work,
the position of gauge tolerance with respect to the work limits is to be decided. There are two types
of systems of tolerance allocation, viz., unilateral and bilateral (refer Fig. 6.46). In case of a bilateral
system, GO and NOT GO tolerance zones are divided into two parts by upper and lower limits of
the workpiece tolerance zone. The main disadvantage of this system is that those parts which are not
within the tolerance zone can pass the inspection and vice versa. In case of a unilateral system, the work
tolerance entirely includes the gauge-tolerance zone. It reduces the work tolerance by some magnitude

Fig. 6.46 Systems of gauge-tolerance allocation (bilateral and unilateral placement relative to
the upper and lower limits of the work tolerance)

of the gauge tolerance. Therefore, this system ensures that the gauge will allow those components only
which are within the work tolerance zone.

4. Wear Allowance As soon as the gauge is put into service, its measuring surface rubs con-
stantly against the surface of the workpiece. This results in wearing of the measuring surfaces of
the gauge. Hence, it loses its initial dimensions. Consider a GO gauge that is made exactly to the
maximum material size (condition) of the dimension to be gauged. The slightest wear of the gauging
member causes the gauge size to pass those parts which are not within its design tolerance zone. In
other words, the size of the GO plug gauge is reduced due to wear and that of a snap or ring gauge
is increased.
For the reason of gauge economy, it is customary to provide a certain amount of wear allowance while
dimensioning the gauge, and it leads to a change in the design size of the gauge. Wear allowance must be
applied to a GO gauge and is not needed for NOT-GO gauges as wear develops in the direction of safety.
Wear allowance is usually taken as 10 % of gauge tolerance. It is applied in the direction opposite to wear,
i.e., in case of a plug gauge, wear allowance is added and in ring or gap/snap gauge, it is subtracted.
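The two customary rules above (gauge tolerance = 10% of work tolerance, wear allowance = 10% of gauge tolerance, added for a plug gauge and subtracted for a ring or snap gauge) can be sketched as a small helper. This is an illustrative sketch of the text's rules, not a standard routine; the function name is an assumption:

```python
def go_gauge_size(max_metal_limit, work_tolerance, gauge_type):
    """Basic size of a GO gauge after the customary allowances:
    gauge tolerance = 10% of work tolerance, wear allowance = 10% of that.
    Wear is added for a plug gauge and subtracted for a ring/snap gauge."""
    wear = 0.10 * (0.10 * work_tolerance)
    if gauge_type == "plug":   # gauges a hole; wear shrinks the plug, so start larger
        return max_metal_limit + wear
    if gauge_type == "ring":   # gauges a shaft; wear enlarges the ring, so start smaller
        return max_metal_limit - wear
    raise ValueError("gauge_type must be 'plug' or 'ring'")

# Hole 25.000-25.021 mm: GO plug starts 0.00021 mm above the low limit
print(round(go_gauge_size(25.000, 0.021, "plug"), 5))  # 25.00021
# Shaft 24.957-24.990 mm: GO ring starts 0.00033 mm below the high limit
print(round(go_gauge_size(24.990, 0.033, "ring"), 5))  # 24.98967
```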

Fig. 6.47 Positions of the GO and NOT-GO gauge zones within the work tolerance for plug and
snap gauges

5. Allocation of Gauge Tolerance and Wear Allowance Allocation of gauge toler-


ance is as per policy decision. According to purpose, gauges can be classified as workshop gauges,
inspection gauges and general gauges. For allocating gauge tolerance and wear allowance for the above-
said gauges, the following guiding principles are used:

1. No work should be produced by workshops or accepted by the inspection department which lies
outside the prescribed limits of size.
2. No work should be rejected which lies within the prescribed limits of size.

These two principles pertain to two situations and the common conclusion (solution) to this is to
employ two sets of gauges, one set to be used during manufacturing (known as workshop gauge) and
the other (inspection gauges) to be used for final inspection of parts. Tolerances on workshop gauges
are arranged to fall inside the work tolerances, and tolerances on inspection gauges are arranged to fall
outside the work tolerances. To approach the first principle, general gauges are recommended. In this
type of gauges, the tolerance zone for a GO gauge is placed inside the work tolerance and the tolerance
zone for a NOT-GO gauge is placed outside the work tolerance (refer Fig. 6.48).

Fig. 6.48 Tolerance zone for gauges (object tolerance and gaugemaker's tolerance about the
design size of a master gauge)

In case of a master gauge (setting gauge for comparator instruments), the gaugemaker's tolerances
are distributed bilaterally. This is done using two parameters: the first is the size of the object and the
other is the median size of the permissible object size limits.

6. Gauging Force It is the amount of force applied for inserting the gauge into the part geom-
etry during inspection. Many parameters are involved in this process, viz., the material of the part,
the elasticity of the material, the gauging dimensions and conditions, etc. Therefore, it is very dif-
ficult to standardize the gauging force. In practice, if a GO gauge fails to assemble with the part, then
the part is quite definitely outside the maximum metal limit. Similarly, if a NO-GO gauge assembles freely
under its own weight, then the part under inspection is obviously rejected. Chamfering is provided on
GO gauges to avoid jamming.

7. Twelve Questions for Dimensional Gauge Selection How do we select a dimen-


sional gauge? There are literally thousands of varieties, many of which could perform the inspection

Fig. 6.49 Allocation of gauge tolerance and wear allowance: NOT-GO and GO plug-gauge zones for
workshop, inspection and general gauges placed about the hole tolerance (+ve), with the wear
allowance and direction of wear of the GO gauge marked; GO and NOT-GO ring/gap-gauge zones
similarly placed about the shaft tolerance (−ve). (HL = Higher limit, LL = Lower limit)



task at hand but not all of which will be efficient, practical or cost-effective. The first step in finding the
best tool for the job is to take a hard look at the application. Answers to the following questions will
help the user zero in on the gauging requirements.
• What is the nature of the feature to be inspected? Are you measuring a dimension or a location?
Is the measurement a length, a height, a depth or an inside or outside diameter?
• How much accuracy is required? There should be a reasonable relationship between the specified
tolerance and the gauge’s ability to resolve and repeat. Naturally, the gauge must be more precise
than the manufacturing tolerance, but a gauge can be too accurate for an application.
• What’s in the budget for gauge acquisition? Inspection costs increase sharply as gauge accuracy
improves. Don’t buy more than you need.
• What’s in the budget for maintenance? Is the gauge designed to be repairable or will you toss it
aside when it loses accuracy? How often is maintenance required? Will maintenance be performed
in-house or by an outside vendor? Remember to figure in the costs of mastering and calibrating.
• How much time is available, per part, for inspection? Fixed, purpose-built gauging may seem less
economical than a more flexible, multipurpose instrument, but if it saves a thousand hours of
labour over the course of a production run, it may pay for itself many times over.
• How foolproof must the gauge be, and how much training is required? Fixed gauging is less prone
to error than adjustable gauging. Digital display is not necessarily easier to read than analog. Can
you depend on your inspectors to read the gauge results accurately at the desired rate of through-
put? If not then some level of automation may be useful.
• Is the work piece dirty or clean? Some gauges can generate accurate results even on dirty parts,
others can’t.
• Is the inspection environment dirty or clean, stable or unstable? Will the gauge be subject to
vibration, dust, changes in temperature, etc.? Some gauges handle these annoyances better
than others.
• How is the part produced? Every machine tool imposes certain geometric and surface-finish irreg-
ularities on workpieces. Do you need to measure them, or at least take them into consideration
when performing a measurement?
• Are you going to bring the gauge to the part or vice-versa? This is partly a function of part size
and partly of processing requirements. Do you need to measure the part while it is still chucked in
a machine tool, or will you measure it only after it is finished?
• What is the part made of? Is it compressible? Easily scratched? Many standard gauges can be
modified to avoid such influences.
• What happens to the part after it is inspected? Are bad parts discarded or reworked? Is there a
sorting requirement by size? This may affect the design of the inspection station as well as many
related logistics.

Illustrative Examples

Example 1 Design a plug gauge for checking the hole of 70H8. Use i = 0.45 ∛D + 0.001D, IT8 = 25i,
diameter step = 50 to 80 mm.

Solution: Internal dimension = 70H8; d1 = 50, d2 = 80

D = √(d1 × d2) = √(50 × 80) = 63.245 mm

i = 0.45 ∛63.245 + 0.001(63.245) = 1.8561 microns

Tolerance for IT8 = 25i = 25 × 1.8561 = 46.4036 microns


Hole dimensions
GO limit of hole = 70.00 mm
NO GO limit of hole = 70.00 + 0.04640 = 70.04640 mm

GO plug gauge design
Workmanship allowance = 10% of hole tolerance = (10/100) × 0.04640 = 0.004640 mm
Hole tolerance is less than 87.5 microns; hence it is not necessary to provide wear allowance on a
GO plug gauge.
Lower limit of GO = 70.00000 mm
Upper limit of GO = 70.00000 + 0.004640 = 70.00464 mm
Sizes of GO = 70.00 (+0.004640 / +0.000000) mm

NO GO plug gauge
Workmanship allowance = 0.004640 mm
Sizes of NO GO = 70.00 (+0.04640 / +0.04640 − 0.004640) = 70.00 (+0.04640 / +0.04176) mm
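The arithmetic of this example can be verified numerically. A small sketch, assuming the formula above with D in mm and i in microns (the function name is illustrative):

```python
import math

def standard_tolerance_unit(d1, d2):
    """i (microns) = 0.45 * cube root of D + 0.001 * D,
    where D (mm) is the geometric mean of the diameter step."""
    D = math.sqrt(d1 * d2)
    return D, 0.45 * D ** (1 / 3) + 0.001 * D

D, i = standard_tolerance_unit(50, 80)  # diameter step enclosing a 70 mm hole
it8 = 25 * i                            # IT8 = 25i (microns)
print(round(D, 3))    # 63.246
print(round(i, 4))    # 1.8561
print(round(it8, 1))  # 46.4
```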

Example 2 Design and make a drawing of general-purpose GO and NO-GO plug gauges for inspecting
a hole of 22 D8. Data with usual notations:
i. i (microns) = 0.45 ∛D + 0.001D
ii. Fundamental deviation for hole D = 16 D^0.44
iii. Value for IT8 = 25i

Solution:
(a) Firstly, find out the dimension of the hole specified, i.e., 22 D8.
For a diameter of 22 mm, the step size (refer Table 6.3) = (18 − 30) mm

∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm

And, i = 0.45 ∛D + 0.001D

∴ i = 0.45 ∛23.2379 + 0.001(23.2379) = 1.3074 microns

Tolerance value for IT8 = 25i …(refer Table 6.4)
= 25 (1.3074) = 32.685 microns = 0.03268 mm
(b) Now, Fundamental Deviation (FD) for the hole,
FD = 16 D^0.44 = 16 (23.2379)^0.44 = 63.86 microns = 0.06386 mm
Lower limit of the hole = basic size + FD
= (22.00 + 0.06386) mm
= 22.06386 mm
And upper limit of the hole = Lower limit + Tolerance
= (22.06386 + 0.03268) mm
= 22.0965 mm

Fig. 6.50 Tolerance zone for the hole ('D' tolerance between the upper and lower levels;
fundamental deviation measured from the basic size)

(c) Now consider gaugemaker’s tolerance (refer Article 6.9.4 (c)) = 10% of work tolerance.
= 0.03268(0.1) mm
= 0.00327 mm
(d) Wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker's tolerance
= 0.00327 (0.1) mm = 0.000327 mm
(e) For designing a general-purpose gauge,
∴ Size of GO plug gauge after considering wear allowance = (22.06386 + 0.000327) mm = 22.0641 mm
∴ GO size is 22.0641 (+0.00327 / −0.00) mm and NO-GO size is 22.0965 (+0.00327 / −0.00) mm.
Refer Fig. 6.51.

Fig. 6.51 Graphical representation of the general-purpose gauge (GO zone 22.0641–22.06737 mm;
NO-GO zone 22.0965–22.0997 mm; work tolerance = 0.0326 mm)
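The fundamental-deviation step of Example 2 can be checked the same way. A sketch assuming the given data, FD = 16 D^0.44 in microns with D in mm (the function name is illustrative):

```python
import math

def fd_hole_D(d1, d2):
    """Fundamental deviation (microns) for a 'D' hole: +16 * D**0.44,
    with D (mm) the geometric mean of the diameter step."""
    D = math.sqrt(d1 * d2)
    return 16 * D ** 0.44

fd = fd_hole_D(18, 30)  # 22 mm lies in the 18-30 mm step
print(round(fd, 2))     # 63.86 microns, i.e., 0.06386 mm as computed above
```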

Example 3 Design workshop-type GO and NO-GO gauges suitable for 25 H7. Data with usual
notations:
1. i (in microns) = 0.45 ∛D + 0.001D
2. The value for IT7 = 16i.
Solution:
(a) Firstly, find out the dimension of the hole specified, i.e., 25 H7.

For a diameter of 25 mm, the step size (refer Table 6.3) = (18 − 30) mm

∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm

And, i = 0.45 ∛D + 0.001D

∴ i = 0.45 ∛23.2379 + 0.001(23.2379) = 1.3074 microns

Tolerance value for IT7 = 16i …(refer Table 6.4)
= 16 (1.3074) = 20.85 microns ≅ 21 microns = 0.021 mm

(b) Limits for 25 H7 = 25.00 (+0.021 / −0.00) mm
∴ Tolerance on hole = 0.021 mm
(c) Now consider gaugemaker’s tolerance [refer Article 6.9.4 (c)] = 10% of work tolerance
∴ tolerance on GO Gauge = 0.0021 mm, similarly, NO-GO is also = 0.0021 mm.
(d) As tolerance on the hole is less than 0.1 mm, therefore no wear allowance will be provided.
(e) For designing workshop-type gauge
Refer Fig. 6.52.

Fig. 6.52 Graphical representation of the workshop-type gauge (GO gauge at the 25.000 mm limit,
NO-GO gauge at the 25.021 mm limit; work tolerance = 0.021 mm, +ve)



Example 4 Design 'workshop', 'inspection', and 'general' type GO and NO-GO gauges for checking the
assembly φ25 H7/f8 and comment on the type of fit. Data with usual notations:

1) i (microns) = 0.45 ∛D + 0.001D
2) Fundamental deviation for shaft 'f ' = −5.5 D^0.41
3) Value for IT7 = 16i and IT8 = 25i
4) 25 mm falls in the diameter step of 18 and 30.
Solution:
(a) Firstly, find out the dimension of hole specified, i.e., 25 H7
For a diameter of 25 mm, the step size (refer Table 6.3) = (18 − 30) mm

∴ D = √(d1 × d2) = √(18 × 30) = 23.2379 mm

And, i = 0.45 ∛D + 0.001D

∴ i = 0.45 ∛23.2379 + 0.001(23.2379) = 1.3074 microns

Tolerance value for IT7 = 16i …(refer Table 6.4)
= 16 (1.3074) = 20.85 microns ≅ 21 microns = 0.021 mm

(b) Limits for 25 H7 = 25.00 (+0.021 / −0.00) mm
∴ tolerance on hole = 0.021 mm
Tolerance value for IT8 = 25i …(refer Table 6.4)
= 25 (1.3074) = 32.6435 ≅ 33 microns
(c) Fundamental deviation for shaft 'f ' = −5.5 D^0.41
= −5.5 (23.2)^0.41
= −10.34 ≅ −10 microns

Limits for shaft f8 = 25.00 (−0.010 / −0.043) mm.
(d) Now consider gaugemaker’s tolerance for hole gauging [refer Article 6.9.4 (c)] = 10% of work
tolerance.
∴ tolerance on GO Gauge = 0.0021 mm.
(e) Wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker’s tolerance
∴ wear allowance = 0.1(0.0021) = 0.00021 mm
(f ) Now consider gaugemaker’s tolerance for shaft gauging [refer Article 6.9.4 (c)] = 10% of work
tolerance.
∴ tolerance on GO Gauge = 0.0033 mm
(g) Wear allowance [refer Article 6.9.4 (d)] is considered as 10% of gaugemaker’s tolerance
∴ wear allowance = 0.1 (0.0033) = 0.00033 mm
(h) Now the gauge limits can be calculated by referring Fig. 6.49 and the values are tabulated as
follows:

Table 6.18

Types of Gauges   Plug Gauge (for hole gauging)                        Ring Gauge (for shaft gauging)
                  GO gauge                   NO-GO gauge               GO gauge                    NO-GO gauge
Workshop          25.00 (+0.00231/+0.00021)  25.00 (+0.0210/+0.0189)   25.00 (−0.01033/−0.01363)   25.00 (−0.0397/−0.0430)
Inspection        25.00 (−0.0000/−0.0021)    25.00 (+0.0231/+0.0210)   25.00 (−0.0067/−0.0100)     25.00 (−0.0463/−0.0430)
General           25.00 (+0.00231/+0.00021)  25.00 (+0.0231/+0.0210)   25.00 (−0.01033/−0.01363)   25.00 (−0.0463/−0.0430)

(All values in mm; the figures in parentheses are the upper and lower deviations of the gauge
limits from the basic size of 25.00 mm.)
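The plug-gauge columns of Table 6.18 follow mechanically from the allocation rules. A sketch under the text's assumptions (gauge tolerance = 10% of work tolerance, wear allowance = 10% of gauge tolerance, applied only to the GO gauges of the workshop and general types; the function name is illustrative):

```python
def gauge_limits(low, high):
    """GO and NO-GO plug-gauge limits (mm) for a hole toleranced [low, high].
    Gauge tolerance = 10% of work tolerance; wear allowance = 10% of gauge
    tolerance, applied only to workshop and general GO gauges."""
    gt = 0.10 * (high - low)   # gaugemaker's tolerance
    w = 0.10 * gt              # wear allowance
    return {
        # each value: (GO low, GO high), (NO-GO low, NO-GO high)
        "workshop":   ((low + w, low + w + gt), (high - gt, high)),
        "inspection": ((low - gt, low),         (high, high + gt)),
        "general":    ((low + w, low + w + gt), (high, high + gt)),
    }

for policy, (go, nogo) in gauge_limits(25.000, 25.021).items():
    print(policy, [round(x, 5) for x in go], [round(x, 5) for x in nogo])
```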

Fig. 6.53 Allocation of gauge tolerance and wear allowance for the 25 H7/f8 assembly
(HL of hole = 25.021 mm, LL of hole = 25.000 mm; HL of shaft = 24.990 mm,
LL of shaft = 24.957 mm; HL = Higher limit, LL = Lower limit)



Review Questions

1. Explain the concept of interchangeability with examples.


2. Discuss the need of the use of selective assembly by giving a practical example.
3. Define the terms:
(a) Limits (b) Tolerance (c) Basic size (d) Fundamental Deviation (e) Fit
(f ) Gaugemaker’s Tolerance (g) Wear allowance (h) Go and NO-GO Gauge
4. Explain the need and types of giving the tolerances with examples.
5. Discuss unilateral and bilateral systems of writing tolerances with suitable examples and explain
which system is preferred in interchangeable manufacture and why.
6. State and explain Taylor’s principle of limit-gauge design.
7. Write a short note on limit gauges.
8. Define fits and explain in brief the types of fits.
9. Explain with a neat diagram the essential conditions of interference and clearance.
10. Write down the examples of use of the following types of fits:
(a) Push fit (b) Press fit (c) Running clearance fit (d) Wringing fit (e) Shrink fit
11. Differentiate between
(a) Tolerance and allowance
(b) Interchangeable manufacturing and selective assembly concepts
(c) Hole-base system and shaft-base system
(d) Measuring instrument and gauge
(e) Workshop gauge and inspection gauge
12. Explain with a sketch the allocation of gauge tolerance and wear allowance for workshop, inspec-
tion and general grade conditions.
13. Enumerate the types of plug gauges and draw neat sketches of any three of them by stating their
applications.
14. Discuss the use of taper plug gauges.
15. Draw the sketch of a progressive-type solid plug gauge and discuss the advantages and limitations
of this type of gauging.
16. Explain the use of bore gauge and filler gauge.
17. Discuss the various applications of air plug gauges.

18. Describe the procedure to use splined and thread gauges.


19. Design a plug gauge for checking the hole of 70 H8. Use i = 0.45 ∛D + 0.001D, IT8 = 25i;
diameter step of 50 to 80 mm.
20. Design a workshop type of GO and NOGO ring gauge for inspection of 30 F8 shaft. Use the fol-
lowing data with usual notations:

a. i = 0.45 ∛D + 0.001D
b. The fundamental deviation for shaft 'f ' = −5.5 D^0.41
c. The value for standard tolerance grade IT8 = 25i
d. The diameter steps available are 18–30, 30–50, 50–80
21. Design GO and NOGO limit plug gauges for checking a hole having a size of 40 (+0.04 / +0.00) mm. Assume
the gaugemaker’s tolerance to be equal to 10% of work tolerance and wear allowance equal to 10%
of gaugemaker’s tolerance.
22. A shaft of 35±0.004 mm is to be checked by means of GO and NOGO gauges. Design the dimen-
sions of the gauge required.
23. A 25-mm H8F7 fit is to be checked. The limits of size for the H8 hole are high limit = 25.03 mm
and low limit equal to basic size. The limits of the size for an F7 shaft are high limit = 24.97 mm
and low limit = 24.95 mm. Taking gaugemaker’s tolerance equal to 10% of the work tolerance,
design a plug gauge and gap gauge to check the fit.
24. Design a plug and ring gauge to control the production of a 90-mm shaft and hole part of H8e9.
Data given:

a. i = 0.45 ∛D + 0.001D
b. The upper deviation for 'e' shaft = −11 D^0.41
c. The value for standard tolerance grade IT8 = 25i and IT9 = 40i
d. 90 mm lies in the diameter step of 80 mm and 100 mm
25. Explain in brief what is meant by the term tolerance zone as used in positional or geometrical tol-
erancing. How are they specified on a drawing?
26. Describe some precautions to be taken in prescribing the accuracy of a limit gauge.
27. What is a gauge? Provide suitable definition and explain how a workshop gauge differs from an
inspection gauge.
28. Design a suitable limit gauge confirming to Taylor’s principle for checking a 60H7 square hole that
is 25 mm wide. How many gauges are required to check this work? Sketch these gauges and justify
your comments.
29. A 70-mm m6 shaft is to be checked by GO-NO-GO snap gauges. Assume 5% wear allowance and
10% gaugemaker's tolerance (% of the tolerance of the shaft). The fundamental deviation for an m fit
is (IT7 − IT6), where the multiplier for grade IT7 is 16 and for IT6 is 10. Sketch the workshop, inspection
and general gauges.
7 Angular Metrology

Checking of surfaces ends with angular metrology…


Prof. A P Deshmukh, Production Engineering Dept., D Y Patil College of Engineering, Pune

ANGULAR MEASUREMENT—THEN AND NOW

In ancient ages, angular measurement was used for setting up direction while traveling. Sailors on
the high seas completely relied upon their prismatic compasses for finding out a desired direction.
Today, precise angular measurements help in the navigation of ships and airplanes. They are also
used in land surveys, in astronomy for computing distances between stars and planets, measuring the
distance of air travel by projection, identifying the positions of flying objects, and so on.

As an angle is the measure of the opening between two lines, absolute standards are not required.
The circle obtained by rotating a line can be divided into 360 parts to form degrees, which can be
further subdivided into minutes and seconds. This serves as an essential part of linear measurement.

New-age production methods demand precise interchangeable parts and assemblies to increase the
reliability of a product. The helix angle of a shaving cutter determines the surface finish of the
product by defining its grain flow on the face. The normal pressure angle of the gear decides the
quality of the gear to cater to the needs of DIN/ISO/AGMA tolerances. The contacting angle of the
probe with the surface decides the quality of measurement obtained by a CMM. Various types of
measuring instruments have different kinds of attachments set at appropriate angles for extensive
modular systems. The applications of angular measurements are versatile and are essential
complements to linear measurement.

7.1 INTRODUCTION

The concept of an angle is one of the most important concepts in geometry. The concepts of equality,
and sums and differences of angles are important and are used throughout geometry; but the subject
of trigonometry is based on the measurement of angles.

There are two commonly used units of measurement for angles. The more familiar unit of measure-
ment is the degree. A circle is divided into 360 equal degrees, and a right angle has 90 degrees in it. For
the time being, we’ll only consider angles between 0° and 360°.
Degrees may be further divided into minutes and seconds, but that division is not as universal as
it used to be. Parts of a degree are now frequently referred to decimally. For instance, seven and a half
degrees is now usually written as 7.5°. Each degree is divided into 60 equal parts called minutes. So seven
and a half degrees can be called 7 degrees and 30 minutes, written as 7° 30'. Each minute is further
divided into 60 equal parts called seconds, and, for example, 2 degrees 5 minutes 30 seconds is written as
2° 5' 30''. The division of degrees into minutes and seconds of an angle is analogous to the division of
hours into minutes and seconds of time.
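The degrees-minutes-seconds arithmetic above is easy to mechanize. A small sketch (the function name is illustrative):

```python
def dms_to_degrees(d, m=0.0, s=0.0):
    """Convert degrees-minutes-seconds to decimal degrees
    (60 minutes per degree, 60 seconds per minute)."""
    return d + m / 60 + s / 3600

print(dms_to_degrees(7, 30))               # 7.5    (7 deg 30', as in the text)
print(round(dms_to_degrees(2, 5, 30), 4))  # 2.0917 (2 deg 5' 30'')
```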
Usually, when a single angle is drawn on an xy-plane for analysis, we draw it with the vertex at the
origin (0, 0), one side of the angle along the x-axis, and the other side above the x-axis.

7.2 RADIANS AND ARC LENGTH

The other common measurement unit for angles is the radian. For this measurement, consider the unit circle
(a circle of radius 1 unit) whose centre is the vertex of the angle in question. Then the angle cuts off
an arc of the circle, and the length of that arc is the radian measure of the angle. It is easy to convert
a degree measurement to radian measurement and vice versa. The circumference of the entire circle is
2π (π is about 3.14159), so it follows that 360° equals 2π radians. Hence, 1° equals π/180 radians and
1 radian equals 180/π degrees.
An alternate definition of radian is sometimes given as a ratio. Instead of taking the unit circle
with centre at the vertex of the angle, take any circle with its centre at the vertex of the angle. Then
the radian measure of the angle is the ratio of the length of the subtended arc to the radius of the
circle. For instance, if the length of the arc is 3 and the radius of the circle is 2 then the radian mea-
sure is 1.5.
The reason that this definition works is that the length of the subtended arc is proportional to the
radius of the circle. In particular, the definition in terms of a ratio gives the same figure as that given
above using the unit circle. This alternate definition is more useful, however, since you can use it to
relate lengths of arcs to angles. The formula for this relation is
Radian measure times radius = arc length
For instance, an arc of 0.3 radians in a circle of radius 4 has length 0.3 times 4, that is, 1.2.
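These conversions and the arc-length relation can be checked with the standard library. A short sketch:

```python
import math

rad = math.radians(60)   # 1 degree = pi/180 radians
print(round(rad, 4))     # 1.0472 (= pi/3)

deg = math.degrees(1)    # 1 radian = 180/pi degrees
print(round(deg, 2))     # 57.3

arc = 0.3 * 4            # arc length = radian measure * radius
print(arc)               # 1.2, as in the text
```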
Table 7.1 shows common angles in both degree measurement and radian measurement. Note that
the radian measurement is given in terms of π. It could, of course, be given decimally, but radian mea-
surement often appears with a factor of π.
The basic standards for angle measurement used by NPL depend either on the accurate division of
a circle or on the generation of a known angle by means of a precision sine-bar. Several methods are
available for dividing a circle but the one employed by NPL for undertaking measurements for the precision

engineering industry is based on the accurate meshing of two similar sets of uniformly spaced vee-
serrations formed in the top (rotatable) and base (fixed) members of an indexing table.

Table 7.1 Angle measurements

Angle Degrees Radians

90° π/2

60° π/3

45° π/4

30° π/6

7.3 ANGLE-MEASURING DEVICES

Protractors and angle gauges measure the angle between two surfaces of a part or assembly. Fixed angle
gauges, universal protractors, combination sets, protractor heads, sine bars, and T bevels are used for
angular measurement. Protractors and angle gauges fall under the category of measuring tools. Mea-
suring tools are instruments and fixed gauges that provide comparative and quantitative measurements
of a product or component’s dimensional, form and orientation attributes such as length, thickness,
level, plumbness and squareness. Measuring or shop tools include rules, linear scales, protractors and
angle gauges, level sensors and inclinometers, and squares and fixed gauges. Measuring tools are used
in construction and building (contractors), drafting and drawing (designers), machine shops and tool
rooms (machinists), field work (surveyors) and offices.
The types of protractors and angle gauges available include angle square, rule depth or angle gauge,
combination set or square, fixed angle gauge, protractor head, rectangular or semicircular head protrac-
tor, sine bar or block or plate, universal or sliding bevel, and universal or bevel protractor. An angle
square consists of a square with angular graduations along the longest face or hypotenuse. A rule depth
or angle gauge is a combination rule with an attachment for indicating the depth orientation of the hole
with respect to the top surface. Combination squares measure length; centre, angular or squareness
determination, and have transfer or marking capability. These multiple tasks are possible because these
sets have a series of optional heads (square, centre or protractor). Fixed angle gauges have a series of
fixed angles for comparative assessment of the angle between two surfaces. Protractor heads are an
attachment or an optional part of a combination square set. The protractor head slides onto the steel
rule and provides a tool for angular measurement or transfer. Rectangular or semicircular head protrac-
tors have long, thin, unruled blades and heads with direct reading angular graduations. Sine bars, blocks,
tables or plates are used for precision angular measurement and are used in machine shops, tool rooms

or inspection labs. Trigonometric calculations are used to determine the angles. Universal bevels, slid-
ing bevels, combination bevels, or T-bevels are used to transfer or duplicate angle measurements. Usu-
ally, bevels do not have any graduations. Universal or bevel protractors have a base arm and a blade
with a wide angular range. Bevel protractors have a graduated, angular direct reading or vernier scale
located on a large disc. Protractors and angle gauges can be level-sensing or inclinometers. Mechanical
or electronic tools indicate or measure the inclination of a surface relative to the earth’s surface usually
in reference to the horizontal (level), vertical (plumb) or both axes.
These include graduated or non-graduated audible indicators or buzzers, columns or bar graphs,
dials, digital displays, direct reading scales, remote displays, and vernier scales. Features of
protractors and angle gauges include machine or instrument mounting, a certificate of calibration,
a locking feature, marking capability, and a linear rule. Common materials of construction
for protractors and angle gauges include aluminium,
brass or bronze, cast metal or iron, plastic, fiberglass,
glass, granite, stainless steel, steel and wood.
A very wide variety of devices and sizes have been
developed to handle almost any situation, including
optical, and the newer, laser types. Some may have
measuring graduations, and movable blades and
accessories such as scribers, bevel and centre finders.
Selecting the right one can sometimes be puzzling.
Some of them are discussed as follows:

1. Protractor Refer Fig. 7.1.

(a) Protractor is the most common calibrated


device used in drawing. Although it is helpful in measuring angles with reasonable accuracy, it does not
perform well in establishing layouts for work, since it requires the use of a carefully placed and held
straight edge.

(b) Machinist’s Protractor overcomes these


difficulties and is often referred to as a bevel gauge.
Machinists use a similar tool with legs for less critical set-ups. This one has a centre finder, drill point
gauge and a 5-, 6-, 7-, 8- and 9-hole circle divider.

(c) Arm Protractor is a very handy tool to set


up and measure odd angles. It is a protractor with arms and a 10-minute vernier. By juggling the
positions, almost any type of angle can be handled.

Fig. 7.1 Arm protractor

2. Squares Since the most common angles are right or perpendicular angles, squares are the most
common devices for drawing them. These range from the small machinist to large framing or rafter
types. Among the most useful for model making is the machinist square with blades starting at 2" and
up. These are precision ground on all surfaces, any of which can be used. The inside handle corner is
relieved and the outside blade corner is notched for clearance. Although they are designed for align-
ment of machine tools and work, they fit nicely inside rolling stock and structures for squaring corners.
Do not overlook the use of bar stock, of shape similar to the handle, for tighter fits.

3. Sheetrock or Drywall Used for drywall, this tee-square type spans a 4' sheet of plywood
and has a movable cross-piece that can be turned to the marked side or to any angle for laying out
parallel yard tracks, or set straight for storage, as shown in Fig. 7.2.

Fig. 7.2 Sheetrock

4. Five in One Square To avoid using cumbersome framing squares, newer, smaller aluminum or
plastic triangular substitutes have been developed with added features. This one claims to replace a try
square, miter, protractor, line guide and saw guide. Instructions include a table for rafter settings. An
8" × 8" square is shown in Fig. 7.3. A lip on either side can help align vises and pieces on milling and
drilling tables for more critical work.

Fig. 7.3 8" × 8" square

5. Bevels and Miters Fixed common angles are usually set up with various triangles. Some may be
flat like drafting triangles, while others may have guides, similar to square handles, to align with
established references such as table edges or slots, as shown in Fig. 7.4.

Fig. 7.4 Bevels and miters

6. Universal Bevel Vernier Protractor These angular-measuring tools range from vernier
protractors reading to 5 minutes of a degree, to regular protractors reading to a degree and able to
estimate to 30 minutes. With these, all angular measurements are in degrees and minutes. Figure 7.5
explains the construction of a vernier bevel protractor.
It consists of a sliding blade (150 mm or 300 mm long), which can be set at any angle with the stock.
The angle of any position is shown by degree graduations on the scale disc, which is graduated from 0°
to 90° in either direction. An angle is read by comparing the angular (main) scale reading with the
vernier scale reading. The vernier scale has 12 divisions on each side of the centre zero, and these 12
divisions occupy the same space as 23° on the main scale; each vernier division therefore equals
23/12° = 1°55', which is 5' less than two main-scale divisions, giving a least count of 5 minutes of arc.
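The least-count arithmetic above can be checked numerically. The sketch below uses exact fractions; the gauge reading shown (41° main scale, 7th vernier line coinciding) is a hypothetical example, not from the text.

```python
from fractions import Fraction

# One vernier division spans 23/12 degrees; the least count is the amount by
# which two main-scale divisions (2 deg) exceed one vernier division.
vernier_div = Fraction(23, 12)            # degrees
least_count = Fraction(2) - vernier_div   # 1/12 degree
print(least_count * 60)                   # -> 5 (minutes of arc)

# Hypothetical reading: main scale shows 41 deg, 7th vernier line coincides.
total_minutes = 41 * 60 + 7 * int(least_count * 60)
print(divmod(total_minutes, 60))          # -> (41, 35), i.e., 41 deg 35 min
```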

Fig. 7.5 Vernier bevel protractor (blade, blade rotating nut, working edge, acute-angle attachment,
turret, scale, body, vernier scale, stock)

Fig. 7.6 Optical vernier bevel protractor (turret, eyepiece, blade, stock)

An alternative to this is the optical bevel protractor (shown in Fig. 7.6), which can read angles to
2 minutes of an arc. It consists of a glass circle graduated in divisions of 10 minutes of an arc. The
blade clamps the inner rotating member, which carries a small microscope (eyepiece) through which the
circular graduations can be viewed against the main scale. Figure 7.7 shows a further advancement, the
digital bevel protractor, which gives a digital display of the angle.

Fig. 7.7 Digital vernier bevel protractor (with digital display)

Fig. 7.8 Applications of bevel protractor



7. Combination Set Small movable combination 'squares' are useful for less critical applications,
where others will not fit. This combination set has a graduated blade, square, 45° face, centre
finder, scriber, and bubble level.

Fig. 7.9 Schematic diagram of combination set (square head with spirit level, centre head, and
graduated protractor head, all mounted on a steel rule)

Fig. 7.10 (a) and (b) Pictorial views of combination set (protractor head, centre head and square head)

8. Angle Gauges A series of fixed angles are used for comparative assessment of the angle
between two surfaces. Important specifications to consider when searching for protractors and angle
gauges include angular range and angular resolution. There are many choices for scales or displays on
protractors and angle gauges.

(a) (b)
Fig. 7.11 (a) and (b) Use of centre head and square head of
combination set respectively

Dr Tomlinson developed angle gauges in 1941. By making different permutations and combinations
of the gauge settings, an angle can be set to the nearest 3". Angle gauges are 75 mm long and 16 mm
wide, and are made of hardened and stabilized steel. The measuring faces are lapped and polished to a high degree of accuracy and
flatness. Angle gauges are available in two sets (one set is shown in Fig. 7.12). One set consists of 12
pieces along with a square block. Their values are
1°, 3°, 9°, 27° and 41°;
1', 3', 9' and 27'; and
6", 18" and 30"
The other set contains 13 pieces with values of
1°, 3°, 9°, 27° and 41°;
1', 3', 9' and 27'; and
3", 6", 18" and 30"
The angle can be built up by a proper combination of gauges, i.e., by addition or subtraction, as shown
in Figs 7.13 and 7.14. Figure 7.15 shows a square plate used in conjunction with angle gauges. All its
faces are at right angles to each other. With the help of the square plate, the range of an angle-block
set can be extended in degrees, minutes or seconds.
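Finding which gauges to add or subtract for a given angle can be sketched as a small brute-force search. The function name `find_combination` and the target of 37°9'18" are illustrative choices, not from the text; the gauge values are those of the 13-piece set listed above. Since the degree gauges only ever contribute whole degrees (and likewise for minutes and seconds), each scale can be solved independently.

```python
from itertools import product

def find_combination(gauges, target):
    """Signs for each gauge (+1 added, -1 subtracted, 0 unused) summing to target."""
    for signs in product((-1, 0, 1), repeat=len(gauges)):
        if sum(s * g for s, g in zip(signs, gauges)) == target:
            return {g: s for g, s in zip(gauges, signs) if s}
    return None

# 13-piece set: values in degrees, minutes and seconds respectively.
deg_gauges = [1, 3, 9, 27, 41]
min_gauges = [1, 3, 9, 27]
sec_gauges = [3, 6, 18, 30]

# Build a hypothetical target of 37 deg 9 min 18 sec, one scale at a time.
for gauges, value in ((deg_gauges, 37), (min_gauges, 9), (sec_gauges, 18)):
    print(value, find_combination(gauges, value))
```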

Fig. 7.12 Angle gauge block


(Courtesy, Metrology Lab Sinhgad COE., Pune)

9. Sine Bar A sine bar is a high-precision and most accurate angle-measuring instrument. It is
used in conjunction with a set of slip gauges. The bar is made of high-carbon, high-chromium,
corrosion-resistant steel, and it is hardened, ground, lapped and stabilized. It rests on two hardened
rollers of accurately equal diameters spaced at a known centre distance (options of 100 mm, 200 mm
and 300 mm) at each end.

Fig. 7.13 Addition of angle gauges (α + β)

Fig. 7.14 Subtraction of angle gauges (α − β)



During its manufacture, the various parts are hardened and stabilized before grinding and lapping.
The rollers are brought into contact with the bar in such a way that the top surface of the bar is
absolutely parallel to the centreline of the setting rollers. The holes drilled in the body of the sine bar
to make it lighter and to facilitate handling are known as relief holes. This instrument is always used on
true surfaces such as surface plates. Sine bars are available in several designs for different applications.
Figure 7.16 shows the nomenclature for a sine bar as recommended by IS: 5359–1969, and Fig. 7.17
shows the pictorial view of a sine bar of centre distance equal to 300 mm.

Fig. 7.15 Square plate

Fig. 7.16 Nomenclature for sine bar as recommended by IS: 5359–1969 (relief holes, end faces, upper
surface, setting rollers, lower face; centre distance 100, 200 or 300 mm)

Fig. 7.17 Sine bar with 300-mm centre distance between the two rollers (upper/working surface,
relief holes, setting rollers)
(Courtesy, Metrology Lab Sinhgad COE, Pune)

Principle of using Sine Bar The law of trigonometry is the basis for using a sine bar for angle
measurement. A sine bar is designed to set an angle precisely, generally in conjunction with slip gauges.
The angle is determined by an indirect method as a function of sine—for this reason, the instrument is
called a 'sine bar'. To set a given angle, one roller of the bar is kept on the datum surface (generally
the surface plate), and a combination slip gauge set is inserted under the second roller. If L is the
fixed distance between the two roller centres and H is the height of the combination slip gauge set, then

sin θ = H/L    …(i)    or    θ = sin⁻¹(H/L)    …(ii)
Thus, using the above principle, any precise angle can be set by building the height H and applying
formula (i); Fig. 7.18 explains the principle of setting an angle with a sine bar. Alternatively, an
unknown angle can be measured by measuring the height difference between the centres of the two
rollers and applying formula (ii).
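Both directions of the calculation can be sketched numerically. A 200-mm sine bar is assumed here, and the measured height of 51.764 mm is a hypothetical reading.

```python
import math

L = 200.0  # sine bar centre distance in mm (a 200-mm bar is assumed here)

# (i) Setting a desired angle: slip-gauge height to build under one roller
theta_set = 30.0                            # degrees
H = L * math.sin(math.radians(theta_set))
print(round(H, 3))                          # -> 100.0 mm

# (ii) Measuring an unknown angle from a measured height
H_measured = 51.764                         # mm (hypothetical reading)
theta = math.degrees(math.asin(H_measured / L))
print(round(theta, 2))                      # -> 15.0 degrees
```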

Fig. 7.18 Principle of using a sine bar (length L between roller centres; slip gauge set of required
height H sets the angle θ)

Figure 7.19 shows the accessories used for setting and measuring the angles, viz., slip gauges, and
dial indicator.
When the component is small and can be mounted on the sine bar then setting of instruments for
measuring unknown angles of the component surface is as shown in Fig. 7.20.
Refer to Fig. 7.20. The height of the slip gauges is adjusted until the dial gauge reads zero at both ends
of the component, and the actual angle is then calculated using formula (ii).
When the component is large in size or heavy, the component is placed on the datum surface and the
sine bar is placed over the component, as shown in Fig. 7.21. The height over the rollers is measured
using a height gauge. A dial test gauge is mounted (on the slider instead of a blade) on the height gauge
as a fiducial indicator to ensure constant measuring pressure. This could be achieved by adjusting the
height gauge until the dial gauge shows the same zero reading each time.

Fig. 7.19 Set of sine bar, slip gauge and dial indicator with stand
(Courtesy, Metrology Lab Sinhgad COE, Pune)

Fig. 7.20 Sine bar used for a small component (angle plate, dial gauge, component, slip gauge set of
required height H for angle θ)



Fig. 7.21 Angle measurement using sine bar and vernier height gauge (dial gauge readings taken at
positions 1 and 2 over the rollers, sine bar resting on the component)

Note down the two readings for the two roller positions shown in Fig. 7.21. If H is the difference in
the heights and L is the distance between the two roller centres of the sine bar, then the

angle of the component surface = θ = sin⁻¹(H/L)

Other Aspects of Use of Sine Bar To measure and/or set angles accurately, the sine bar itself
must be accurate, which requires the following geometrical features of construction:

i. The axis of the rollers must be parallel to each other and the centre distance L must be precisely
known; this distance specifies the size of the sine bar.
ii. The rollers must be of identical diameters and round within a close tolerance.
iii. The top surface of the sine bar must have a high degree of flatness and it should be parallel to
the plane connecting the axis of the rollers.

The accuracy requirement and tolerance specified by IS: 5359–1969 for a 100-mm sine bar are as
follows:

i. Flatness of upper and lower surface = 0.001 mm


ii. Parallelism of upper and lower surfaces w.r.t. datum surface when resting on it = 0.001 mm
iii. Flatness of side faces = 0.005 mm
iv. Squareness of side faces to upper surface = 0.003/25 mm
v. Parallelism of side faces to axis of rollers = 0.01/25 mm
vi. Flatness of end faces = 0.03 mm
vii. Squareness of end faces to the upper faces = 0.003/25 mm
viii. Parallelism of end faces to axis of rollers = 0.01/25 mm
ix. Straightness of individual rollers, freedom from lobing and uniformity in diameter =
0.002 mm
x. Mean diameter of rollers = 0.002 mm
xi. Distance between roller axis = ± 0.003 mm
xii. Flatness of bearing surface of the setting foot = 0.003 mm

Any deviation of the sine bar from the specifications mentioned above may lead to an error in
angular measurement. Sources of error in the sine bar therefore include error in the distance between
the two rollers; error in parallelism of the upper and lower surfaces w.r.t. the datum surface when
resting on it, and also w.r.t. the plane of the roller axes; error in equality of the roller sizes and their
cylindricity; error in parallelism between the two roller axes; error in flatness of the upper surface; and
error in the slip-gauge combination or its wrong setting w.r.t. the sine bar. Error also arises from
measuring high angles themselves: as the angle increases, the error due to the combined effect of the
centre-distance and gauge-block accumulated tolerances increases. Below 45°, this type of error is
small. Hence, sine bars are not recommended for measuring or setting angles larger than 45°.
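The 45° recommendation follows from differentiating sin θ = H/L, which gives dθ = dH/(L·cos θ): the same small height error produces a larger angular error as θ grows. The sketch below illustrates this; the centre distance and the 0.003-mm combined error are assumed values for illustration only.

```python
import math

# Differentiating sin(theta) = H/L gives dtheta = dH / (L * cos(theta)):
# the same small height error dH causes a larger angular error as theta grows.
L = 200.0     # centre distance, mm
dH = 0.003    # assumed combined height/centre-distance error, mm

errors = {}
for theta_deg in (15, 30, 45, 60, 75):
    dtheta = dH / (L * math.cos(math.radians(theta_deg)))   # radians
    errors[theta_deg] = math.degrees(dtheta) * 3600         # arc-seconds
    print(theta_deg, round(errors[theta_deg], 1))
```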

10. Sine Centre These are used in situations where it is difficult to mount the component on the
sine bar. Figure 7.22 shows the construction of a sine centre. The equipment itself consists of a sine
bar, which is hinged at one roller end and mounted on the datum surface.
Two blocks mounted on the top surface of the sine bar carry two centres and can be clamped at any
position along the bar. These two centres can be adjusted depending upon the length of the conical
component. The measuring procedure is the same as for a sine bar. Figure 7.23 shows the use of a sine
centre for measuring the included angle of a taper plug gauge.
Apart from the sine bar and sine centres, sine tables are also used to measure angles, specifically
compound angles. These are used for radial as well as linear measurement.

11. Vernier Clinometer Figure 7.24 explains the constructional details of a vernier clinometer.
It consists mainly of a spirit level mounted on a rotating member, which is hinged at one end in hous-
ing. One of the faces of the right-angle housing forms the base for the instrument. This base of the

Fig. 7.22 Sine centre (blocks, centres, support, rollers; 200/300-mm size)

Fig. 7.23 Measurement of included angle of taper plug gauge using sine centre (conical work, slip
gauges, roller pivot, datum surface)

instrument is placed on the surface whose angle is to be measured. The rotary member is then rotated
and adjusted until the zero reading of the bubble in the spirit level is obtained. The angle of inclination
of the rotary member relative to the base is then read on a circular scale, fixed on the housing, against
an index.

Fig. 7.24 Vernier clinometer (circular scale, spirit level, housing, hinge, rotating member, base)

A further modification in the Vernier clinometer is the micrometer clinometer (refer Fig. 7.25). It
consists of a spirit level whose one end is attached to the barrel of a micrometer and the outer end is
hinged on the base. The base is placed on the surface whose angle is to be measured. The micrometer
is adjusted till the level is horizontal. It is generally used for measuring small angles.

Fig. 7.25 Micrometer clinometer (spirit level, base hinge, micrometer gauge)

Other types of clinometer are the dial clinometer and the optical clinometer, which use the same
working principle as the bevel protractor (and optical bevel protractor). The whole angle can be
observed through an opening in the dial on the circular scale, and the fraction of an angle can be read
on the dial. In the case of an optical clinometer, the reading is taken by a measuring microscope on a
graduated scale provided on a fixed circular glass disc. With this instrument, angles can be read to 1'.

12. Autocollimator An autocollimator is used to detect and measure small angular tilts of a
reflecting surface placed in front of the objective lens of the autocollimator. Ideally, the area of
the reflecting surface should be at least equal to the area of the objective lens. However, this is not
generally the case when the autocollimator is used in conjunction with angle gauges or a polygon.
Therefore, since the objective lenses fitted to most commercial instruments have small but significant
wavefront errors, it is important to position the autocollimator so that its optical axis passes through
the centre of the reflecting face of the angle gauge or polygon, reducing the effect of wavefront errors
to a minimum. Figure 7.26 explains the working principle of an autocollimator.

Fig. 7.26 Principle of the autocollimator (a tilt θ of reflecting mirror R displaces the image of the
source by X = 2fθ)

If a parallel beam of light is projected from the collimating lens and if a plane reflector R is set up
normal to the direction of the beam, the light will be deflected back along its own path and will be
brought to a focus exactly at the position of the light source. If the reflector is tilted through a small
angle θ, the parallel beam will be deflected through twice the angle, and will be brought to a focus in
the same plane as the light source but to one side of it. The image will not coincide with the source;
there will be a distance X = 2fθ between them, where f is the focal length of the lens.
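The relation X = 2fθ (θ in radians) converts directly between image displacement and reflector tilt. In the sketch below, the 250-mm focal length and the observed displacement are assumed example values, not figures from the text.

```python
import math

f = 250.0   # assumed focal length of the objective lens, mm

# A 10 arc-second tilt of the reflector displaces the image by X = 2*f*theta
theta = math.radians(10 / 3600)     # 10" in radians
X = 2 * f * theta
print(round(X * 1000, 2))           # displacement in micrometres

# Conversely, an observed displacement gives the tilt directly
X_obs = 0.050                       # mm (hypothetical)
tilt_arcsec = math.degrees(X_obs / (2 * f)) * 3600
print(round(tilt_arcsec, 1))        # -> 20.6 arc-seconds
```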
The autocollimator should be rotated about its optical axis, if such a provision exists, until a move-
ment of the reflected image perpendicular to the direction of measurement produces no change of
reading. For photoelectric autocollimators, this condition should be achieved using the photoelectric
detector.
Modern autocollimator designs eliminate the confusing method of viewing the object wires directly
through the microscope. Figure 7.27 shows the graticule situated to one side of the instrument (along
the axis perpendicular to the main axis). A transparent

Fig. 7.27 Construction of autocollimator (lamp, target graticule, objective lens, micrometer, beam
splitter, reflector, setting screw)

beam splitter reflects the light from the graticule towards the objectives, and thus the microscope forms
no direct image. The image formed after reflection, whose angular variations are being measured, is
formed by the light from the objective, which passes through the 45° beam splitter, and this image is
picked up by the microscope. In this type of autocollimator, the microscope is fitted to a graticule opti-
cally at right angles to the eyepiece graticule. One of the important advantages of using an autocollima-
tor is that the instrument can be used at a considerable distance away from the reflector. In Fig. 7.28,
the set-up to measure the angular tilt in a horizontal plane (observe the direction of the curved arrow)
is shown. This set-up can also be used for measuring the flatness and straightness of the surface on
which the reflecting mirror is kept as a reflecting plane.
An autocollimator should ideally be used in an environment where air currents in the optical path
between the autocollimator and the reflecting surface are minimal. Such air currents, by introducing
changes in density and, therefore, of refractive index, produce random movements of the observed
image, impairing the accuracy of the autocollimator setting. For this reason, the distance between the
objective lens and the reflecting surface should be kept to a minimum and, where practicable, the light
path should be shielded from the surrounding air.
Calibration of an autocollimator can be made using the NPL-designed small angle generator.
In the case of visual and photoelectric-setting type autocollimators, small angles are generated to
check the periodic and progressive errors of the micrometer screw which enables the displace-
ment of the reflected image of the target cross-lines to be measured. In the case of automatic
position-sensing electronic autocollimators, the calibration points will have to be agreed upon with
the customer.

Fig. 7.28 Autocollimator set-up with square prism and mirror to measure small angular tilts in the
horizontal plane
(Courtesy, Metrology Lab Sinhgad COE, Pune)

Measurement Uncertainties
• Visual setting autocollimator: ±0.3 second of an arc over any interval up to 10 minutes of an arc
• Photoelectric setting autocollimator: typically ±0.10 second of an arc over any interval up to
10 minutes of an arc
• Automatic position-sensing electronic autocollimator: typically ±0.10 second of an arc over any
interval up to 10 minutes of an arc

A combined effort of Renishaw and NPL has resulted in a small-angle generator with an uncertainty
of ±0.01 second of an arc over a range of 60 seconds of an arc. The equipment shown in Fig. 7.29 has
now been installed at NPL, where it has been evaluated and is being used to provide a calibration
service for high-accuracy autocollimators.

Fig. 7.29 Autocollimator

The service has the following advantages over the previous service: (1) calibrations can be made at
any number of user-defined calibration points; and (2) improved measurement uncertainty.
The system has a total operating range of ±10 degrees but with an increased and yet to be quantified
measurement uncertainty.

13. Precision Angular Measurement

Case Study Generation of angles by indexing tables is achieved by the meshing of two similar
sets of serrations. Calibration of such tables submitted for test is effected by mounting the table
under test on top of one of the NPL indexing tables and using a mirror-autocollimator system to
compare angles generated by the table under test with similar angles generated by the NPL table. For
the purpose of assessing the accuracy of performance of the serrated type of table, it is considered
sufficient to intercompare each successive 30-degree interval with the NPL table, thus providing 144
comparative measurements. The small angular differences between the two tables are measured by a
photoelectric autocollimator capable of a discrimination of 0.02 second of an arc. A shortened test
may be applied to indexing tables, which have a reproducibility of setting significantly poorer than
the NPL tables, that is, greater than 0.05 second of an arc. For such tables, three sets of measure-
ments of twelve consecutive 30-degree rotations of the table under test are compared with the NPL
table. Between each set of measurements, the test table is moved through 120 degrees relative to the
NPL table.
The uncertainty of measurement is largely dependent on the quality of the two sets of serra-
tions. The criterion for assessing this quality is to check the reproducibility of angular positions of
the upper table relative to the base. Indexing tables similar to the NPL tables will normally repeat
angular positions to within 0.02 to 0.05 second of an arc. The uncertainty of measurement for the
calibration of these tables, based on 144 comparative measurements, is ±0.07 second of an arc.
Indexing tables having a slightly lower precision of angular setting, say between 0.05 and 0.2 second
of an arc, are calibrated by making 36 comparative measurements and the uncertainty of measurement
of the calibrated values will be between ±0.25 and ±0.5 second of an arc.

(a) (b)
Fig. 7.30 Indexing table

The basic standards for angle measurement used by NPL depend either on the accurate divi-
sion of a circle or on the generation of a known angle by means of a precision sine-bar. Several
methods are available for dividing a circle, but the one employed by NPL for undertaking measure-
ments for the precision engineering industry is based on the accurate meshing of two similar sets
of uniformly spaced vee-serrations formed in the top (rotatable) and base (fixed) members of an
indexing table.
NPL possesses two such indexing tables—one having 1440 serrations and the other 2160 ser-
rations—thus providing minimum incremental angles of 15 and 10 minutes of an arc respectively
throughout 360 degrees. The 1440 table is fitted with a sine-bar device, which enables the
15-minute of an arc increment to be subdivided to give a minimum increment of 0.1 second of an
arc. The accuracies of the NPL master indexing tables are checked by comparison with a similar
table. An essential accessory for the application of these indexing tables to the measurement of
the angle is an autocollimator or some other device for sensing the positions of the features that
define the angle to be calibrated. The autocollimator is used to measure the small angular differ-
ence between the item under test and the approximately equal angle generated by the serrations
of the indexing table.
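The minimum increments quoted above follow directly from the serration counts, as this quick check shows:

```python
# Minimum incremental angles of the two NPL serrated indexing tables
increments = {}
for serrations in (1440, 2160):
    increments[serrations] = 360 * 60 / serrations   # minutes of arc
    print(serrations, increments[serrations])        # -> 15.0, then 10.0
```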
The autocollimator is set to view either a mirror fixed to the upper (rotatable) member of the
indexing table or the reflecting surfaces of the item under test, e.g., the faces of a precision polygon
or an angle gauge. The settings of the table are made approximately and the small angular deviations
between the angle generated by the table and the angle of the test piece are measured by the autocol-
limator.
Angles generated using sine functions are realized by means of an NPL Small Angle Generator
designed to operate over a range of ±1 degree and intended primarily for the calibration of auto-
collimators and other instruments which measure small angular changes in the vertical plane. This
instrument is essentially a sine-bar, which can be tilted about the axis of a cylindrical shaft fitted
at one end of the bar. Predetermined angular tilts of the sine-bar are effected by inserting gauge
blocks between a ball-ended contact fitted to the sine-bar and a fixed three-point support platform.
The perpendicular separation of the axis of the cylindrical shaft and the centre of the ball-ended
contact of the sine-bar is 523.912 6 mm. Thus a vertical displacement of the ball-ended contact of
0.002 54 mm produces an angular change of the sine-bar of 1 second of an arc throughout a range
of ±10 minutes of an arc. (The normal measuring range of an autocollimator is ±5 minutes of an
arc.) The uncertainty of the NPL small-angle generator is estimated to be ±0.03 second of an arc for
angles in the range of ±10 minutes of an arc. A fused silica reflector, of 75-mm diameter, is mounted
on the sine-bar over its tilt axis and is viewed by the autocollimator under test. This reflector is flat
within 0.01 μm and is used for checking the flatness of the wavefront of the autocollimator.
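The quoted small-angle generator dimensions can be verified against the sine-bar formula: a 0.002 54-mm displacement over the 523.912 6-mm effective length should tilt the bar by very nearly 1 second of arc.

```python
import math

L_sag = 523.9126                 # effective sine-bar length, mm (quoted above)
dh = 0.00254                     # gauge-block displacement, mm (quoted above)
tilt_arcsec = math.degrees(math.asin(dh / L_sag)) * 3600
print(round(tilt_arcsec, 3))     # -> 1.0 second of arc
```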
When new steel polygons and angle gauges are submitted for certification, written evidence is
required from the manufacturer to show that they have received a recognized form of heat treatment
to ensure dimensional stability.
Although there are many different types of industrial requirements involving accurate angular
measurement, only the types of work listed below are normally dealt with by NPL. However, other

special-purpose standards or components requiring precise measurement of angle will be considered
on application to NPL.

14. Angle Dekkor The angle dekkor is a type of autocollimator. Though it is not as sensitive, it is
extremely useful for a wide variety of short-range angular measurements. Its pictorial view is shown in
Fig. 7.31. It consists of a microscope, a collimating lens, a glass scale engraved with two scales, an
objective lens, an eyepiece, and a lapped, flat and reflective base above which all these optical elements
of the instrument are mounted. The whole optical system is enclosed in a tube which is mounted on an
adjustable table. Figure 7.32 explains the optical system of an angle dekkor.
An illuminated scale is set in the focal plane of the collimating lens, outside the view of the
microscope eyepiece. Light rays are then projected as a parallel beam and strike the plane (on the base)
below the instrument.

Fig. 7.31 Set of angle dekkor (microscope eyepiece, angle slip-gauge, base)
(Courtesy, Metrology Lab Sinhgad COE, Pune)

Fig. 7.32 Optical system of angle dekkor (microscope eyepiece, prism, lamp, glass scale, illuminated
scale, datum scale, collimating lens, workpiece; converging rays form the reflected image of the scale
on the screen)



Fig. 7.33 Fixed scale and reflected image of the illuminated scale

The reflected image is refocused by the lens in such a way that it comes into the view of the eyepiece.
This image can be seen on a glass scale placed in the focal plane of the objective lens. It falls not across
a simple datum line, but across a similar fixed scale at right angles to the illuminated image. The
reading on the illuminated scale measures angular deviations about one axis at 90° to the optical axis,
and the reading on the fixed scale gives the deviation about an axis mutually at right angles to the
other two.
This enables the reading on a setting 'master' to be established. The master may be a sine bar or a
combination of angle gauges set up on a base plate, and the instrument is adjusted until a reading on
both scales is obtained. Then the 'master' is replaced by the work, and a slip gauge is placed on the
surface of the workpiece to provide a good reflective surface. The work is now rotated until the fixed-
scale reading is the same as that obtained with the setting gauge. The difference between the two
readings on the illuminated scale is the error in the work-surface angle.

Review Questions

1. List the various instruments used for angle measurement and explain angle gauges.
2. Explain the construction and use of a vernier and optical bevel protractor.
3. What is a sine bar? Explain the procedure to use it using a sketch.
4. Discuss the limitations of the use of a sine bar.
5. Explain different types of sine bars with sketches.
6. The angle of a taper plug gauge is to be checked using angle gauges and the angle dekkor. Sketch
the set-up and describe the procedure.

7. Write short notes on (a) Vernier bevel protractor (b) Autocollimator (c) Sine bar (d) Angle
dekkor.
8. Describe and sketch the principle of working of an autocollimator and state its applications.
9. Discuss the construction and use of a vernier and micrometer clinometer.
10. What are angle gauges? Explain with suitable examples how they are used for measuring angles.
11. Explain the construction, working and uses of the universal bevel vernier protractor.
12. Sketch two forms of a sine bar in general use. Explain the precautions to be taken while using it to
measure angles.
13. Write a technical note on angle gauge blocks, specifying their limitations. Also explain to what
accuracy angles can be generated with angle blocks.
14. Describe the principle of an angle-dekkor and mention its various uses.
8 Interferometry

“The sizes of end standards can also be determined by interferometry principles very accurately…”
Prof. M G Bhat, Professor Emeritus and Technical Director, Sinhgad College of Engineering,
Pune University, Pune, India

UNDERSTANDING INTERFEROMETRY

In recent years, industrial demand has resulted in a number of innovations in interferometry. Simultaneously, advances in basic science have posed new requirements for measurements with very low uncertainty.

To understand the interferometry phenomenon, it is necessary to study the nature of light. To observe the phenomenon of continuous interference of light waves, the two light sources should be coherent, they should be very narrow, and the sources emitting a set of interfering beams should be very close to each other. A monochromatic light source is allowed to fall on an optical flat, which is placed on the surface under test to get fringe patterns. These interference fringe patterns are used to measure flatness by comparing the fringe pattern obtained from the master flat surface of known flatness with that from the surface under test. The NPL Flatness Interferometer is one of the most popular types of interferometers used for flatness testing.

In the testing of optical components and optical systems, there are many requirements of precision and accuracy, measurement time, ease of use, dynamic range, and environmental conditions. Nowadays, interferometry techniques in conjunction with modern electronics, computers, and software are used as an extremely powerful tool in many fields, including the testing of optical components and optical systems.

8.1 INTRODUCTION

Huygens’ theory proposes that light is a wave motion propagated in ether as an electromagnetic wave of sinusoidal form. The maximum disturbance of the wave is called the amplitude, and the number of waves passing a given point per second is called the frequency. The higher points of a wave are called crests and the lower points are
called troughs. The distance between two successive crests (or troughs) is called the wavelength (refer Fig. 8.1). The time taken
by light in covering one wavelength is called the time period. The establishment of size accurately in relation
to national and international standards of length is of fundamental importance for achieving

Fig. 8.1 Light wave travels along axis OA (the figure marks the crest, trough, amplitude and wavelength λ)

dimensional accuracy of a product. This wave nature of light is not apparent under ordinary conditions
but when two waves interact with each other, the wave effect is visible and it can be made useful for
measuring applications. For example, when light is made to interfere, it produces a pattern of dark bands
which corresponds to a very accurate scale of divisions. The particular characteristic of this entity is the
unit value of the scale, which is exactly one-half wavelength of light used. As this length is constant, it
can be used as a standard of measurement. The use of interferometry technique enables the determina-
tion of size of end standards (slip gauges and end bars) directly in terms of wavelength of light source
whose relationship to the international wavelength standard is known to a high degree of accuracy. The
subsidiary length standards, which include workshop and inspection slip gauges, setting meters, etc., are
calibrated with the help of interferometrically calibrated reference-grade slip gauges for retaining accuracy.
The French physicist Babinet suggested in 1829 that light waves could be used as a natural standard of
length. Later, a great deal of research was carried out along similar lines on the use of interferom-
etry techniques, culminating in the definition of end standards such as the yard and the metre in terms of
the wavelength standard in 1960, when the wavelength of orange light from the krypton-86 spectrum was used.

8.2 MONOCHROMATIC LIGHT AS THE BASIS OF INTERFEROMETRY

White light is the combination of all the colours of the visible spectrum: red, yellow, orange, green,
blue, violet and indigo, and each of these colour bands consists of a group of similar wavelengths.
Therefore, white light, being a combination of all the wavelengths of the visible spectrum, is not suitable for
interferometry. To overcome this difficulty, monochromatic light sources like mercury, mercury 198,
cadmium, krypton, krypton-86, thallium, sodium and laser beams are used. Such a source produces
rays having a single frequency and wavelength, which provides advantages like reproducibility, a higher
accuracy of about one part in one hundred million, a specific and precise wavelength value, and vir-
tual independence of ambient conditions.

8.3 THE PRINCIPLE OF INTERFERENCE

Figure 8.2 explains the effect of the combination of two light rays A and B which are of the
same wavelength. When they happen to be in phase, the result is an increased amplitude, called the resultant

Fig. 8.2 Effect of combination of two monochromatic rays of equal amplitude, which are in phase
aA = Amplitude of wave A, aB = Amplitude of wave B, aR = aA + aB = Resultant amplitude (R)

amplitude. It is the addition of the amplitudes of the combined rays. Hence, if two rays of equal
intensity are in phase, they augment each other and produce increased brightness. If rays A and B
differ in phase by 180°, the combined result R will be very small, and may be zero if the amplitudes
aA and aB are equal. Therefore, if two rays of equal intensity differ in phase by λ/2, they nullify each
other and result in darkness.
The above discussion reflects that interference can occur only when two rays are coherent, that is,
their phase difference is maintained for an appreciable length of time. This could be possible only when
the two rays originate from the same point of light source at the same time.

Fig. 8.3 Effect of combination of two monochromatic rays of equal amplitude, which are out of phase

The procedure for the production of an interference band is as follows.

i. Monochromatic light is allowed to pass through a very narrow slit (S), and is then allowed to pass
through two other narrow slits (S1) and (S2), which are very close to each other.
ii. Two separate sets of rays are formed which pass through one another in the same medium.
iii. If the paths S1L2 and S2L2 are exactly equal, the rays along these paths will be in phase, which results
in constructive interference, producing maximum intensity or a bright band. The phenomenon
remains the same for L1 and L3.

iv. If at the point D1 the ray path difference is equal to half the wavelength (S2D1 − S1D1 = λ/2), it results in an out-of-phase condition producing zero intensity or a dark band, due to destructive interference. The phenomenon remains the same for D2.
v. Thus, a series of bright and dark bands is produced. The dark bands are called interference fringes. The central bright band is flanked on both sides by bands which are alternately of minimum and maximum intensities and are known as interference bands.

Fig. 8.4 Way of producing an interference pattern (monochromatic light from slit S passes through slits S1 and S2; light bands form at L1, L2 and L3, and dark bands at D1 and D2, on the screen)
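The bright/dark rule in steps (iii) and (iv) can be sketched in a few lines; the function name and the sodium wavelength used here are illustrative, not from the text.

```python
# A minimal sketch (not from the text): equal paths, or a whole-wavelength
# path difference, give constructive interference (a bright band); an odd
# number of half-wavelengths gives destructive interference (a dark band).

def band_type(path_difference, wavelength, tol=1e-6):
    """Classify a screen point from the path difference of the two rays."""
    half_waves = path_difference / (wavelength / 2.0)
    nearest = round(half_waves)
    if abs(half_waves - nearest) > tol:
        return "intermediate"        # neither fully bright nor fully dark
    # an even count of half-wavelengths means the rays arrive in phase
    return "dark" if nearest % 2 else "bright"

wavelength = 0.5893                  # sodium light, micrometres (assumed source)
print(band_type(0.0, wavelength))              # equal paths, as for L2
print(band_type(wavelength / 2, wavelength))   # lambda/2 difference, as for D1
```

Running it classifies the equal-path point as "bright" and the half-wavelength point as "dark", matching steps (iii) and (iv).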

8.4 INTERFERENCE BANDS USING OPTICAL FLAT

Another simple method of producing interference fringes is by illuminating an optical flat over a plane-
reflecting surface. An optical flat is a disc of glass or quartz whose faces are highly polished and flat within a
few microns. When it is kept on a nearly flat reflecting surface, dark bands can be seen. These are cylindrical pieces
whose diameters range from 25 mm to 300 mm with the thickness of 1/6th of the diameter. For measuring
flatness, in addition to an optical flat, a monochromatic light source is also required. The yellow – orange
light radiated by helium gas can be satisfactorily used. Such an arrangement is shown in Fig. 8.5. Optical
flats are of two types, namely, Type A and Type B. A Type-A optical flat has a single flat surface and is used
for testing precision measuring surfaces, e.g., surfaces of slip gauges, measuring tables, etc.
A Type-B optical flat has both the working surfaces flat and parallel to each other. These are used
for testing the measuring surfaces of instruments like micrometers, measuring anvils and similar other
devices for their flatness and parallelism.
As per IS 5440–1963, optical flats are also characterized by the grades as their specifications: Grade 1
is a reference grade whose flatness is 0.05 micron and Grade 2 is used as a working grade with tolerance
for flatness as 0.10 micron.

8.4.1 Designations of Optical Flats


A Grade 1, Type A optical flat of 25-mm diameter, conforming to the specifications laid down in IS 5440,
is designated as Optical Flat IA 25 – IS: 5440. Similarly, a Grade 2, Type B optical flat of 12.125-mm
thickness is designated as II B 12.125 – IS: 5440. Generally, an arrow is made on the flat to indicate
the finished surface. Sometimes these optical flats are coated with a thin film of titanium oxide, which
reduces loss of light due to reflection to get more clear bands. These optical flats are used in constant
temperature environment and handled with proper care.
An optical flat is used for testing flat surfaces. Consider the case when an optical flat is kept
on the surface of a workpiece whose flatness is to be checked and, due to some reason

Fig. 8.5 Monochromatic light source set-up along with an optical flat and the surface under test (a flat job kept on the optical flat) (Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India)

(like the surface being convex/concave or cylindrical, or because of some foreign material present in between the surface under test and the bottom surface of the optical flat), it cannot make an intimate contact and rests at some angle ‘θ’. In this situation, if the optical flat is illuminated by monochromatic light, interference fringes will be observed (refer Fig. 8.5 for the set-up). These are produced by the interference of light rays reflected from the bottom surface of the optical flat and the top surface of the workpiece under test, as shown in Figs 8.6 and 8.7, through the medium of air.

Fig. 8.6 Application of monochromatic interference method

In Fig. 8.6, the optical flat is shown at a much exaggerated inclination with the test surface, where the air-space distances differ by one-half-wavelength intervals. Dark bands indicate the curvature of the workpiece surface. Referring to Fig. 8.6, the bands are represented by B and the mean spacing by A. The amount X by which the surface is convex or concave (as in the present instance) over the diameter D of the optical flat is given by

X = (B/A) × (λ/2)

Taking the wavelength as 0.000022 inches, λ/2 = 0.000011 inches, so that X ≈ 0.00001 × (B/A) inches. Thus, if B is one-quarter of A, it indicates that the surface is concave by about 0.0000028 inches over the diameter D. If mercury green light is used for the monochromatic bands, the corresponding wavelength value is 21.50 micro-inches. This phenomenon is explained in detail as follows.
A wave from a monochromatic light source L is made incident on the optical flat (refer Fig. 8.7)
placed on the surface under test. Part of the wave is reflected from a point a on the bottom
surface of the optical flat, and part is reflected from the point b on the top surface of the surface
under test, through the entrapped air. These two components of reflected light are recombined at
the eye. The rays differ in path by the length abc. The rays emerging at points x and y, which have slightly different directions, can be brought together by an optical instrument or the eye.
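The relation X = (B/A) × (λ/2) given for Fig. 8.6 can be sketched numerically; the 0.000022-inch wavelength is the value quoted in the text, while the function and sample values are illustrative.

```python
# Sketch of X = (B/A) * (lambda/2): the surface error over the diameter of
# the flat is the fringe-curvature fraction B/A times half the wavelength.

def flatness_error(b, a, wavelength):
    """b = band curvature, a = mean band spacing (same units); returns X."""
    return (b / a) * (wavelength / 2.0)

lam_in = 0.000022                    # wavelength in inches (value from the text)
x = flatness_error(b=1.0, a=4.0, wavelength=lam_in)   # B is one-quarter of A
print(x)                             # about 2.75e-06 in, i.e. roughly 0.0000028 in
```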

Fig. 8.7 Production of interference fringes using an optical flat (rays from the source L are partly reflected at the bottom surface of the flat and partly at the reflecting surface under test; air gaps of λ/4 and 3λ/4 are marked)

If the length abc is equal to λ/2 (where λ is the wavelength of the light source), then a dark band is
seen. A similar situation occurs at all points like b which lie in a straight line across the surface being
checked, and due to this a straight dark band can be seen.

Fig. 8.8 Alternate dark and bright bands (dark bands at the points b, e and g across the surface, each corresponding to a further λ/2 of air gap)

Similarly, at another point along the surface the ray L again splits up into two components whose
path difference length def is an odd number of half-wavelengths and the rays from d and f interfere to
cause darkness. The second dark band is shown by the point e (refer Fig. 8.8).
The amount of inaccuracy of a surface tested by the optical-flat method can readily be estimated
by measuring the distance between the bands; thus there will be a surface inaccuracy of 0.00001 inches
over the distance of each consecutive band. For accurate measurements, the distance between the
colour fringes should be taken from the dark centre or from the edge of the red colour, nearest the
centre of the colour fringe.
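The estimate above (each consecutive band representing a further half-wavelength of air gap, about 0.00001 inches) can be sketched as follows; the band count used is illustrative.

```python
# Sketch: each successive dark band corresponds to a further half-wavelength
# of separation between the flat and the surface, so counting the bands
# across a surface gives its total out-of-flatness directly.

def out_of_flatness(num_bands, wavelength):
    """Total height variation covered by num_bands consecutive bands."""
    return num_bands * wavelength / 2.0

lam_in = 0.000022                    # wavelength in inches (value from the text)
print(out_of_flatness(3, lam_in))    # three bands give about 0.000033 in of error
```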

8.5 EXAMPLES OF INTERFERENCE PATTERNS

The development of a typical type of interference pattern mainly depends upon the relationship of the
geometry of the surface and the position of the optical flat. The following are some of the interference
patterns in different situations. (See Fig. 8.10(a), Plate 7.)

Fig. 8.9 Interference patterns obtained at different positions of an optical flat (nine panels, each showing fringe sets marked A, B and C)



Table 8.1 Descriptions of interference patterns

(1) Perfectly flat surface, but the contact is not good.

(2) The lower surface under test is either convex or concave near the lower right-hand edge. The nature of the curvature can be ascertained by the fingertip pressure test∗.

(3) This pattern of fringes denotes either a ridge or a valley; the fingertip pressure test∗ can be used to conclude which condition exists.

(4) As there are no colour fringes, this figure shows a perfectly flat surface under test.

∗In order to determine whether the surface is convex or concave, it must be pressed with the fingertip at the centre of the rings. If the colour fringes move away from the centre, it indicates convexity; and if they move in towards the centre, the surface is concave. Some such examples are shown as follows:

Fig. 8.10(b) Different interference patterns (i)–(viii)

Refer Fig. 8.10(b); its explanation is as follows:


i. There are two hills, one at the upper end of the left hand-side and the other at the right-hand
bottom end.
ii. The test surface is either concave or convex.
iii. Rounded vertical edges
iv. Rounded horizontal edges
v. Gauge block face is not set parallel to the surface of the optical flat.
vi. This fringe pattern shows depression in the middle region.
vii. Partly flat and hollow
viii. Partly flat and sloping at the top side of the right-hand end.

As the inclination between the optical flat and the test surface increases, the fringes are brought closer together; and as the
inclination reduces, the fringe spacing increases and the fringes become nearly parallel.
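The behaviour just described follows the standard air-wedge relation s = λ/(2θ), which the text does not derive; the sketch below only illustrates the trend, with an assumed sodium wavelength and illustrative angles.

```python
# Standard air-wedge relation (not derived in the text):
# fringe spacing s = lambda / (2 * theta), theta in radians,
# so a larger inclination produces closer fringes.

def fringe_spacing(wavelength, theta_rad):
    """Spacing between successive fringes for a wedge of angle theta."""
    return wavelength / (2.0 * theta_rad)

lam_mm = 0.0005893                        # sodium light in mm (assumed source)
wide = fringe_spacing(lam_mm, 0.0001)     # small inclination -> wide fringes
narrow = fringe_spacing(lam_mm, 0.0004)   # larger inclination -> closer fringes
print(wide > narrow)                      # True
```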

8.6 NPL FLATNESS INTERFEROMETER

Figure 8.11 shows the optical arrangement for an NPL flatness interferometer. This instrument was
first constructed by NPL and manufactured commercially by Hilger and Watts Ltd. The flatness of the
surface under test is measured by comparing it with an optically flat surface, which is generally the
base plate of the interferometer. Hence, it works on the principle of absolute measurement. Either cad-
mium or mercury 198 is used as a monochromatic source of light. Each of these gives four wavelengths
(with cadmium): red, green, blue, violet; and (with mercury): green, violet and two yellows.
The whole instrument is built on a single rigid casting, and the given specimen (e.g., a gauge) under
test is completely enclosed to minimize the effects of temperature variations. In this instrument
(the simplest form of NPL interferometer), a mercury lamp is used as the light source, whose radiations are
made to pass through a green filter, which in turn transmits green monochromatic light.
This light is focused on to a pinhole, giving an intense point source of monochromatic light. This
pinhole is in the focal plane of the collimating lens and is thus projected as a parallel beam of light.
The wavelength of the resulting monochromatic radiation is of the order of 0.5 microns.

Fig. 8.11 NPL flatness interferometer (a mercury vapour light source, condensing lens, green filter and pinhole feed a glass-plate reflector set at 45 degrees; a collimating lens sends parallel rays through an optical flat onto the gauge and surface under test, wrung on the base plate)



Now, the beam is directed on to the gauge under test, which is wrung on the base plate, via an optical
flat in such a way that interference fringes are formed across the face of the gauge. The fringes obtained
can be viewed from directly above by means of a thick glass plate semi-reflector set at 45º to the optical axis.
The various results can be studied for comparison.
In the case of large-length slip gauges, the parallelism of the surfaces can also be measured by placing the
gauge on a rotary table in a specific position and taking reading number 1. The number of fringes
obtained is the result of the angle that the gauge surface makes with the optical flat, and this number is
noted. Then the table is turned through 180° and reading number 2 is taken; the fringes are again
observed and their number noted. The error in parallelism is then obtained by the following
calculation.
The change in distance between the gauge and the optical flat = λ/2.

Error in parallelism = (n2 − n1) × λ/4

where n1 = number of fringes in the first position, and n2 = number of fringes in the second position.
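The parallelism formula can be sketched with illustrative fringe counts and an assumed mercury-green wavelength; neither value is taken from the text.

```python
# Worked sketch of the parallelism check: readings are taken in two
# positions 180 degrees apart, and error = (n2 - n1) * lambda / 4.

def parallelism_error(n1, n2, wavelength):
    """n1, n2 = fringe counts in the two positions; returns the error."""
    return (n2 - n1) * wavelength / 4.0

lam_um = 0.5461                      # mercury green line, micrometres (assumed)
err = parallelism_error(n1=10, n2=14, wavelength=lam_um)
print(round(err, 4))                 # -> 0.5461 (micrometres)
```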

8.7 GAUGE LENGTH INTERFEROMETER

This is also known as the Pitter – NPL Gauge Interferometer. It is used to determine the actual dimen-
sions or absolute length of a gauge.

Table 8.2 Typical fringe pattern examples (in each case the base-plate pattern is compared with the gauge pattern)

(1) Gauge is flat and parallel.
(2) Gauge is flat but not parallel from one side to the other side.
(3) Surface under test may be convex or concave.
(4) Gauge is flat but not parallel from one end to the other end.

Monochromatic light from the source falls on a slit through a condensing lens; after it
passes through the collimating lens, it goes through the constant-deviation prism, whose rotation
selects the wavelength of the light rays passing through the optical flat to the upper
surface of the gauge block under test and the base plate on which it is wrung. The light is reflected
by the mirror and its pattern can be observed through a telescopic eyepiece. The construction is as
shown in Fig. 8.12.

Fig. 8.12 Gauge-length interferometer (monochromatic light from the source passes through a condensing lens, illuminating aperture, collimating lens and constant-deviation prism to the optical flat, the gauge to be measured and the base plate; the fringe displacement a and the fringe spacing b are observed through the viewing aperture via a reflecting prism and mirror)

The actual curvature can be determined by comparing the fringe displacement a with the fringe spacing b. The change in height h corresponding to λ/2 is given by

a/b = h/(λ/2)   ∴ h = (a/b) × (λ/2)
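The relation h = (a/b) × λ/2 can be sketched with illustrative values (none of the numbers below come from the text):

```python
# Sketch of the fringe-displacement relation from Fig. 8.12: a is the
# displacement of the fringe pattern and b the fringe spacing.

def height_change(a, b, wavelength):
    """Height change h = (a/b) * lambda/2, in the units of wavelength."""
    return (a / b) * wavelength / 2.0

# A displacement of one-quarter of the fringe spacing, with lambda = 0.5 um:
print(height_change(a=1.0, b=4.0, wavelength=0.5))   # -> 0.0625 um
```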

Illustrative Examples
Example 1 A 1.5-mm surface is being measured on an interferometer. A lamp is used which emits wavelength
as follows:
Red: 0.842 μm and Blue: 0.6628 μm. Calculate the nominal fractions expected for the gauge
for the two wavelengths.

Solution First, calculate the half-wavelength n = λ/2, λ being the wavelength of the source light.

For red light n = λ/2


= 0.842 / 2 = 0.421 μm
= 0.421 × 10 −3 mm
For blue light
n = λ/2
= 0.6628 / 2 = 0.3314 μm
= 0.3314 × 10 −3 mm
Now, calculate the nominal fraction of the surface Nf
∴ Nf = l/n (where l is the length of the surface to be checked)
For red light
Nf = 1.5/(0.421 × 10 −3) = 3562.9454
∴ consider the nominal fractions for Nf = 0.9454
For blue light
Nf = 1.5/(0.3314 × 10 −3)
= 4526.2523
∴ consider the nominal fractions for Nf = 0.2523
∴ the nominal fractions expected for the gauge for the two wavelengths are 0.9454 for red and 0.2523
for blue.
Example 2 A 1.45-mm slip gauge is being measured on a gauge length interferometer using a cadmium lamp.
The wavelengths emitted by this lamp are
Red: 0.643850537 μm
Green: 0.50858483 μm
Blue: 0.47999360 μm
Violet: 0.46781743 μm
Calculate the nominal fractions expected for the gauge for the four wavelengths.

Solution First, calculate the half-wavelength n = λ/2, λ being the wavelength of the source light.
For red light
n = λ/2
= 0.643850537/2 = 0.321925185 μm
= 0.321925185 × 10 −3 mm
For green light
n = λ/2
= 0.50858483/2 = 0.254292415 μm
= 0.254292415 × 10 −3 mm

For blue light


n = λ/2
= 0.47999360/2 = 0.2399968 μm
= 0.2399968 × 10 −3 mm
For violet light
n = λ/2
= 0.46781743/2 = 0.233908715 μm
= 0.233908715 × 10 −3 mm
Now, calculate the nominal fraction of the surface Nf
∴ Nf = l/n (where l is the length of the surface to be checked)
For red light
Nf = 1.45/(0.321925185 × 10−3) = 4504.1521
∴ consider the nominal fraction for Nf = 0.1521
For green light
Nf = 1.45/(0.254292415 × 10−3) = 5702.0969
∴ consider the nominal fraction for Nf = 0.0969
For blue light
Nf = 1.45/(0.2399968 × 10−3) = 6041.747223
∴ consider the nominal fraction for Nf = 0.747223
For violet light
Nf = 1.45/(0.233908715 × 10−3) = 6198.99469
∴ consider the nominal fraction for Nf = 0.99469
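The method of Examples 1 and 2 can be checked with a short script; any small last-digit differences from the worked solutions arise from the rounding of λ/2 there.

```python
# Reproduces the worked method: the nominal fraction is the fractional part
# of (gauge length) / (half-wavelength), with consistent units.

def nominal_fraction(length_mm, wavelength_um):
    """Fractional part of the number of half-wavelengths in the length."""
    half_wave_mm = (wavelength_um / 2.0) * 1e-3   # convert um to mm
    n = length_mm / half_wave_mm                  # half-wavelength count
    return n - int(n)                             # keep only the fraction

# Example 2, green cadmium line on the 1.45-mm gauge:
print(round(nominal_fraction(1.45, 0.50858483), 4))   # -> 0.0969, as in the text
```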

Review Questions

1. Explain the principle of measurement by light-wave interference method.


2. List the common sources of light used for interferometry and explain the essential properties of
light wave for interference.
3. Describe optical flats along with their types.
4. Sketch and interpret the meaning of various interference fringe patterns observed using an optical
flat.
5. Explain how interference bands are formed by using optical flats.

6. What do you mean by the term ‘interferometer’? What are its advantages over optical flats?
7. Sketch the optical arrangement of an NPL gauge-length interferometer and explain how it is used
to compute the thickness of a slip gauge.
8. Write short notes on
(a) Optical flat (b) Gauge-length interferometer (c) NPL flatness interferometer
9. Explain the formation of interference fringes when light falls on an optical flat resting on a lapped
surface. What is the effect of using a monochromatic beam, instead of white light?
10. Sketch the typical fringe pattern observed through an optical flat which illustrates surfaces: (a) flat
(b) concave (c) convex (d) ridged. Explain the test on an optical flat which reveals whether a surface
is convex or concave.
11. Explain the basic difference between a flatness interferometer and length interferometer.
12. A 1-mm slip gauge is being measured on a gauge-length interferometer using a cadmium lamp. The
wavelengths emitted by this lamp are
Red: 0.643850537 μm
Green: 0.50858483 μm
Blue: 0.47999360 μm
Violet: 0.46781743 μm
Calculate the nominal fractions expected for the gauge for the four wavelengths.
9 Comparator

It doesn’t measure actual dimension, but it indicates how much it varies from the basic dimension…
S M Barve, Sr Manager, Gauge Laboratory, Cummins India Ltd.

SELECTING A COMPARATOR

A comparator is a precision instrument used for comparing dimensions of a part under test with the working standards. Any comparator system works for applications in situations such as repetitive measurement. A comparator system requires a low initial investment and offers little flexibility. Every type of comparator has its advantages and limitations, which have to be considered before use. For example, an electro-pneumatic comparator must be specifically designed for particular applications. They work best where inside diameters are very small or deep, where the ratio of the bore size to the depth is small, or where surface averaging of inside or outside diameters is desired. While selecting a particular comparator system, prospective buyers must first consider whether the system meets three general application requirements: the specified part tolerance limits, the type of characteristics to be compared—whether dimensional or geometrical—and the manufacturing or end-product priorities vs. the critical characteristics to be compared.

Another choice that has become a basic issue within the past few years is the analog vs. digital question. It used to be pretty straightforward—if we favoured economy, an analog gauge equipped with a mechanical dial indicator was the obvious choice. If the application required extremely high accuracy, then a comparing amplifier equipped with an electronic gauge head was the way to go.

9.1 INTRODUCTION

Virtually every manufactured product must be measured in some way. Whether a company makes
automobiles or apple sauce, laptops or lingerie, it is inevitable that some characteristic of size, volume,
density, pressure, heat, impedance, brightness, etc., must be evaluated numerically at some point during
the manufacturing process, as well as on the finished product. For a measurement to have meaning, an
accepted standard unit must exist. The inspector measuring parts on the shop floor must know that his
or her millimetre (or ounce, ohm, Newton or whatever) is the same as that being used on a mating part

across the plant, or across the ocean. A chain of accountability, or traceability, connects the individual
gauge back to a national or international standards body to ensure this, and the comparator works
within the same chain.

9.1.1 Measuring and Comparing


The Automotive Industry Action Group’s reference manual of gauging standards defines a measure-
ment system as ‘the collection of operations, procedures, gauges and personnel used to obtain measure-
ments of workpiece characteristics.’ And measurement is a process of quantifying the physical quantity
by comparing it with a reference using a comparator. In this process, once the unit of measurement is
accepted, some means of comparing the process or product against that unit must be applied. When
the characteristic to be evaluated is dimensional, e.g., size or location, there are two basic approaches.
The Quality Source Book—Gauge Manufacturers Guide defines a comparator as ‘a measuring component that
compares a workpiece characteristic to a reference’.
The first approach, simply called measuring, involves the use of direct-reading instruments that
count all units and decimal places from zero up to the dimension at hand. Direct-reading instruments
commonly used in manufacturing include steel rules or scales, vernier calipers, micrometers and some
digital height stands. Coordinate measuring machines can also fall under this category.
The second approach is comparing, which uses indirect-reading instruments known as compar-
ators to compare the workpiece against a standard or master—a precision object that represents
a known multiple of the measurement unit. A comparator typically does not start at zero
but at the specified dimension, and it indicates the size of the workpiece as a deviation from that
specification. A result of zero on a comparator thus indicates that the part is precisely of the right
size.
Both kinds of equipment have their roles. The strength of measuring devices is their flexibility:
You can measure virtually anything with a vernier caliper or a CMM over a fairly broad range of sizes.
A comparator tends to be quicker and easier to use because it is designed for more specific tasks. The
comparator-user generally needs to observe only the last digit or two of a dimension to know whether
a part is within the specified tolerances. And because comparators are designed for use over a shorter
range of dimensions, they tend to be capable of generating results of higher accuracy. Therefore,
comparators are usually the practical choice for high-volume parts inspection, particularly where high
precision is needed (during an inspection and measuring process, the use of a comparator is the best
option to reduce dependence on the skill of an inspector).
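A minimal sketch of the comparing approach described above, with a hypothetical 25-mm master and an illustrative tolerance:

```python
# Sketch of comparative measurement: the comparator is zeroed on a master
# of known size, and each part reading is a deviation from that size.

def part_size(master, deviation):
    """Actual size = master size + comparator deviation reading."""
    return master + deviation

def within_tolerance(deviation, tol):
    """A part passes if its deviation lies inside the symmetric tolerance."""
    return abs(deviation) <= tol

master = 25.000                         # hypothetical 25-mm master gauge
print(part_size(master, 0.004))         # about 25.004 mm
print(within_tolerance(0.004, 0.005))   # within +/- 0.005 mm, so the part passes
```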
From the above discussion, it is clear that comparators are precision-measuring equipment mainly
consisting of sensing, indicating or displaying units whose purpose is to detect variation in a specific
distance (as determined by a reference plane established at a fixed position relative to the instrument
and by selecting a gauging point on the surface of an object) and to display the results on a dial, gradu-
ated scale or through digital display (which is an amplified version of the sensed dimensional variation).
If we analyze a comparative measurement process, for example, comparative length-measurement pro-
cess, a little consideration will show that for the purpose of length measurement, the comparator must
be equipped with devices serving the following functions:

i. Locating the object under test on a reference plane with one end of the distance to be mea-
sured.
ii. Holding the comparator in a positive position from the reference plane, with the effective move-
ment of its spindle in alignment with the distance to be measured.
The use of a comparator is not limited to length measurement only but many other conditions of
an object under test can be inspected and variations can be measured. The scope of a comparator is
very wide. It can be used as a laboratory standard in conjunction with inspection gauges. A precision
comparator itself can be used as a working gauge. It can be used as an incoming and final inspection
gauge; moreover, it can also be used for newly purchased gauges.

9.2 DESIRABLE FEATURES OF COMPARATORS

A good comparator should be able to record variations in microns, and among other desirable features
(characteristics) it should possess the following:
1. The scale used in the instrument should be linear and have a wide range of acceptability for
measurement.
2. There should be no backlash or lag between the movement of the plunger and the recording
mechanism.
3. The instrument must be precise and accurate.
4. The indication method should be clear. The indicator must return to zero and the pointer should
be free from oscillations.
5. The design and construction of the comparator (supporting table, stand, etc.) should be robust.
6. The measuring pressure should be suitable and must remain uniform for all similar measuring
cycles.
7. The comparator must possess maximum compensation for temperature effects.

9.3 CLASSIFICATION OF COMPARATORS

A wide variety of comparators is available commercially in the market, and they can be categorized on
the basis of the way of sensing, the method used for amplification and the way of recording the varia-
tions of the measurand. They are classified as mechanical comparators, optical comparators, pneu-
matic comparators, electrical and electronic comparators, and fluid displacement comparators. Also,
a combination of these magnifying principles has led to the development of special categories of
comparators as mechanical-optical comparators, electro-mechanical comparators, electro-pneumatic
comparators, multi-check comparators, etc. Comparators are also classified as operating either on a
horizontal or on a vertical principle. The vertical is fairly well standardized and is the most commonly
used.
Mechanical comparators are instruments for comparative measurements where the linear movement
of a precision spindle is amplified and displayed on a dial or digital display. Indicators utilize electronic,
mechanical or pneumatic technology in the amplification process, e.g., dial indicators, digital indicators
and electronic amplifiers or columns. These gauging amplifiers or instruments are available in three
main types:
1. Comparators or high-precision amplifiers (including columns or electronic amplifiers).
2. Indicators (higher precision compared to test indicators, used for inspection).
3. Test indicators (lowest precision, widely applied in production checking).
Mechanical comparators, electronic comparators or amplifiers, and pneumatic or air comparators
are gauging devices for comparative measurements where the linear movement of a precision
spindle is amplified and displayed on a dial/analog amplifier, column, or digital display. Mechanical
comparators have sophisticated, low-friction mechanisms, better discrimination (∼0.00001″), and lower
range (∼±0.0005″) compared to indicators. Comparators have a higher level of precision and less
span error compared to conventional dial or digital indicators. The level of precision is sufficient for
measurement of high-precision ground parts and for the calibration of other gauges.
Indicators are gauging devices for comparative measurements where the linear movement of a
spindle or plunger is amplified and displayed on a dial, column or digital display. Typically, indicators
have a lower discrimination (∼0.001″ to 0.0001″) and greater range (∼±1.000″ to ±0.050″ total)
compared to comparators. The level of precision is sufficient for final-part inspection.
Test indicators have the lowest discrimination when compared with indicators and comparators.
Test indicators are used mainly for set-up and comparative production-part checking. Test indicators
often use a cantilevered stylus or lever-style probe that facilitates inspection of hard-to-reach
part features, but results in high cosine errors; a cosine error of 0.0006″ may result over a travel
range of 0.010″. Test indicators are not considered absolute measuring instruments, but comparative
tools for checking components against a standard or for zeroing-out set-ups. Other devices that fall
within the category of indicators and comparators include gauge sets, gauging stations and gauging
systems.

9.3.1 Mechanical Comparator


Mechanical comparators fall in the broad category of measuring instruments and comprise some
basic types that belong to the most widely used tools of dimensional measurement in metal-working
production. These instruments utilize mechanical means of magnifying the small movement
of the measuring stylus/contact plunger, which may consist of gear trains, levers, cams, torsion strips,
reeds and/or a combination of these systems. The magnification range is about 250 to 1000 times.
A mechanical comparator uses a pointer as an indicator, pivoted about a suspended axis and moving
against a circular dial scale. Some of the versatile, commonly and frequently used mechanical
comparators are the following:

Dial Indicator Dial indicators are mechanical instruments for sensing measuring-distance variations.
The mechanism of the dial indicator converts the axial displacement of a measuring spindle
into rotational movement. This movement is amplified by either mechanical or inductive means and
displayed either by a pointer rotating over the face of a graduated scale or by a digital display.

1. Mechanical Dial Indicator It is a displacement-indicating mechanism. Its design (as shown
in Fig. 9.1) is basically in compliance with American Gauge Design (AGD) specifications. In operation,
a very slight upward movement of the measuring spindle (due to a slight upward pressure on it) is
amplified through a mechanism in which the measuring spindle carries an integral rack whose teeth
mesh with a pinion, the pinion being part of a gear train. This mechanism (shown in Fig. 9.2) thus
serves two functions—one is to convert the linear displacement of the plunger (in turn, the rack) into
rotary motion, and the other is to amplify this rotary motion by means of driving gears (G1, G2, G3)
meshing with substantially smaller pinions (P1, P2, P3). This magnification depends upon the number
of teeth on the gear and pinion, and can be further enlarged at the tip of the pointer by an amount
depending upon the length of the pointer. The overall magnification of any dial gauge can be calculated
by measuring the divisions of the scale and dividing this dimension by the equivalent movement of the
measuring plunger.

Fig. 9.1 Mechanical dial indicator—one division of the graduated main scale (fitted with a locking
screw) equals one complete revolution of the pointer, i.e., 1 mm of plunger movement
(Courtesy, Mahr GMBH Esslingen)

Fig. 9.2 Working mechanism of dial indicator—plunger, rack, pinions (P1, P2, P3), driving gears
(G1, G2, G3), hairspring and coil spring

Refer to Fig. 9.3 (a, b, c). These are examples of typical features of commercially available dial
indicators:

Type–A is with a reverse measuring force. • Shockproof movement via sleeve which floats over the
spindle • Constant measuring force • Protective housing (back-wall integrated in housing) •
Chrome-plated housing • Adjustable tolerance markers

Type–B readings 0.1 mm • Shockproof movement via sleeve which floats over the spindle •
Constant measuring force • Protective housing (back-wall integrated in housing) • Chrome-plated
housing • Adjustable tolerance markers • 1 pointer movement on 10 mm • Delivered in plastic case

Type–C Long-Range Dial Indicator with extra-large measuring range • 40-mm range • Strengthened
measuring spindle (5 mm) • Raising of measuring spindle via lifting cap • Shockproof movement •
Delivered in folded box

Fig. 9.3 Different types of mechanical dial indicators

There are two basic mountings of the dial indicator: (a) by the stem, and (b) by the back.
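The two-stage amplification described above (rack-and-pinion conversion followed by gear-train amplification and pointer length) can be sketched numerically. The tooth counts and dimensions below are hypothetical, chosen only so that the result lands inside the 250–1000 times range quoted for mechanical comparators:

```python
# Illustrative sketch (hypothetical tooth counts and dimensions, not values
# from the text): the overall magnification of a rack-and-pinion dial
# indicator is the product of the gear-train ratios and the ratio of pointer
# length to the first pinion's pitch radius.

def dial_indicator_magnification(rack_pinion_radius_mm,
                                 gear_teeth, pinion_teeth,
                                 pointer_length_mm):
    """Pointer-tip movement per unit plunger movement."""
    # Plunger travel x turns the first pinion by x / r radians.
    ratio = 1.0 / rack_pinion_radius_mm
    # Each driving gear meshing with a smaller pinion multiplies the rotation
    # by (gear teeth / pinion teeth).
    for g, p in zip(gear_teeth, pinion_teeth):
        ratio *= g / p
    # The pointer tip sweeps (pointer length) mm per radian of final rotation.
    return ratio * pointer_length_mm

# Example with assumed values: 3 mm pinion radius, one 10:1 gear stage driving
# a second 10:1 stage, and a 20 mm pointer
mag = dial_indicator_magnification(3.0, gear_teeth=(100, 100),
                                   pinion_teeth=(10, 10),
                                   pointer_length_mm=20.0)   # about 667x
```

With these assumed numbers the magnification is (1/3) × 10 × 10 × 20 ≈ 667, consistent with the 250–1000 times range of mechanical comparators.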

2. Mechanical Dial Indicator (Comparator) with Limit Contacts A dial comparator,
shown in Fig. 9.5, is a synonym for highest precision and extreme operating robustness. Levers,
gears and pinions supported in jewelled bearings, and the measuring spindle running in a ball-bush
guide, ensure a minimal reversal span error and high accuracy. For this reason, dial comparators
are specially suited for measuring tasks where the accuracy and reversal span error of a dial indicator
are no longer sufficient. Further advantages are their simple operation and easy reading, as well as the
effective shock protection of the movement. Inductive dial comparators, based on state-of-the-art
digital technology, permit readings as small as 0.2 μm. They possess practical operating functions like
tolerance monitoring or saving of extreme values from dynamic measurements, and a combined digital
and analog display. Additionally, data can be sent to evaluation equipment. The exploded view of a
mechanical dial comparator with limit contacts is shown in Fig. 9.6.
The type of gauges shown in Fig. 9.7, although operating essentially as mechanical comparators, differ
from conventional indicators by having a lever with electrical contact points on either side. These
types of comparators can be used as a substitute for the regular pointer and scale. They can be used

Fig. 9.4 (a) Precision dial indicator (b) Rear view of the precision dial indicator (c) Dial indicator
on a magnetic base stand; its application in checking the eccentricity of a job in a chuck is shown
in Fig. (d), and Figs (e) and (f) show flexible magnetic dial stands
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India.)

Fig. 9.5 Mechanical dial indicator (comparator) with limit contacts—pointer, adjustable tolerance
markers, fine-adjustment screw, measuring spindle and contact point; (1, 2, 3) are the relays
indicating undersize, good and oversized, (A, B) are adjusting screws for the electric contacts,
and (C) is the lifting screw
(Courtesy, Mahr GMBH Esslingen)

Fig. 9.6 Exploded view of mechanical dial comparator with limit contacts—box-type opening
ensures constant measuring force; self-contained movement that can be removed and replaced
quickly; lockable fine adjustment; maximum sensitivity and accuracy ensured by jewelled bearings
of the movement in conjunction with precision gears and pinions; raising of the measuring spindle
either by way of a screw-in cable or lifting knob; clear-cut scale with adjustable tolerance markers;
mounting shank and measuring spindle made of hardened stainless steel; measuring spindle
mounted in a high-precision ball guide (Types 1000/1002/1003/1004) for minimal hysteresis;
insensitive to lateral forces acting on the spindle

Fig. 9.7 Applications of mechanical dial comparator with limit contacts: (a) Precision bench
micrometer (b) Self-centering dial bore gauge (c) On an indicator stand
(Courtesy, Mahr GMBH Esslingen)

as sensing heads without indicator scales, since the two limit positions of the gauge are set with the
aid of a single master or gauge block, which represents the limit sizes. For this initial setting, the
tolerance markers of the indicator unit are brought to the desired limit positions, guided by the
indicator's scale graduations.

3. Micrometer Dial Comparator It is used for rapid measurement of diameters of cylindrical
parts, viz., shafts, bolts, shanks, etc., and for measurement and checking of thickness and length. This
instrument is also recommended for standard precision parts. The procedure for taking a reading is
similar to that for using a micrometer. The dial shown in Fig. 9.8 also carries adjustable tolerance
marks, allowing the instrument to be set for a specific dimension and then used as a comparator.

Fig. 9.8 Micrometer dial comparator—fixed and movable anvils with an integrated dial

The construction of this instrument mainly includes a micrometer, a dial as a comparator integrated
into the frame, and fixed and movable anvils. The frame is made of steel, chrome-plated and fitted
with heat insulators. The ceramic measuring faces ensure high wear resistance. The measuring spindle,
made of stainless steel, is hardened throughout and ground. The retraction of the movable anvil and
carbide-tipped measuring faces ensure maximum wear resistance.

The instrument shown in Fig. 9.9 is also used for the same purpose as discussed for the previous
comparator. One additional facility provided with this comparator, however, is the rugged steel frame,
which can be swivelled up to 45° in relation to the heavy-duty base.

Fig. 9.9 Bench micrometer dial comparator

4. Lever-Type (Test-Type) Dial Indicator Dial test indicators are similar to dial indicators,
but are typically more precise and have a smaller range of movement. Rather than a plunger that
moves in and out (as in the previous cases), they have a small lever arm with a ball-shaped tip that
moves up and down. This enables the tip to be inserted into a small hole so that the hole can be
precisely centered on the lathe axis—an operation that could not be done with a dial indicator—as
illustrated in Fig. 9.10 (c, d, e).

Fig. 9.10 (a) Lever-type dial indicator (b) Three sides where small dovetails are used for mounting
(c, d, e) Ways of mounting to check run-out, circularity and ovality
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune University, India.)

The indicator shown in Fig. 9.10 (a) has a measuring range of 0.030″—much less than a dial
indicator—and reads plus or minus from the zero point. When the tip is at rest at its neutral point, it
can be moved 0.015″ in either direction. The tip can be set at different angles for convenience in
set-up. As on the dial indicator, the bezel and numeric scale can be rotated to zero the reading. Each
division is 0.0005″ (5 ten-thousandths, or half a thousandth, per division).
Figure 9.11 shows an exploded view of a lever-type dial indicator showing its design features, and its
applications are explained in Fig. 9.12. The test indicator serves as an instrument for comparative
measurements. It can be used in any type of measuring stand. Due to the swiveling feature of the probe
and the reversal of its sensing direction, the test indicator is suitable for many measuring and checking
tasks. Its areas of application are (1) run-out and concentricity checks of shafts and bores, and (2)
checks of parallelism and alignment of flat faces in engineering and tool-making. For accurate
measurements, the axis of the contact point must be perpendicular to the measuring direction. If this is
not possible, it is necessary to multiply the reading on the dial by a correction factor, which depends
on the angle α. The correction is negligible for angles below 15°.

Fig. 9.11 Exploded view of lever-type dial indicator showing its design features—high-contrast
dial face; box-type housing with three dovetail guideways; contact points with chromium-plated
ball; automatic matching to the sensing direction (the pointer always moves in the clockwise
direction, thus ensuring error-free reading); double lever supported in ball bearings, with overload
protection provided by a slip clutch; satin-chrome finish on the housing to protect against
corrosion; all-metal version insensitive to magnetic fields; maximum sensitivity and accuracy
provided by precision gears and pinions; jewelled movement bearings

Table 9.1 Correction factor for angle α (> 15°)

Angle (α)             15°     30°     45°     60°
Correction factor     0.96    0.87    0.70    0.50

Example
Angle α: 30° (estimated)
Reading on dial: 0.38 mm
Measured value: 0.38 × 0.87 = 0.33 mm
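The correction above is simply the cosine of the stylus angle; a minimal sketch, reproducing the worked example (the correction factors in Table 9.1 match cos α to two decimals, e.g., cos 30° ≈ 0.87):

```python
import math

# Minimal sketch of the test-indicator cosine-error correction: when the
# stylus axis sits at angle a to the ideal (perpendicular) direction, the
# true displacement is the dial reading multiplied by cos(a).

def corrected_reading(dial_reading_mm, angle_deg):
    """Return the true measured value after cosine-error correction."""
    return dial_reading_mm * math.cos(math.radians(angle_deg))

# Worked example from the text: a 0.38 mm reading at an estimated 30 degrees
value = corrected_reading(0.38, 30.0)   # about 0.33 mm
```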

Fig. 9.12 Applications of mechanical lever-type dial comparator

Johansson Mikrokator The Johansson Mikrokator was developed by H. Abramson, a
Swedish engineer, and manufactured by C E Johansson Ltd., hence the name. The construction of
the instrument is shown in Fig. 9.13. It uses a twisted strip with a pointer attached: as the plunger is
depressed, it causes the strip to stretch. As the twisted strip is stretched, it changes the angle of the
pointer, and thus the indicated deflection. In this instrument, the twisted strip is made of
phosphor-bronze of rectangular cross section. This twisted-band principle of displacement
amplification permits good flexibility of instrument design, which provides a wide range of
measurement. It is one of the important types of mechanical comparators. The actual measuring range
depends upon the rate of amplification and the scale used. Its mechanical amplification is given by the
ratio (dθ/dl ) = − [(9.1∗l )/(W 2∗n)], where l is the length of the twisted strip measured along the
neutral axis, W is the width of the strip, n is the number of turns, and θ is the twist of the mid-point of
the strip with respect to the end. Measuring forces used for two well-known models of the Johansson
Mikrokator are 30 g and 250 g (refer Fig. 9.14). The accuracy of this instrument is ±1%.
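The quoted twisted-strip formula can be evaluated directly. The strip dimensions below are illustrative assumptions (not data from the text), and the units follow whatever convention the formula assumes:

```python
# Hedged sketch of the twisted-strip amplification formula quoted for the
# Mikrokator, |d(theta)/dl| = (9.1 * l) / (W**2 * n), where l is the strip
# length, W its width and n the number of turns. The numeric values below
# are illustrative assumptions only.

def mikrokator_amplification(strip_length, strip_width, turns):
    """Magnitude of pointer twist per unit stretch of the twisted strip."""
    return (9.1 * strip_length) / (strip_width ** 2 * turns)

# e.g., an 80 mm strip, 0.1 mm wide, with 40 turns
amp = mikrokator_amplification(80.0, 0.1, 40.0)   # about 1820
```

The formula shows why the Mikrokator amplifies so strongly: the width W enters squared in the denominator, so halving the strip width quadruples the twist per unit stretch.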

Fig. 9.13 Working mechanism of Johansson Mikrokator—plunger, bell-crank lever, twisted strip,
and pointer moving over the scale (side view; the pointer moves in and out of the page)
Fig. 9.14 Johansson Mikrokator
(Courtesy, C. E. Johansson Company)

Sigma Mechanical Comparator This simply designed comparator gives 300 to 5000 times
mechanical amplification. Figure 9.15 illustrates the operating principle. It consists of a plunger
attached to a rectangular bar, which is supported at its upper and lower ends by flat steel springs
(slit diaphragms) to provide frictionless movement. The plunger carries a knife-edge, which bears on
the face of the moving member of a cross-strip hinge. The cross-strip hinge consists of a moving
component connected to a fixed component by flexible strips at right angles to each other. Therefore,
when the plunger moves, the knife-edge moves and applies a force on the moving member, which
carries a light metal Y-forked arm. A thin phosphor-bronze flexible band is fastened to the ends of the
forked arms and wrapped around a driving drum to turn a long pointer needle.
Therefore, any vertical movement of the plunger makes the knife-edge move the block of the
cross-strip lever over the pivot. This causes rotation of the Y-arm, and the metallic band attached
to the arms makes the driving drum, and hence the pointer, rotate. So amplification is done in two
stages:
Total magnification = {(Effective length of arm)/(Distance from the hinge pivot to knife-edge)}
× {(Length of pointer)/(Pointer drum radius)}
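The two-stage formula can be tried with numbers. All dimensions below are hypothetical illustrations, chosen only so the result falls inside the 300–5000 times range quoted for the Sigma comparator:

```python
# Sketch of the two-stage Sigma comparator magnification, following the
# formula in the text; all dimensions are hypothetical values for
# illustration, not data from the text.

def sigma_magnification(arm_length, pivot_to_knife,
                        pointer_length, drum_radius):
    """First stage: cross-strip hinge lever; second stage: band-driven drum."""
    return (arm_length / pivot_to_knife) * (pointer_length / drum_radius)

# e.g., a 40 mm Y-arm over a 2 mm pivot-to-knife distance (20x), driving a
# 100 mm pointer from a 2 mm drum radius (50x): 20 * 50 = 1000x overall
mag = sigma_magnification(40.0, 2.0, 100.0, 2.0)
```

The design choice is visible in the formula: because the pivot-to-knife distance and the drum radius sit in the denominators, making either of them smaller raises the magnification without lengthening the instrument.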

Fig. 9.15 Sigma comparator—plunger supported in slit diaphragms, knife-edge bearing on a
sapphire block (the knife-edge position is adjustable), cross-strip hinge with fixed and moving
members about the axis of rotation, Y-arm acting as a pivoting beam, flexible driving band, drum
and pointer

The amplification mechanism of a sigma comparator is adaptable for gauging multiple dimensions
by mounting several basic mechanisms in a common assembly, arranged to make contact with the
critical dimensions of the object.

Dial Thickness Gauges This type of comparator also uses a dial indicator as the comparator unit.
It consists of a sturdy, rigid frame made of hard aluminium, open at one end, with a convenient
heat-insulated handle and a lifting lever for the movable upper measuring spindle. It has an accuracy
of 0.01 mm. In Fig. 9.16, Figure (a) shows a model with flat measuring faces used for measuring soft
materials such as plastic films, felt, rubber, paper and cardboard, while Figure (b) shows a model with
spherical measuring faces for measurement of hard materials such as sheet metal, hardboard, wooden
panels and panes of glass.

External and Internal Groove Comparator Gauges


External Comparator Gauges [shown in Fig. 9.17 (a) Mechanical Indicator, (b) Digital Indicator] are
used for measuring thickness and wall thickness [shown in Fig. 9.17 (c, d, e, f)]; and Internal Groove
Comparator Gauges [shown in Fig. 9.17 (g) Mechanical Indicator, (h) Digital Indicator] are used for

Fig. 9.16 Dial thickness gauges

measuring bores and internal-groove dimensions, and for absolute measurements [shown in Fig. 9.17 (i), (j),
(k)]. In these instruments, reliable repeatability is ensured by a rack-and-gear drive, with indicating
scale intervals from 0.005 mm upwards. Contact points are made of hard metal. Tolerance marks on
the dial make for easy reading and give fast and accurate measuring results. They are very handy.

Plate-Gauge Type Comparator This plate-gauge bench-type comparator design is
convenient for rapid gauging of inside and outside diameters of flat and relatively thin parts, reference
flanges, bearing rings, small shoulders, grooves, etc. The gauge plate has slots arranged in either a 'T',
'+', or erect or inverted 'Y' position. Inside these slots there are sliding locating stops and a sensitive
contact point. Setting gauges are used to set the locating stops and contact point for the specific
dimension to be checked, and rapid retraction of the contact point is provided for inserting and
removing the objects. The adjustable movement of the movable probe is 6 to 10 mm. A two-point
T-shape contact can be made using the arrangement of the comparator shown in Fig. 9.18. A third
measuring probe can be used as a lateral stop. The principle of part locating on a plate gauge is
illustrated in Fig. 9.18 (a), (b), (c). High measuring accuracy is ensured by a measuring probe supported
in a friction-free and slack-free parallel spring system. This instrument is user friendly—its table area
can be tilted in relation to the base from 0° to 90°.

Advantages and Limitations of Mechanical Comparator

Advantages A mechanical comparator ensures positive contact, which is suitable in particular
applications, and it also ensures a controlled measuring force. Mechanical comparators that operate
using a rack and pinion offer a long measuring range. Normally, they have a linear scale. They do not
need any external power source such as electricity or compressed air. These instruments are robust
and compact in design, easy to handle and less costly as compared to other amplifying devices.

Limitations The larger number of moving parts increases friction and hence inertia, and any
slackness in the moving parts reduces accuracy. If any backlash exists, it gets magnified. These
instruments are sensitive to vibrations.

Fig. 9.17 External and internal groove comparator gauges
(Courtesy, Mahr GMBH Esslingen)

Fig. 9.18 ID/OD plate-gauge type comparator with electronic indicator; figures (a, b, c) show the
principle of part location using the gauge plate and locating stops—(a) inside measurement
without stop, (b) inside measurement with stop, (c) outside measurement
(Courtesy, Mahr GMBH Esslingen)

9.3.2 Optical Comparators


Brief History of Development of Optical Comparators James Hartness, president
of the J&L Machine Co., invented the optical comparator in 1922. It projects the shadow of an object
onto a screen a few feet away, where it can be compared with a chart showing tolerance levels for the
part. By the end of the decade, comparators also began to be used to examine wear of a part as well as
for set-up phases in manufacturing. In the 1930s, the J&L Machine Co. weathered the Great
Depression by exporting optical comparators to the Soviet Union. Comparators came to be used more
and more in small-parts manufacturing plants, including those that produce razor parts, toothbrushes,
dental burrs, bottle moulds and other such objects, and comparator sales reached a little more than 300
per year. In the 1940s, optical comparator sales skyrocketed as optical comparators were adopted as a
standard for US artillery specifications. They were used in the manufacture of just about every part
used in World War II, including rivets and firing pins. In the 1960s, automatic edge detection was
introduced, making it possible for the machine, rather than the operator, to determine the part edge.
This provided more accuracy by eliminating subjectivity, and converted the stage into an additional
measurement instrument with which to measure the part. In the 1970s, digital readouts were
introduced, along with programmable motorized stage control. As machines became more automated,
developers started to incorporate programmable functions into the optical comparator. This paved the
way for complete automation of the optical comparator machine. And in the 1990s, incorporated
software became standard optical comparator equipment. Computers can be interfaced with optical
comparators to run image analysis. Points from manual or automatic edge detection are transferred to
an external program where they can be directly compared to a CAD data file.
Optical comparators are instruments that project a magnified image or profile of a part onto a
screen for comparison with a standard overlay profile or scale. They are non-contact devices that
function by producing magnified images of parts or components and displaying them on a glass screen,
using illumination sources, lenses and mirrors, for the primary purpose of making 2-D measurements.
Optical comparators are used to measure, gauge, test, inspect or examine parts for compliance with
specifications.
Optical comparators are available in two configurations, inverted and erect, defined by the type
of image that they project. Inverted-image optical comparators are the general standard, and are
the less-advanced type. They have a relatively simple optical system which produces an image
that is inverted vertically (upside-down) and horizontally (left-to-right). Adjustment and inspection
require a trained or experienced user (about two hours of practice time and manipulation). Erect
models have a more advanced optical system that renders the image in its natural or 'correct'
orientation: the image appears in the same orientation as the part being measured or evaluated.
Optical comparators are similar to micrometers, except that they are not limited to simple dimensional
readings. Optical comparators can be used to detect burrs, indentations, scratches and incomplete
processing, as well as to take length and width measurements. In addition, a comparator's screen can
be viewed simultaneously by more than one person and provides a medium for discussion, whereas
micrometers provide no external viewpoints. The screens of optical comparators typically range
from 10˝–12˝ diameters for small units to 36˝–40˝ for larger units. Even larger screen sizes are
available on specialized units. Handheld devices with smaller screens, as would be expected, are also
available.

Profile (Optical) Projector Using this instrument, enlarged (magnified) images of small
shapes under test can be obtained, which can be used for comparing shapes or profiles of relatively
small engineering components with an accurate standard or enlarged drawing. Figure 9.19 (a) (Plate 8)
shows the optical arrangement in the profile projector. The light rays from the light source are
collected by the condenser lens, from which they are transmitted as straight beams and are then
interrupted by the test object held between the condenser and projector lens. The magnified image
then appears on the screen, which allows a comparison of the resultant image with the accurately
produced master drawing, as shown in Fig. 9.19 (a), (b), (c). Figure 9.19 (d) shows a view of the profile
projector's screen. It is provided with a protractor scale. The whole circle is divided into 360°, which
acts as a main scale having 1° as the smallest division for measuring angles between two faces of the
enlarged image. To increase the accuracy of the angular measurement, a vernier scale is provided.
Sharpness of the magnified image can be obtained by focusing and adjusting the distance between
the component and the projection lens. This instrument offers 10 to 100 times magnification.
Specifically, it is used to examine the forms of tools, gauges (e.g., screw-thread gauges) and profiles of
very small and critical components whose direct measurement is not possible (e.g., profiles of
gears in wrist watches, etc.). Apart from the profile projector, a toolmaker's microscope is also used
as an optical comparator.

Optical–Mechanical Comparators These devices use a measuring plunger to rotate a
mirror through a pivoted lever, which serves as the first stage of amplification. This mechanically
amplified movement is further amplified by a light beam reflected off that mirror; simply by virtue
of distance, the small rotation of the mirror can be converted into a significant translation with little
friction.
Therefore (refer Fig. 9.20), the first stage of amplification (using the lever principle) = (L2 / L1), and
as the movement of the plunger makes the mirror tilt by an angle θ, the reflected image gets tilted
by 2θ.

Fig. 9.20 Principle of optical comparator—plunger and lever (arms L1 and L2) pivoted to tilt the
mirror; light source, condenser lens, projection lens, and index moving over the scale (distances
L3 and L4)



∴ The second stage of amplification, i.e., optical amplification = 2(L4 / L3).
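Combining the two stages gives an overall magnification of (L2/L1) × 2(L4/L3). A small sketch with assumed lengths (the numbers are hypothetical, for illustration only):

```python
# Sketch of the two-stage mechanical-optical magnification: a lever stage
# (L2/L1) followed by an optical stage 2*(L4/L3), where the factor 2 comes
# from the reflected beam turning through twice the mirror tilt. All lengths
# below are hypothetical.

def optical_comparator_magnification(L1, L2, L3, L4):
    """Overall magnification = (L2/L1) * 2 * (L4/L3)."""
    return (L2 / L1) * 2.0 * (L4 / L3)

# e.g., L1 = 5, L2 = 50 (a 10x lever) with L3 = 20 and L4 = 1000
# gives 10 * 2 * 50 = 1000x
mag = optical_comparator_magnification(5.0, 50.0, 20.0, 1000.0)
```

Note how the optical stage does most of the work: L4 is just the distance the beam travels, so high magnification is obtained with almost no moving parts and hence little friction.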

Zeiss Ultra-Optimeter This type of optical comparator gives very high magnification, as it works
on a double-magnification principle. As shown in Fig. 9.21, it consists of a light source from which light
rays are made to fall on a green filter, which allows only green light to pass through it; the light then
passes through a condenser lens. These condensed light rays are made incident on a movable mirror M1,
reflected to mirror M2, and then reflected back to the movable mirror M1, giving a double reflection.
The twice-reflected rays are focused at the graticule by passing through the objective lens.
In this arrangement, the magnification is calculated as follows:
Let the distance from the plunger centre to the movable mirror M1 be x and the plunger movement
be h, so that the angular movement of the mirror is δθ = h/x. If f is the focal length of the objective
lens, then the movement of the scale image is 2f δθ, i.e., 2f (h /x).

Fig. 9.21 Optical system of Zeiss Ultra-Optimeter (optical comparator)—light source, green filter,
condenser lens, movable mirror M1, mirror M2, objective lens, graticule and eyepiece; the plunger,
at distance x below M1, rests on the workpiece



Magnification = (Scale movement)/(Plunger movement)
= [2hf /x]/[h]
= [2f /x]

Overall magnification = [2f /x] × [Eyepiece magnification]
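The overall magnification can be evaluated directly from the formula; the focal length, plunger-to-mirror distance and eyepiece power below are assumed values for illustration:

```python
# Hedged sketch of the Ultra-Optimeter magnification from the derivation in
# the text: scale movement per unit plunger movement is 2f/x, and the eyepiece
# multiplies the result. The numbers are hypothetical, for illustration only.

def optimeter_magnification(focal_length_mm, plunger_to_mirror_mm,
                            eyepiece_magnification):
    """Overall magnification = (2f/x) * eyepiece magnification."""
    return (2.0 * focal_length_mm / plunger_to_mirror_mm) * eyepiece_magnification

# e.g., f = 200 mm, x = 4 mm and a 12x eyepiece give (400/4) * 12 = 1200x
mag = optimeter_magnification(200.0, 4.0, 12.0)
```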

Optical–Electrical Comparators Optical comparator systems use light to make
dimensional, geometrical and positional checks on production parts. These systems consist of four
principal components: a light emitter, a receiver that converts the light to an electrical signal, a series
of optical lenses, and an electronic amplifier that processes the signals and establishes meaningful
measurement data.
Optical measurement technology is suitable for parts inspection in operations where non-contact
with the workpiece is critical, where a large measuring range must be covered without retooling, where
machining is performed at high operating speeds and where clean and dry workpieces can be supplied
to the gauge.
Optical–electrical measurement technology comes in a number of gauging formats. One of
them is light-intensity comparative gauging, now almost obsolete. Other types include laser scanning
gauges; shadow-cast, or CCD-array, gauging; and laser-diffraction gauging. Laser-diffraction
gauging is limited to a very small measurement range and, generally, is used only for wire inspection.
Its important features are R&R capability, flexibility, user-friendliness and inspection speed. Optical
comparators may be rated very low for environmental sensitivity, due primarily to their susceptibility to
coolants on the shop floor.

Advantages and Limitations of Optical Comparators

Advantages It is more suitable for precision measurement, as it gives higher magnification. It
contains fewer moving parts and hence offers good accuracy. The scales used are illuminated, which
allows readings to be taken in room lighting conditions with no parallax error.

Limitations The illumination system of an optical comparator requires an external power source.
The apparatus is of comparatively large size and is costly.

9.3.3 Pneumatic Comparator


Air gauging (pneumatic comparator) can offer the best of both mechanical/analog and electronic/digital
gauges. Air gauges are virtually always fixed gauges, built to measure a single dimension by directing a
controlled jet of air against the workpiece and monitoring the back pressure. They are capable of
accuracy to within a few millionths of a millimetre. Air gauges may have either analog or digital
displays, or both, and some feature data-output capabilities. Other benefits of air gauging include the
following:

It is self-cleaning, making it appropriate for use in dirty environments and on dirty parts. It is a
non-contact method, so it doesn’t mar delicate part surfaces; and it can be used to gauge compressible
materials (such as textiles, film and non-wovens) without distortion.

Differential Back-Pressure-Type Pneumatic Comparator This type is a constant-amplification
air gauge. The design provides flexibility in its application as a pneumatic comparator; for
example, it can be used for gauge calibration, or in a specific design to obtain variable applications with
the same control unit without exchanging its metering element. As shown in Fig. 9.22, a differential
back-pressure system uses a split flow channel, one flow going to the gauge head and the other going
to a zero-offset valve. A meter measures the difference between the two pressures, and thus indicates
the variation in the dimension being gauged. Its magnification range is from 1250X to 20000X.

Fig. 9.22 Differentially controlled constant-amplification pneumatic gauging (precision pressure
reducer; zero-setting valve; non-adjustable jets; differential pressure sensor (piezoelectric);
pneumatic sensor head at the workpiece)

During its operation, air gauges detect changes in pressure as the measuring jet approaches the
workpiece. If the distance S from the measuring jet decreases, the pressure within the system increases,
while the flow speed, and thus the volume flow, are reduced. If the dimension of the part under consideration
is as per the required specifications, the air pressure acting on the opposite side of the pressure
sensor (a piezoelectric sensor, or even a diaphragm or bellows) is balanced, no deflection results, and
the meter linked to it indicates zero. The pneumatic measuring method has a rather small linear
measuring range: the procedure reaches its limits if the escape area A, which is defined by the gap
distance S, becomes larger than the cross-sectional area of the measuring jet of diameter d.
Figure 9.23 (b) shows the linear range in which the instrument should be used to obtain accurate readings.
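The behaviour in the linear range can be sketched with the widely quoted linear approximation for two-orifice back-pressure systems, p_b/p_s ≈ 1.10 − 0.50 (A2/A1), usually taken as valid for 0.4 ≤ p_b/p_s ≤ 0.9. This is a generic textbook relation, not data from the instrument described above; the jet diameter, control-orifice area and gap values below are purely illustrative.

```python
import math

def back_pressure_ratio(d_jet_mm, gap_mm, a1_mm2):
    """Linear two-orifice approximation pb/ps = 1.10 - 0.50*(A2/A1),
    where A2 = pi*d*s is the annular escape area between a jet of
    diameter d and the work at gap s, and A1 is the control-orifice area.
    Valid roughly for 0.4 <= pb/ps <= 0.9."""
    a2 = math.pi * d_jet_mm * gap_mm        # escape area, mm^2
    return 1.10 - 0.50 * (a2 / a1_mm2)

# With a 1 mm jet and a 0.5 mm^2 control orifice, the response is
# nearly linear over a small band of gap values:
for gap in (0.08, 0.12, 0.16, 0.20):
    print(f"gap {gap:.2f} mm -> pb/ps = {back_pressure_ratio(1.0, gap, 0.5):.3f}")
```

Note how a change of only a few hundredths of a millimetre in the gap produces a large change in the back-pressure ratio, which is where the high magnification of the method comes from.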

Solex Comparator Its working is based on the principle that if air under constant pressure
escapes through two orifices, and one of them is kept constant, then the pressure change at the
other, caused by variation in the size of the workpiece under test, gives the reading.
It is therefore also known as 'the Solex back-pressure system', which uses an orifice with the
venturi effect to measure airflow. Figure 9.24 shows the essential elements of the pneumatic circuit.
258 Metrology and Measurement

Fig. 9.23 (a) Direction of air passing through the measuring head (jet of diameter d at gap S over
the workpiece) (b) Performance characteristics of the instrument: pressure p versus gap s,
showing the linear range

Fig. 9.24 Pneumatic circuit diagram for the Solex pneumatic comparator (air flows in at A, through
the orifice at B, to the gauge head at C; a dip tube D stands in the water tank, and the height
difference in the manometer tube is proportional to the pressure)

Compressed air flows in at end A, passing through a restrictor (not shown in the figure) to
maintain a constant pressure in the circuit, equal to the head difference maintained in the manometer
tube; it then progresses to the dip tube. At the same time, part of the air (at the same pressure) passes
through orifice B to the pneumatic measuring head at C. The pressure difference between B and C
depends upon the orifice gap S [a similar condition; refer Fig. 9.23 (a)]. This method is used for
gauging parts such as bores.
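Since the manometer height difference is proportional to the back pressure, converting between water-column height and pressure is just the hydrostatic relation Δp = ρgh. A minimal sketch (the 500 mm head is an arbitrary example, not a Solex specification):

```python
RHO_WATER = 1000.0  # kg/m^3, density of water
G = 9.81            # m/s^2, gravitational acceleration

def water_column_pressure_pa(h_mm):
    """Gauge pressure (Pa) indicated by a water column h_mm millimetres
    high: delta_p = rho * g * h."""
    return RHO_WATER * G * (h_mm / 1000.0)

print(water_column_pressure_pa(500.0))  # a 500 mm head is about 4905 Pa
```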

Velocity Differential-Type Air Gauge with Bar Graph and Digital Display This
type of pneumatic comparator operates on the principle of measuring the changes in the velocity of
air caused by varying the obstruction of the air escape. Its display allows measuring results to be assessed
and judged at a glance. The column amplifier offers a broad range of functions for combining the
signals from both static and dynamic measurements. It makes use of a venturi tube, having
different diameters at its two ends, to convert the air-velocity changes within the system into minute
pressure differentials. Measuring results (legible even at a distance) are indicated by
three-colour LEDs, as shown in Fig. 9.25 (Plate 9). When the programmable warning and tolerance limits
are exceeded, the LEDs change colour from green to yellow or red accordingly. An air/electronic
converter unit permits direct connection of pneumatic pick-ups to the column amplifier.
When the volume of escaping air is reduced due to an increased gap between the surface of the part under
test and the nozzle orifice of the pneumatic measuring head, the velocity of air downstream of the
venturi decreases, and the resulting pressure variations produce a corresponding height change on the
column. Display ranges of ±10, 30, 100, 300, 1000, 3000 and 10 000 μm are available commercially.

Advantages and Limitations of Pneumatic Comparator

Advantages As the pneumatic measuring head does not come into direct contact with the workpiece,
no wear takes place on the head. It works on the pneumatic principle (with compressed air) and
gives very high magnification. It is particularly preferred for repetitive measurement situations
(e.g., in-process gauging). Its self-aligning and self-centering tendency makes the pneumatic
comparator an excellent device for measuring diameters, ovality and taper of parts, either separately or
simultaneously. In this type of comparator, the amplification process requires fewer moving parts,
which increases the accuracy. Another advantage is that the jet of air helps to clean the part.

Limitations The scale is not uniform over its whole range, which is a working limitation.
Different measuring heads are needed for different jobs. Compared to electronic comparators, the
speed of response is low. Portable apparatus is not easily available. Where a glass tube is used as the
indicating device (column amplifier), high magnification is required to overcome meniscus error.

9.3.4 Electric and Electronic Comparator


The operating principle of an electrical comparator essentially consists of a transducer for converting
a displacement into a corresponding change in current or potential difference, and a meter or recorder
connected to the circuit to indicate this electrical change, calibrated in terms of displacement.
The displacement can be converted into an electrical change in three ways:
1. Using the inductive principle: the displacement of a ferrous core attached to a measuring
plunger can change the magnetic flux developed by the electric current passing through one or
more coils; or the displacement of the ferrous core can change the eddy currents.

2. Using the capacitive principle: the displacement of a plate attached to the measuring plunger
can change the air gap between the plates, modulating the frequency of the electrical oscillations
in the circuit.
3. Using the resistive principle: the displacement of the measuring plunger stretches a grid of fine
wire, increasing its length and, in turn, altering its electrical resistance.
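For the resistive principle, the effect of stretching can be sketched from R = ρL/A: if the wire elongates elastically at (approximately) constant volume, its length rises and its cross-section falls, so the resistance grows roughly as (1 + ε)². The 120 Ω grid resistance and the strain value below are illustrative numbers, not taken from the text.

```python
def stretched_resistance(r0_ohm, strain):
    """Resistance of a wire grid stretched at constant volume:
    L -> L*(1+e) and A -> A/(1+e), so R = rho*L/A becomes R0*(1+e)**2.
    For small e this gives dR/R ~ 2e, i.e. a gauge factor of about 2."""
    return r0_ohm * (1.0 + strain) ** 2

r0 = 120.0                                  # unstrained grid resistance, ohms
dr = stretched_resistance(r0, 0.001) - r0   # 1000 microstrain
print(f"delta R = {dr:.5f} ohm")            # about 0.24 ohm
```

The tiny resistance change explains why such transducers are read out with a bridge circuit and electronic amplification rather than directly.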
The metrological term electronic comparator includes a wide variety of measuring instruments which
are capable of detecting and displaying dimensional variations through combined use of mechani-
cal and electronic components. Such variations are typically used to cause the displacement of a
mechanical contacting (sensing) member with respect to a preset position, thereby originating pro-
portional electrical signals, which can be further amplified and indicated. Comparator gauges are the
basic instruments for comparison by electronically amplified displacement measurement. Very light
force can be used in electronic comparators, where almost no mechanical friction is required to be
overcome. This characteristic is of great value when measuring workpieces with very fine finish that
easily could be marred by heavier gauge contact. Consider the example of the test-indicator-type
electronic comparator as an electronic height gauge (shown in Fig. 9.26) ( Plate 9). These gauges carry
a gauging head attached to a pivoting, extendable, and tiltable cross bar of a gauging stand (refer Fig.
3.11). For the vertical adjustment of the measuring head (probe/scriber), the columns of height-
gauge stand are often equipped with a rack-and-pinion arrangement or with a friction roller guided in
a groove. Instead of a cross bar, some models are equipped with only a short horizontal arm;
fine adjustment is achieved by means of a fixture spring in the base of the stand which, when
actuated by a thumb screw, imparts a tilt motion to the gauge column. Electronic height gauges are
generally used for comparative measurement of the linear distance (height) of an object. The surface
being measured must lie in a horizontal plane, and the distance to be determined must be referred
to a surface plate representing a plane parallel to the part surface on which the measurement is
being carried out. The size of the dimension being measured is determined by comparing it with
the height of the gauge block stock. Modern digital electronic technology permits absolute height
measurement to work as a perfect comparator, because by the facility provided with the push of a
button, the digital display can be zeroed out at any position of the measuring probe. Applications of
electronic test-indicator-type comparators are essentially similar to those of mechanical test indica-
tors, and measure geometric interrelationships such as run-out, parallelism, flatness, wall thickness and
various others. Electronic internal comparators are used for external length or diameter measurement
with similar degree of accuracy. A particular type of mechanical transducer has found application
in majority of the currently available electronic gauges. This type of transducer is the linear variable
differential transformer (LVDT), and its application instrument is discussed in the next sub-article.

Inductive (Electronic) Probes This instrument works on the first principle listed above, i.e., the
inductive principle. Measurement with inductive probes is based on the changing position of
a magnetically permeable core inside a coil pack. Using this principle, we can distinguish between
half-bridges (differential inductors) and LVDTs (Linear Variable Differential Transformers). New
models apply patented high-linearity transducers (VLDTs, Very Linear Differential Transducers),

operating similar to LVDTs, on the principle of


the differential transformer. The LVDT principle
arrangement is shown in Fig. 9.27, and construc-
tion details of an inductive probe are shown in
Fig. 9.28.
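Within its linear range, an ideal LVDT's two secondary voltages change in opposition as the core moves off the null position, so the demodulated differential output is simply proportional to the core displacement, and its sign gives the direction of travel. A sketch under assumed values (the 250 mV/mm sensitivity and ±1 mm range are illustrative, not specifications of any particular probe):

```python
def lvdt_output_mv(displacement_mm, sensitivity_mv_per_mm=250.0,
                   linear_range_mm=1.0):
    """Differential output of an ideal LVDT: the two secondary voltages
    change in opposition as the core moves, so Vout = S * x inside the
    linear range; the sign of Vout gives the direction of core travel."""
    if abs(displacement_mm) > linear_range_mm:
        raise ValueError("core outside the linear range of the transducer")
    return sensitivity_mv_per_mm * displacement_mm

print(lvdt_output_mv(+0.02))   # core 20 um above null -> +5 mV
print(lvdt_output_mv(-0.02))   # same distance on the other side -> -5 mV
```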

Fig. 9.27 LVDT arrangement (primary winding, two secondary windings, core displacement S)

Construction of Inductive Probe
1. Stylus Various styli with an M2.5 thread are used.
2. Sealing bellow This is made of Viton, which is extremely resistant and ensures high performance
even in critical environments.
3. Twist lock This strongly influences the probe's operating characteristics and durability.
4. Clearance stroke adjustment When the guide bush is screwed in, the lower limit stop of the measuring
bolt can be shifted in the direction of the electrical zero point.
5. Rotary stroke bearing Only rotary stroke bearings made by Mahr are used for Mahr's inductive probes.
6. Measuring force spring The standard measuring force amounts to 0.75 N. For most probes, the measuring
force can be changed without any problem by exchanging the measuring force spring.

Fig. 9.28 Inductive probe: (1) stylus, (2) sealing bellow, (3) twist lock, (4) clearance stroke
adjustment, (5) rotary stroke bearing, (6) measuring force spring, (7) coil system, (8) probe
sleeve, (9) bending cap, (10) connecting cable, (11) 5-channel DIN plug
(Courtesy, Mahr GMBH Esslingen)

7. Coil system The patented VLDT (Very Linear Differential Transducer) coil system allows for
extremely high linearity values.
8. Probe sleeve To shield the probe against EMC influences, the high-quality nickel–iron alloy Mumetall
is used.
9. Bending cap The normal axial cable outlet of the standard probes can be easily changed to a radial
cable outlet by mounting a slip-on cap.
10. Connecting cable Only resistant PU cables are used; the standard probe cable is 2.5 m (8.20 ft) long.
11. 5-channel DIN plug Worldwide, this is the plug most frequently used for connecting inductive probes
to amplifiers. Depending on compatibility, however, different pin assignments have to be observed.
Figure 9.29 shows practical applications of inductive probes: (a) and (b) thickness measurement,
where a single inductive probe is used for all kinds of direct measurements on cylindrical and flat
workpieces and is applied in the same way as dial indicators, mechanical dial comparators or lever
gauges; (c) thickness measurement independent of workpiece form and mounting; (d) height difference
between two steps; (e) axial run-out as a single measurement; (f) radial run-out as a single
measurement; (g) coaxiality measurement on two shaft ends; (h) roundness measurement independent
of eccentricity, as a sum measurement; (i) taper measurement independent of the total workpiece
size; (j) perpendicularity measurement independent of workpiece position; (k) measurement of
eccentricity independent of diameter, as a differential measurement; and (l) measurement of wall
thickness with a lever-type probe. The probe lever is protected by friction clutches against excessive
strain and is particularly suitable for inside measurements.

Inductive Dial Comparator Another example of the inductive principle used for compara-
tive measurement is an inductive dial comparator. Now, however, there are digital electronic indicators
that are about the same size and price as dial indicators. Gauges equipped with digital indicators may
possess all of the benefits of an amplifier comparator, including automatic zeroing and offset func-
tions, and data export, at a fraction of the cost. Digital readouts are not necessarily superior to analog
dials, however. Analog displays are ergonomically superior in many applications. For example, users
can observe size variation trends more readily. They can also quickly sense whether a part is good or
bad without having to go through the intellectual process of comparing the number on a digital display
to the allowable tolerance specification. Some electronic indicators now incorporate analog displays to
replicate this benefit.
The electronic snap gauge shown in Fig. 9.30 (c) is used for rapid measurement of cylindrical
components such as shafts, pins and shanks, and for thickness and length measurement. The patented
'Channel Lock' design assures parallelism over the entire adjustment range. It includes an adjustable
centering stop. Large square tungsten-carbide anvils, 15 × 15 mm with chamfers, assist in locating
the component to be checked. The lift-off lever for retracting the measuring anvil (Model 301-P)
permits contact-free introduction of workpieces. All adjustments are made with the enclosed
socket-head spanner.


Fig. 9.29 Practical examples of use of inductive probes





Fig. 9.30 (a), (b) Digital dial indicator (c) Electronic Snap Gauge
1–Protective and lifting cap for measuring spindle; 2–Display; 3–Operating buttons; 4–Mounting
shank; 5–Measuring spindle; 6–Contact point 901; 7–Data output; 8–Battery compartment;
(Courtesy, Mahr Gmbh Esslingen)

Advantages and Limitations of Electrical and Electronic Comparator

Advantages These comparators have high sensitivity, expressed as the smallest input (contact-member
deflection) that produces a proportional signal. They contain very few moving parts, hence there is
little friction and wear. Repeatability is ensured, as measurement is done in linear units computed
on a 3σ basis. They have a wide range of magnification. They are small, compact and convenient
to set up, use and operate. Readings can be displayed by various means (analog or digital), used
alternately or several of them simultaneously. A digital display minimizes reading and interpretation
errors.

Limitations An external power source is required. The cost of this type of comparator is higher than
that of the mechanical type. Fluctuations in the voltage or frequency of the electric supply may affect
the results. Heating of the coils in the measuring instrument may cause drift.

Review Questions

1. Explain the term comparator. Discuss the classification of comparators.


2. Describe the essential characteristics of a good comparator.
3. Describe the working principle, construction and advantages of any one mechanical comparator.
4. Differentiate between mechanical and pneumatic comparators.

5. Differentiate between electrical and pneumatic comparators.


6. Differentiate between gauge and comparators.
7. State the advantages and limitations of optical comparators.
8. Explain with a neat sketch the working of Solex pneumatic comparators.
9. Describe the working principle, construction and advantages of any one optical comparator.
10. Write short notes on (a) Johansson Mikrokator (b) Mechanical comparator (c) Pneumatic
comparator (d) Electrical comparator (e) Optical comparator
11. Discuss the difference between the terms ‘measuring’ and ‘comparing’.
12. What are the desirable functions expected of a comparator as a device used for metrological
measurement requirements?
13. State why comparators are used in engineering practices.
14. Name the mechanisms used in Sigma comparators and twisted strip comparators and mention
their advantages.
15. Justify the statement: Comparators have been able to eliminate some common errors of
measurement.
16. What is meant by the term ‘magnification’ and its significance as applied to a mechanical comparator?
17. Why is damping essential in mechanical comparators? Explain with a suitable example how it is
achieved.
18. Explain the basic methods of magnifications and explain any one in detail by drawing its sketch.
19. Explain the principles of pneumatic gauging by the ‘back-pressure’ system and state the relation-
ship by drawing a typical curve showing the back pressure/applied pressure and the ratio of cross-
sectional areas over which it is used.
20. Explain the operating principle of an electrical comparator. How is change in displacement
calibrated?
10
Metrology of
Surface Finish

'We measure the surface roughness to know its surface finish… .'


J N Pitambare, GM, Walchandnagar Industries Ltd., Walchandnagar

CHECKING SURFACE FINISH

Surface regularities can be quantified in terms of surface roughness values, as roughness is concerned
with the size and shape of the surface. The appearance, wear resistance, fatigue resistance, corrosion
resistance, initial tolerance, hardness, colour, absorption capacity, etc., are some of the important
characteristics of the product which are influenced by surface texture.

Surface irregularities are the root cause of the development of sharp corners where stress concentration
occurs, which ultimately leads to part failure. On the other hand, irregularities of a surface are one of
the requirements for achieving good bearing conditions between two mating surfaces, as the valleys of
an irregular surface help retain the film of lubrication and the hills reduce metal-to-metal contact.
Hence, the requirement of the type of surface will vary as per the application for which we intend to
use the surface under consideration.

No manufacturing process can obtain an absolutely smooth and flat surface, because surface
irregularities are left on the machined part after manufacturing. The practical alternative is to produce
a surface within acceptable tolerance values. Many factors affect the surface texture, e.g., the type of
machining process, the material of the workpiece, the cutting conditions (viz., speed, feed, depth of
cut), the tool material and its geometry, machine-tool rigidity, internal and external vibrations, etc.
There are many situations where the surface finish on the job is of primary importance. As we change
the manufacturing process used to produce products, the machining conditions required to be
controlled will change. For example, in the case of milling (a conventional machining process), the
surface finish mainly depends upon the axial runout of the cutter, the type of tip used, the feed
employed and other machining conditions. But in the case of electrical discharge machining (a
non-conventional machining process), each electrical spark discharge produces a spherical crater in
the workpiece, the volume of which is directly related to the energy contained in the spark. The depth
of the crater defines the surface finish. Therefore, the amperage, frequency and finish on the electrodes
govern the surface finish.

The machinability of the workpiece also has an effect on surface finish. A given material may allow a
higher cutting speed or induce lower cutting forces, but it may not produce a good surface finish. Where
the finish produced on the part is a cause of rejection, this consideration has an effect on the cost as
well. If a higher surface finish is obtained on the material under consideration, under a given set of
machining conditions, then we could judge that its machinability is good.

It is a well-known fact that no surface in reality follows a true geometrical shape. The most common
method of checking surface finish is to compare the test surface against a standard surface, visually and
by touch. But nowadays many optical instruments, viz., the interferometer, light-slit microscope, etc.,
and mechanical instruments, viz., the Talysurf and Tomlinson surface recorders, are used to determine
numerical values of the surface finish of any surface.
in reality follows a true geometrical

10.1 INTRODUCTION

On the earth’s surface, it is observed that discontinuities or joints do not have smooth surface structures
and they are covered with randomly distributed roughness. The effective role of surface roughness on
the behavior of discontinuities and on shear strength makes the surface roughness an important factor
that has to be taken into account right from the design stage to the final assembled product. New
metrological studies, supported by new methods and technological advances, take into account surface
roughness and its effect on the behavior of discontinuities. In this chapter, techniques that are used in
measurement of surface roughness are discussed.
Surface metrology is of great importance in specifying the function of a surface. A significant pro-
portion of component failure starts at the surface due to either an isolated manufacturing discontinu-
ity or gradual deterioration of the surface quality. The most important parameter describing surface
integrity is surface roughness. In the manufacturing industry, a surface must be within certain limits of
roughness. Therefore, measuring surface roughness is vital to quality control of machining a workpiece.
In short, we measure surface texture for two main reasons:

i. To try to predict the performance of the component.


ii. To try to control the manufacturing process as the manufacturing process leaves its signature in the
surface texture.
In most circumstances, a single measurement is made on the surface in order to assess the texture. This
measurement must be representative of the surface and appropriate for the purpose of measurement (e.g.,
measuring normal to the lay of the surface, or in the indicated direction). The most important concept is
to know what you are dealing with. From knowledge of the roughness amplitude and wavelength values
expected from the surface, it is possible to select the appropriate instrument settings for a reliable roughness
measurement. The most important factors are the selection of the stylus tip and the roughness filters.

10.2 TERMS USED IN SURFACE–ROUGHNESS MEASUREMENT

The quality of a machined surface is characterized by the accuracy of its manufacture with respect
to the dimensions specified by the designer. Every machining operation leaves a characteristic

evidence on the machined surface. This evidence is in the form of finely spaced micro-irregularities
left by the cutting tool. Each type of cutting tool leaves its own individual pattern, which, therefore,
can be identified. This pattern, as shown in Fig. 10.1, is known as surface finish or surface roughness.

Fig. 10.1 Lay pattern (pattern direction on the workpiece)

Lay Lay represents the direction of the predominant surface pattern produced, as shown in
Figs 10.1 and 10.2, and it reflects the machining operation used to produce it.

Fig. 10.2 Surface characteristics (flaw, lay direction, roughness height and width, waviness
height and width, roughness-width cutoff)

Roughness Roughness consists of surface irregularities which result from the various machining
processes. These irregularities combine to form surface texture. It is defined as a quantitative measure
of the process marks produced during the creation of the surface and other factors such as the struc-
ture of the material.

Roughness Height It is the height of the irregularities with respect to a reference line. It is mea-
sured in millimetres or microns or micro-inches. It is also known as the height of unevenness.

Roughness Width The roughness width is the distance parallel to the nominal surface between
successive peaks or ridges which constitute the predominant pattern of the roughness. It is measured
in millimetres.

Waviness This refers to the irregularities which are outside the roughness width cut-off values.
Waviness is the widely spaced component of the surface texture. This may be the result of workpiece or
tool deflection during machining, vibrations or tool runout. In short, it is a longer wavelength variation
in the surface away from its basic form (e.g., straight line or arc).

Waviness Height Waviness height is the peak-to-valley distance of the surface profile, measured
in millimetres.

Difference between Roughness, Waviness and Form We analyze below the three main
elements of surface texture—roughness, waviness and form.

Roughness This is usually the process marks or witness marks produced by the action of the cut-
ting tool or machining process, but may include other factors such as the structure of the material.

Waviness This is usually produced by instabilities in the machining process, such as an imbalance
in a grinding wheel, or by deliberate actions in the machining process. Waviness has a longer wavelength
than roughness, which is superimposed on the waviness.

Form This is the general shape of the surface, ignoring variations due to roughness and waviness.
Deviations from the desired form can be caused by many factors. For example, the part being held too
firmly or not firmly enough, inaccuracies of slides or guide ways of machines, or due to stress patterns
in the component.
Roughness, waviness and form (refer Fig. 10.3) are rarely found in isolation. Most surfaces are a com-
bination of all three and it is usual to assess them separately. One should note that there is no set point at
which roughness becomes waviness or vice versa, as this depends on the size and nature of the applica-
tion. For example, the waviness element on an optical lens may be considered as roughness on an auto-
motive part. Surface texture refers to the locally limited deviations of a surface from its ideal shape. The
deviations can be categorized on the basis of their general patterns. Consider a theoretically smooth, flat
surface. If this has a deviation in the form of a small hollow in the middle, it is still smooth but curved.
Two or more equidistant hollows produce a wavy surface. As the spacing between each wave decreases,
the resulting surface would be considered flat but rough. In fact, surfaces having the same height of
irregularities are regarded as curved, wavy, or rough, according to the spacing of these irregularities.
In order to separate the three elements, we use filters. On most surface-texture measuring instru-
ments, we can select either roughness or waviness filters. Selecting a roughness filter will remove

Primary texture (roughness)

Secondary texture (waviness)

Form

Fig. 10.3 Roughness, waviness and form



waviness elements, leaving the roughness profile for evaluation. Selecting a waviness filter will remove
roughness elements, leaving the waviness profile for evaluation. Separating the roughness and waviness
is achieved by using filter cut-offs.

Filter A filter is an electronic or mathematical method (algorithm) which separates different
wavelengths and allows us to see only the wavelengths we are interested in. In other words, it is a
mechanism for suppressing wavelengths above or below a particular value. In surface measurement,
filtering can arise in the gauging system due to mechanical or electronic constraints, and it can also
be applied by the data-analysis system (software).
Early measuring instruments used analog (electronic) filters. These types of filters are also known
as 2CR filters. The 2CR stands for two capacitors and two resistors. These electronic filters, although
still accepted and recognized by international standards, do suffer from phase distortion caused by the
nature of their electronic components. To remove this effect, we have another type of filter called a
2CR PC filter. The PC in this case stands for phase-corrected. This type of filter suffers from less dis-
tortion than the 2CR but is still an electronic filter and, as such, still suffers from some distortion.
Modern instruments use phase-correct filters such as the Gaussian filter. These filters drastically
reduce filter distortion, although they can only be implemented where the filtering is done by
mathematical algorithms in computer-based processing. On most modern computer-based instruments,
analog filters are digitally simulated so that correlation between new and old instruments
can be made.
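As a concrete illustration of phase-correct Gaussian filtering, the sketch below implements the standard Gaussian weighting function s(x) = (1/(αλc)) exp(−π(x/(αλc))²) with α = √(ln 2/π), which transmits exactly 50% of a sinusoid at the cut-off wavelength λc. This is a generic textbook construction, not code from any particular instrument; the sampling step, cut-off and synthetic profile are illustrative.

```python
import math

def gaussian_mean_line(z, dx, cutoff):
    """Waviness (mean) line by phase-correct Gaussian filtering.
    Weighting: s(x) = (1/(a*lc)) * exp(-pi*(x/(a*lc))**2), a = sqrt(ln2/pi).
    Roughness is then the residual z - mean line.  The kernel is truncated
    at +/- one cutoff and renormalized, which also handles the profile ends."""
    a = math.sqrt(math.log(2.0) / math.pi)
    half = int(round(cutoff / dx))
    w = [math.exp(-math.pi * ((k * dx) / (a * cutoff)) ** 2)
         for k in range(-half, half + 1)]
    n = len(z)
    mean = []
    for i in range(n):
        num = den = 0.0
        for k in range(-half, half + 1):
            j = i + k
            if 0 <= j < n:
                num += w[k + half] * z[j]
                den += w[k + half]
        mean.append(num / den)
    return mean

# Separate a 4 mm waviness component from 0.08 mm roughness marks
# using the common 0.8 mm cutoff:
dx, lc = 0.002, 0.8                                  # mm
z = [0.005 * math.sin(2 * math.pi * i * dx / 4.0)
     + 0.001 * math.sin(2 * math.pi * i * dx / 0.08)
     for i in range(2000)]
waviness = gaussian_mean_line(z, dx, lc)
roughness = [zi - wi for zi, wi in zip(z, waviness)]
```

Selecting a different cut-off simply changes which wavelengths end up in the mean line and which remain in the residual, which is exactly the roughness/waviness split described above.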

Roughness Width Cut-Off Roughness width cut-off is the greatest spacing of respective sur-
face irregularities to be included in the measurement of the average roughness height. It should always
be greater than the roughness width in order to obtain the total roughness height rating.
In basic terms, a cut-off is a filter and is used as a means of separating or filtering the wavelengths
of a component. Cut-offs have a numerical value which when selected reduce or remove the unwanted
wavelengths on the surface. For example, a roughness filter cut-off with a numeric value of 0.8 mm
will allow wavelengths below 0.8 mm to be assessed with wavelengths above 0.8 mm being reduced in
amplitude; the greater the wavelength, the more severe the reduction. For a waviness filter cut-off with
a numeric value of 0.8 mm, wavelengths above 0.8 mm will be assessed with wavelengths below 0.8 mm
being reduced in amplitude.
There is a wavelength at which a filter is seen to have some pre-determined attenuation (e.g., 50%
for a Gaussian filter). In roughness measurement there are two different filters: a long-wavelength
filter Lc, and a short-wavelength filter Ls which suppresses wavelengths shorter than those of interest.
There are internationally recognized cut-offs of varying lengths: 0.08 mm, 0.25 mm, 0.8 mm, 2.5 mm
and 8 mm.
In general, you select a roughness cut-off in order to assess the characteristics of the surface you
require. These are usually the process marks or witness marks produced by the machining process. To
produce a good statistical analysis of these process marks, you would normally select a cut-off in the
order of 10 times the wavelengths under consideration. These wavelengths may be the turning marks
on the component.

Note: cut-offs should be determined by the nature of the component and not by the length of the
component. Choosing the wrong cut-off will, in some cases, severely affect the outcome of the result.
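The 10× rule of thumb above, combined with the internationally recognized cut-off series, can be expressed as a small helper. This merely encodes the heuristic from the text; the 0.06 mm feed-mark example is illustrative.

```python
STANDARD_CUTOFFS_MM = (0.08, 0.25, 0.8, 2.5, 8.0)

def select_cutoff(feature_wavelength_mm, factor=10.0):
    """Pick the smallest standard cutoff that is at least `factor` times
    the dominant process-mark wavelength (the 10x rule of thumb)."""
    target = factor * feature_wavelength_mm
    for lc in STANDARD_CUTOFFS_MM:
        if lc >= target:
            return lc
    return STANDARD_CUTOFFS_MM[-1]

# Turning marks at a 0.06 mm feed: 10 x 0.06 = 0.6 mm -> 0.8 mm cutoff
print(select_cutoff(0.06))  # 0.8
```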

Sample Length After the data has been filtered with a cut-off, we then sample it. Sampling is done
by breaking the data into equal sample lengths. The sample lengths (as shown in Fig. 10.4) have the
same numeric value as the cut-off; in other words, if you use a 0.8 mm cut-off, the filtered data
will be broken into 0.8 mm sample lengths. These sample lengths are chosen so that
a good statistical analysis can be made of the surface. In most cases, five sample lengths are used for
analysis.

Fig. 10.4 Sample length and assessment length (the traverse length comprises a run-up, the sampling
lengths (cut-offs) that make up the assessment (evaluation) length, and an over-travel)

Assessment Length An assessment length is the amount of data left after filtering that is then
used for analysis. The measurement length is dictated by the numerical value of the cut-off, which itself
is dictated by the type of surface under inspection. Typically, a measurement may consist of a traverse
of 6–7 times the cut-off selected. For example, 7 cut-offs at 0.8 mm = 5.6 mm. One or two cut-offs
will then be removed according to the filter type and the remaining cut-offs used for assessment. This
only applies when measuring roughness. For measuring waviness or primary profiles, the data length is
chosen according to application and the nature of the surface. In general, the data length needs to be
sufficient to give a true representation of the texture of the surface.
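The relationships above (an evaluation length of five sample lengths, each equal to one cut-off, within a traverse of about seven cut-offs including run-up and over-travel) can be sketched numerically. The helper below is purely illustrative; the function name and defaults follow this text, not any standard library:

```python
# Illustrative helper relating cut-off, evaluation length and traverse length
# (five sample lengths and a seven cut-off traverse, as described above;
# the function name and defaults are this sketch's, not a standard's).

def trace_lengths(cutoff_mm, n_samples=5, traverse_cutoffs=7):
    """Return (evaluation_length, traverse_length) in mm."""
    evaluation = n_samples * cutoff_mm       # data actually assessed
    traverse = traverse_cutoffs * cutoff_mm  # includes run-up and over-travel
    return evaluation, traverse

print(trace_lengths(0.8))   # roughly (4.0, 5.6) for a 0.8 mm cut-off
```

For the common 0.8 mm cut-off this reproduces the 5.6 mm traverse quoted in the next paragraph.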

Arithmetic Average (AA) A close approximation of the arithmetic average roughness-


height can be calculated from the profile chart of the surface. Electronic instruments using appro-
priate circuitry through a meter or chart recorder may also automatically perform averaging from a
mean centreline. If X is the measured value from the profilometer, then the AA value and the root
mean square (rms) value can be calculated as shown in Table 10.1. The rms value is numerically about
11% higher than the AA value.

Table 10.1 Arithmetic average

X        X²
3          9
15       225
20       400
33      1089
25       625
18       324
5         25
10       100
15       225
15       225
5         25
11       121
14       196
13       169
27       729
8         64
Total 237       4551

AA = 237/16 ≈ 14.8 micro in
RMS = (4551/16)^(1/2) ≈ 16.9 micro in
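The Table 10.1 calculation can be reproduced in a few lines of Python (illustrative code; the variable names are this sketch's, and the sixteen readings listed in the table sum to 237):

```python
# Sketch of the Table 10.1 calculation: AA is the mean of the measured
# heights X and rms is the square root of the mean of X^2 (readings in
# micro-inches; variable names are illustrative).

readings = [3, 15, 20, 33, 25, 18, 5, 10, 15, 15, 5, 11, 14, 13, 27, 8]

aa = sum(readings) / len(readings)                           # readings sum to 237
rms = (sum(x * x for x in readings) / len(readings)) ** 0.5

print(f"AA  = {aa:.1f} micro in")    # 14.8
print(f"rms = {rms:.1f} micro in")   # 16.9, about 14% higher for this data
```

For this particular data set the rms exceeds the AA by about 14%; the 11% figure quoted in the text is the theoretical ratio for a sinusoidal profile.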

10.3 FACTORS AFFECTING SURFACE FINISH IN MACHINING

Whenever two machined surfaces come in contact with one another, the quality of the mating parts
plays an important role in their performance and wear. The height, shape, arrangement and direction
of these surface irregularities on the workpiece depend upon a number of factors:

A. The machining variables


a. Cutting speed
b. Feed
c. Depth of cut

B. The tool geometry


The design and geometry of the cutting tool also plays a vital role in determining the quality of
the surface. Some geometric factors which affect achieved surface finish include
a. Nose radius
b. Rake angle

c. Side cutting-edge angle


d. Cutting edge
C. Workpiece and tool material combination and their mechanical properties
D. Quality and type of the machine tool used
E. Auxiliary tooling and lubricant used, and
F. Vibrations between the workpiece, machine tool and cutting tool

The final surface roughness might be considered as the sum of two independent effects:

1. The ideal surface roughness is a result of the geometry of tool and feed rate, and
2. The natural surface roughness is a result of the irregularities in the cutting operation.
[Boothroyd and Knight, 1989].
Factors such as spindle speed, feed rate and depth of cut that control the cutting operation can be
set up in advance. However, factors such as tool geometry, tool wear, chip loads and chip formations,
or the material properties of both tool and workpiece, are uncontrolled (Huynh and Fan, 1992). Moreover,
the occurrence of chatter or vibrations of the machine tool, defects in the structure of the work
material, wear of the tool, or irregularities of chip formation contribute to surface damage in practice
during machining (Boothroyd and Knight, 1989).

10.3.1 Ideal Surface Roughness


Ideal surface roughness is a function of only feed and geometry. It represents the best possible finish
which can be obtained for a given tool shape and feed. It can be achieved only if the built-up-edge,
chatter and inaccuracies in the machine tool movements are eliminated completely. For a sharp tool
without nose radius, the maximum height of unevenness is given by
Rmax = f / (cot φ + cot β)
The surface roughness value is given by:
Ra = Rmax / 4
Fig. 10.5 Idealized model of surface roughness (f = feed, φ = major cutting-edge angle,
β = working minor cutting-edge angle)

Practical cutting tools are usually provided with a rounded corner, and Fig. 11.5 shows the surface pro-
duced by such a tool under ideal conditions. It can be shown that the roughness value is closely related
to the feed and corner radius by the following expression:
Ra = 0.0321 f² / r
where, r is the corner radius.
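The two formulas above can be checked numerically. The sketch below assumes angles in degrees and feed and radius in consistent units (e.g., mm); the function names are illustrative, not from the text:

```python
# Numerical sketch of the two ideal-roughness formulas above. Units are
# consistent (feed f in mm/rev, radius r in mm, Ra in mm); angles in degrees.
# Function names are this sketch's, not the book's.
import math

def ra_sharp_tool(f, phi_deg, beta_deg):
    """Ra = Rmax/4, with Rmax = f / (cot(phi) + cot(beta)) for a sharp tool."""
    cot = lambda a_deg: 1.0 / math.tan(math.radians(a_deg))
    r_max = f / (cot(phi_deg) + cot(beta_deg))
    return r_max / 4.0

def ra_round_nose(f, r):
    """Ra = 0.0321 f^2 / r for a tool with corner radius r."""
    return 0.0321 * f * f / r

# e.g., a 0.25 mm/rev feed with a 0.8 mm nose radius gives Ra of roughly 2.5 um:
print(ra_round_nose(0.25, 0.8) * 1000)   # Ra in micrometres
```

Both formulas show the strong (quadratic, for the round nose) influence of feed on the ideal finish, which is why reducing feed is the usual first step when a finer turned surface is required.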

10.3.2 Natural Roughness


In practice, it is not usually possible to achieve conditions such as those described above, and normally
the natural surface roughness forms a large proportion of the actual roughness. One of the main
factors contributing to natural roughness is the occurrence of a built-up edge. Thus, the larger the
built-up edge, the rougher the surface produced; factors tending to reduce chip-tool friction and
to eliminate or reduce the built-up edge give improved surface finish.
The measurement of surface roughness is defined by a collection of international standards. These
standards cover characteristics of the measurement equipment as well as outline the mathematical
definitions of the many parameters used today. This chapter discusses some of the key issues in this
important field. The roughness of a surface can be measured in different ways, which are classified into
three basic categories:

1. Statistical Descriptors These give the average behavior of the surface height. For example,
average roughness Ra; the root mean square roughness Rq; the skewness Sk and the kurtosis K.

2. Extreme Value Descriptors These depend on isolated events. Examples are the maximum
peak height Rp, the maximum valley depth Rv, and the maximum peak-to-valley height Rmax.

3. Texture Descriptors These describe variations of the surface based on multiple events. An
example for this descriptor is the correlation length.
Among these descriptors, the Ra measure is one of the most effective surface-roughness mea-
sures commonly adopted in general engineering practice. It gives a good general description of the
height variations in the surface. Figure 10.6 shows a cross section through the surface. A mean line
is first found that is parallel to the general surface direction and divides the surface in such a way
that the sum of the areas formed above the line is equal to the sum of the areas formed below the
line. The surface roughness Ra is now given by the sum of the absolute values of all the areas above
and below the mean line divided by the sampling length. Therefore, the surface roughness value is
given by
Ra = [area(abc) + area(cde)] / f
where f is the feed.

Fig. 10.6 A cross-section through the surface: (a) tool and workpiece, showing the feed f, the
working major cutting-edge angle and the working minor cutting-edge angle; (b) the idealized
profile, with peak height Rmax and the areas abc and cde about the mean line

Table 10.2 Range of surface roughness (Ra in μm)

Methods                              Manufacturing Process                      'Ra' values in μm

Metal-removal processes              Turning                                    0.32 to 25
                                     Milling                                    0.8 to 6.3
                                     Drilling                                   1.6 to 20
                                     Boring                                     0.4 to 6.3
                                     Reaming                                    0.4 to 3.2
                                     Planing                                    1.6 to 50
                                     Shaping                                    1.6 to 12.5
                                     Broaching                                  0.4 to 3.2

Finishing and super-finishing        Honing                                     0.25 to 0.4
processes                            Lapping                                    0.012 to 1.16
                                     Cylindrical grinding                       0.068 to 5
                                     Burnishing                                 0.04 to 0.8
                                     Polishing                                  0.04 to 0.16
                                     Super finishing                            0.16 to 0.32

Non-conventional material-removal    Ultrasonic machining                       0.2 to 3.2
processes                            Abrasive jet machining                     0.1 to 1.6
                                     Electric discharge machining (finishing)   0.5 to 6
                                     Electron beam machining                    0.4 to 6
                                     Plasma arc machining                       3.2 to 25
                                     Electrochemical machining                  0.05 to 3.2
                                     Chemical machining                         0.2 to 6

Forming processes                    Forging                                    1.6 to 25
                                     Sawing                                     1.6 to 2.5
                                     Extrusion                                  0.16 to 5
                                     Rolling                                    2.5 to 50

Casting processes                    Sand casting                               5 to 50
                                     Die casting                                0.8 to 16
                                     Investment casting                         1.6 to 2.3
                                     Permanent-mould casting                    0.8 to 3.2

10.4 SURFACE-ROUGHNESS MEASUREMENT METHODS

With an increase in globalization, it has become even more important to control the comparability of
results from different sources. Stylus instruments have been used in the assessment of surface texture for
some sixty years. Initially, simple analog instruments were used, employing an amplifier, chart recorder
and meter to give graphical and numerical output. Analog filters (simple electronic R-C circuits) were
used to separate the waviness and roughness components of the texture, so the measurement bandwidth
depended on the electronics of the particular instrument. In order to address this issue, ISO introduced
the concept of 'bandwidth' in the late 1990s. Under this concept, the shorter wavelengths used in
surface-roughness analysis are constrained by a short-wave filter (known as the s-filter; refer ISO
3274:1996). The bandwidth is then limited in a controlled way that relates directly to surface features,
rather than being limited by the (electrical) bandwidth of the measuring system.
Inspection and assessment of surface roughness of machined workpieces can be carried out by
means of different measurement techniques. These methods can be ranked into the following classes.

10.4.1 Comparison-Based Methods


In the past, surface texture was assessed by an inspector who used either his/her eye or even a
fingernail to inspect the surface. In order to put a number to the surface texture, we need to use a
more accurate means of measurement. Comparison techniques use specimens of surface roughness produced
by the same process, material and machining parameters as the surface to be compared. Visual and
tactile senses are used to compare a specimen with a surface of known surface finish. Because of the
subjective judgment involved, this method is useful for surface roughness of about Rq 1.6 micron.

10.4.2 Direct Measurement Methods


Direct methods assess surface finish by means of stylus-type devices. Measurements are obtained using
a stylus drawn along the surface to be measured—the stylus motion perpendicular to the surface is
registered. This registered profile is then used to calculate the roughness parameters. This method
requires interruption of the machine process, and the sharp diamond stylus may make micro-scratches
on surfaces.

1. A Typical Stylus Probe Surface-Measuring Instrument It consists of a stylus


with a small tip (fingernail), a gauge or transducer, a traverse datum and a processor. The surface is
measured by moving the stylus across the surface. As the stylus moves up and down along the surface,
the transducer converts this movement into a signal, which is then exported to a processor that
converts this into a number and usually a visual profile.
For correct data collection, the gauge needs to pass over the surface in a straight line such that only
the stylus tip follows the surface under test. This is done using a straightness datum. This can consist
of some form of datum bar that is usually lapped or precision ground to a high straightness tolerance.
On small portable instruments, this is not always a good option and can add to the expense of the
instrument. In these cases, it is possible to use an alternative means of datum. This part of the stylus
probe-type instrument is known as skid.
A skid is a part of the gauge that has a radius large enough to prevent it moving in and out of the
roughness characteristics of the surface. The stylus and the skid are usually independent in their
height (Z) movement but move together in the traverse (measurement) direction (X). Surface deviations
are recorded as the difference between the stylus and the skid movement in the Z direction. In other
words, the skid acts as the straightness datum; it 'skids' over the top of the surface.

Fig. 10.7 Skid
A skid is designed in such a way that it passes over a component’s surface without falling into its
valleys (roughness). However, wavelengths greater than the diameter of the skid will not register due to
the skid falling in and out of these wavelengths (waviness). Therefore, waviness measurement should
be avoided when using a skid-based instrument.
Fig. 10.8 Stylus motion and measurement direction (stylus motion Z, traverse direction X)

2. Tomlinson Surface Meter The instrument is named after its designer, Dr Tomlinson. It is
comparatively economical and reliable and uses the mechano-optical magnification method. The body of
the instrument carries the skid unit, whose height is adjusted to enable the diamond-tipped stylus to
be conveniently positioned. All motions of the stylus except the vertical one are restricted by a leaf
spring and a coil spring, as shown in Fig. 10.8. The tension in the coil spring causes a similar
tension in the leaf spring; together they maintain the balance that holds a lapped cross roller in
position between the stylus and a pair of parallel fixed rollers, as shown in the plan view. A light
spring-steel arm attached to the cross roller carries a diamond at its tip, which bears against a
smoked-glass screen. During the actual measurement of surface finish, the instrument body is drawn
across the surface by a screw rotated at 1 r.p.m. by a synchronous motor while the glass remains
stationary. The surface irregularities cause the stylus to move in the vertical direction, which makes
the cross roller pivot about a specific point. This magnifies the movement of the arm carrying the
scriber and produces a trace on the smoked-glass screen. The trace can be further magnified at 50X or
100X by an optical projector for examination.

3. The Taylor-Hobson 'Talysurf' It is a dynamic electronic instrument used on the factory
floor as well as in the laboratory. It gives very rapid output as compared with the Tomlinson surface
meter. The measuring head of the instrument, shown in Fig. 10.9, consists of a stylus and a skid,
which have to be drawn across the surface under inspection by means of a motorized driving unit.

Fig. 10.9 Talysurf principle ('E'-shaped stamping with armature, skid and stylus)

The arm carrying the stylus (a diamond stylus of about 0.002 mm tip radius) forms an armature, which
pivots about the centre element (leg) of the E-shaped stamping. The other two elements (legs) of the
E-shaped stamping carry coils with ac current. These two coils, along with the other two resistances,
form an oscillator. As the armature is pivoted about the centre element, any movement of the stylus
causes a variation of the air gap, and the amplitude of the original ac current flowing through the
coil is modulated. The output (modulated signal) of the bridge is further demodulated so that the
current flow is directly proportional to the vertical displacement of the stylus (refer Fig. 10.10).
This output causes a pen recorder to produce a permanent record.

Fig. 10.10 Talysurf schematic layout (oscillator, amplifier, demodulator, filter and meter/recorder;
the carrier is modulated by the stylus movement, then demodulated and smoothed)

Nowadays microprocessor-based surface-roughness measuring instruments are used. One such instrument,
'MarSurf', is shown in Fig. 10.11 along with its specifications, to illustrate the capabilities of
such an instrument, viz., digital output and print-outs of the form of the surface under consideration.

Fig. 10.11 MarSurf


(Courtesy, Mahr GmbH, Esslingen)

(Specifications • Measuring range of up to 150 μm • Units μm/μin selectable • Standards: DIN/


ISO/JIS and CNOMO (Motif) selectable • Tracing lengths as per DIN EN ISO 4288/ASME: 1.75 mm,
5.6 mm, 17.5 mm (.07 in, .22 in .7 in); as per EN ISO 12085:1 mm, 2 mm, 4 mm, 8 mm, 12 mm, 16 mm
• Number of sampling lengths selectable from 1 to 5 • Automatic selection of filter and tracing
length conforming to standards • Phase-corrected profile filter as per DIN EN ISO 11562 • Cut-off
0.25 mm/0.80 mm/2.50 mm .010 in/.032 in/.100 in • Short cut-off selectable • Parameters as per
DIN/ISO/SEP: Ra, Rz, Rmax, Rp, Rq, Rt, R3z, Rk, Rvk, Rpk, Mr1, Mr2, Mr, Sm, RPc; as per JIS: Ra,
Rz, Ry, Sm, S, tp; Motif parameters: R, Rx, Ar, W, CR, CF, CL (three-zone measurement) • Tolerance
monitoring in display and measuring record • Automatic or selectable scaling • Printing of R-profile
(ISO/JIS), P-profile (MOTIF), material ratio, curve, measuring record • Output of date and/or time of
the measurements • Integrated memory for the results of approx. 200 measurements • Storage facility
on PCMCIA memory card for results, profiles, and measuring programs • Dynamic pick-up calibration
• Blocking of instrument settings for preventing unintentional readings)

10.4.3 Non-Contact Methods


There has been some work done to measure surface roughness using non-contact techniques; an
electronic speckle correlation method is given here as an example. When coherent light illuminates a rough
surface, the diffracted waves from each point of the surface mutually interfere to form a pattern, which
appears as a grain pattern of bright and dark regions. The spatial statistical properties of this speckle image
can be related to the surface characteristics. The degree of correlation of two speckle patterns produced
from the same surface by two different illumination beams can be used as a roughness parameter.
Figure 10.12 shows the measurement principle. A rough surface is illuminated by a monochromatic
plane wave having an angle of incidence α with respect to the normal to the surface; multiscattering
and shadowing effects are neglected.

Fig. 10.12 (a) The measurement principle of the non-contact technique (b) Lasercheck non-contact
surface-roughness measurement gauge [packaged with a compact 76 mm x 35 mm x 44 mm portable head
that has a mass of only 0.45 kg. The device will perform for years with no maintenance and no
fragile and expensive stylus tip to protect or replace. The system performs measurements in a
fraction of a second, over a range of 0.006 µm to greater than 2.54 µm Ra roughness]

The photosensor of a CCD camera placed in the focal plane of
a Fourier lens is used for recording speckle patterns. Assuming Cartesian coordinates x, y, z , a rough
surface can be represented by its ordinates Z (x, y) with respect to an arbitrary datum plane having
transverse coordinates (x, y). Then the r. m. s. surface roughness can be defined and calculated.

10.4.4 On-process Measurement


Many methods have been used to measure surface roughness in process.

1. Machine Vision In this technique, a light source is used to illuminate the surface with a digital
system to view the surface, and the data is sent to a computer to be analyzed. The digitized data is then
used with a correlation chart to get actual roughness values.

2. Inductance Method An inductance pickup is used to measure the distance between the sur-
face and the pickup. This measurement gives a parametric value that may be used to give a comparative
roughness. However, this method is limited to measuring magnetic materials.

3. Ultrasound A spherically focused ultrasonic sensor is positioned above the surface at a
non-normal angle of incidence. The sensor sends out an ultrasonic pulse towards the surface, and the
reflected signal is passed to a personal computer for analysis and calculation of roughness parameters.

10.5 PRECAUTIONS FOR SURFACE-ROUGHNESS MEASUREMENT

1. Securing the workpiece depends on the component size and weight. In most cases, very light stylus
forces are used to measure surface finish, and if possible clamping is avoided. If clamping is necessary
then the lightest restraint should be used.
2. It is best to level the surface to minimize any error. However, on most computer-based measuring sys-
tems, it is possible to level the surface after measuring by using software algorithms. Some instruments
have wide gauge ranges, and in these circumstances leveling may not be so critical because the compo-
nent stays within the gauge range. For instruments with small gauge ranges, leveling may be more critical.
However, in all circumstances, leveling the part prior to measurement is usually the best policy.
Soft and easily marked surfaces present a problem: how can they be measured without damage? There are
two ways of overcoming it. One is to use non-contact-type measuring instruments such as those with
laser or optical-type transducers; however, some of these instruments can be limited in certain
applications. If you need to use a stylus-type instrument, then a replica of the surface can be
produced, allowing contact to be made with the replica.
The stylus tip can have an effect on the measurement results. It can act as a mechanical filter. In
other words, a large stylus tip will not fall down a narrow imperfection (high frequency roughness). The
larger the stylus, the more these shorter wavelengths will be reduced. A good example of a typical stylus
would be a 90° conisphere-shaped stylus with a tip radius of 2 μm (0.00008"). This will be suitable for
most applications. Other stylus tip sizes are available and are component dependent in their use. For
example, for very small imperfections, a small stylus radius may be used.

Effects of the Stylus Tip The stylus tip radius is a key feature that is often overlooked.
Assuming that a conisphere stylus is being used, the profile recorded by the instrument will in effect be the
locus of the centre of a ball, whose radius is equal to that of the stylus tip, as it is rolled over the surface.
This action broadens the peaks of the profile and narrows the valleys. For simplicity, if we consider the
surface to be a sine wave then this distortion is dependent both on the wavelength and the amplitude.
For a given wavelength (of similar order of size to the stylus tip), the stylus tip will be unable to
reach the troughs of the sine wave if the amplitude is greater than a maximum limiting value. For
amplitudes above this limiting value, the measured peak-to-peak amplitude values will be attenu-
ated. It is worth mentioning in passing that the stylus tip also introduces distortion into other
parameters, because the sinusoidal shape of the surface is not preserved in the measured profile
(refer Fig. 10.13). This can lead to discrepancies between measurements taken with different stylus
radii, and so it is important to state the stylus tip size whenever this differs from the ISO recom-
mendations. Of course, the situation will be even more complicated for more typical engineering
surfaces.

Fig. 10.13 Distortion due to finite stylus size

10.6 SURFACE TEXTURE PARAMETERS

The purpose of a parameter is to generate a number that can characterize a certain aspect of the sur-
face with respect to a datum, removing the need for subjective assessment. However, it is impossible
to completely characterize a surface with a single parameter. Therefore, a combination of parameters is
normally used. Parameters can be separated into three basic types:

a. Amplitude Parameters These are measures of the vertical characteristics of the surface
deviations.

b. Spacing Parameters These are measures of the horizontal characteristics of the surface
deviations.

c. Hybrid Parameters These are a combination of both the vertical and horizontal character-
istics of the surface deviations

10.6.1 Amplitude Parameters


Rsk – It is a measurement of skewness and will indicate whether the surface consists of mainly
peaks, valleys or an equal combination of both. It is the measure of the symmetry of the profile about
the mean line. A surface with predominant peaks will be considered as ‘positive skew’, and a surface
with predominant valleys will be considered as ‘negative skew’. Negative skew, for example, is desirable
where oil retention is required. Positive skew may be desirable where adhesion is required.

Fig. 10.14 Associated parameter Rku (kurtosis), illustrating profiles with Rku < 3, Rku = 3 and Rku > 3

Rku– ‘Kurtosis’ is a measure of the sharpness of the surface profile.


[Wsk, Wku, Psk and Pku are the corresponding parameters from the waviness and primary profiles
respectively.]

Rz (JIS) It is also known as the ISO 10-point height parameter in ISO 4287/1-1984. It is numer-
ically the average height difference between the five highest peaks and the five lowest valleys within the
sampling length.
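A minimal sketch of the ten-point idea follows, assuming the profile heights are measured from the mean line and that simple three-point local extrema are an adequate peak/valley definition (a real instrument applies stricter rules; the function name is illustrative):

```python
# Minimal sketch of the ten-point height: average of the five highest local
# peaks minus average of the five lowest local valleys, heights measured from
# the mean line. (Three-point extrema; real instruments are stricter.)

def rz_jis(profile):
    peaks, valleys = [], []
    for i in range(1, len(profile) - 1):
        prev, cur, nxt = profile[i - 1], profile[i], profile[i + 1]
        if cur > prev and cur > nxt:
            peaks.append(cur)        # local maximum
        elif cur < prev and cur < nxt:
            valleys.append(cur)      # local minimum
    top5 = sorted(peaks, reverse=True)[:5]
    bottom5 = sorted(valleys)[:5]
    return sum(top5) / len(top5) - sum(bottom5) / len(bottom5)

# illustrative profile (heights from the mean line, arbitrary units)
profile = [0, 5, 0, -4, 0, 6, 0, -5, 0, 4, 0, -6, 0, 3, 0, -3, 0, 7, 0, -7, 0, 2, 0, -2, 0]
print(rz_jis(profile))   # 10.0
```

Because it averages five peaks and five valleys, the ten-point height is less sensitive to a single freak scratch than a pure peak-to-valley measure.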

Rz and Rtm Rz = Rp (peak roughness) + Rv (depth of the deepest valley in the roughness), and is the
maximum peak-to-valley height of the profile in a single sampling length.
Rtm is the equivalent of Rz when more than one sample length is assessed: it is the sum of the Rp + Rv
values in each sample length divided by the number of sample lengths.
Rz1max is the largest of the individual peak-to-valley heights from each sample length.

R3y, R3z R3z is the vertical mean from the third highest peak to the third lowest valley in a sample
length, evaluated over the assessment length (DB N311007, 1983).

Ra—Average Roughness This is also known as Arithmetic Average (AA), Centre Line Average
(CLA), and Arithmetical Mean Deviation of the profile. The average roughness is the area between the
roughness profile and its mean line, or the integral of the absolute value of the roughness profile height
over the evaluation length:
Ra = (1/L) ∫0^L |r(x)| dx

When evaluated from digital data, the integral is normally approximated by a trapezoidal rule:
Ra = (1/N) Σ(n=1..N) |r_n|

Graphically, the average roughness is the area (shown below) between the roughness profile and its
centreline divided by the evaluation length (normally, five sample lengths with each sample length equal
to one cut-off ):

Fig. 10.15 Average roughness (Ra is an integral of the absolute value of the roughness profile.
It is the shaded area divided by the evaluation length L. Ra is the most commonly used roughness
parameter.)

The average roughness is by far the most commonly used parameter in surface-finish measurement.
The earliest analog roughness-measuring instruments measured only Ra by drawing a stylus continu-
ously back and forth over a surface and integrating (finding the average) electronically. It is fairly easy
to take the absolute value of a signal and to integrate a signal using only analog electronics. That is the
main reason Ra has such a long history.
It is a common joke in surface-finish circles that ‘RA’ stands for regular army, and ‘Ra’ is also the
chemical symbol for radium; only Ra is the average roughness of a surface. This emphasizes that the
‘a’ is a subscript. Older names for Ra are CLA and AA meaning centreline average and area aver-
age.
An older means of specifying a range for Ra is RHR. This is a symbol on a drawing specifying a
minimum and maximum value for Ra.
RHR max/min, e.g., RHR 10/20
(Older drawings may have used this notation to express an allowable range for Ra. This notation is
now obsolete.)
For example, the second symbol above means that Ra may fall between 10 μin and 20 μin. Ra does not
give all the information about a surface. For example, Fig. 10.16 shows three surfaces that all have the
same Ra, but you need no more than your eyes to know that they are quite different surfaces. In some
applications they will perform very differently as well.

Fig. 10.16 Three surfaces all have the same Ra, even though the eye immediately distinguishes
their different general shapes

These three surfaces differ in the shape of the profile—the first has sharp peaks, the second deep
valleys, and the third has neither. Even if two profiles have similar shapes, they may have a different
spacing between features. In Fig. 10.17 too, the three surfaces all have the same Ra.
If we want to distinguish between surfaces that differ in shape or spacing, we need to calculate other
parameters for a surface that measure peaks and valleys and profile shape and spacing. The more com-
plicated the shape of the surface we want and the more critical the function of the surface, the more
sophisticated we need to be in measuring parameters beyond Ra.

Rq Root-Mean-Square Roughness The root-mean-square (rms) average roughness of a


surface is calculated from another integral of the roughness profile:
Fig. 10.17 The same Ra value

Rq = [ (1/L) ∫0^L r²(x) dx ]^(1/2)

The digital equivalent normally used is


Rq = [ (1/N) Σ(n=1..N) r_n² ]^(1/2)

For a pure sine wave of any wavelength and amplitude, Rq is proportional to Ra; it’s about 1.11 times
larger. Older instruments made use of this approximation by calculating Rq with analog electronics
(which is easier than calculating with digital electronics) and then multiplying by 1.11 to report Rq.
However, real profiles are not simple sine waves, and the approximation often fails miserably. Modern
instruments either digitize the profile or do not report Rq. There is never any reason to make the
approximation that Rq is proportional to Ra.
Rq has now been almost completely superseded by Ra in metal machining specifications. Rq still has
value in optical applications where it is more directly related to the optical quality of a surface.
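The 1.11 factor quoted above is easy to verify numerically for a sampled sine wave; the exact ratio is π/(2√2) ≈ 1.1107 (illustrative code, not from the book):

```python
# For a pure sine wave, Rq/Ra = pi/(2*sqrt(2)) ≈ 1.11, the factor quoted above.
# Quick numerical check on one full period of a sampled sine profile:
import math

n = 100_000
profile = [math.sin(2 * math.pi * i / n) for i in range(n)]

ra = sum(abs(z) for z in profile) / n            # arithmetic average roughness
rq = math.sqrt(sum(z * z for z in profile) / n)  # rms roughness

print(round(rq / ra, 4))   # 1.1107
```

Repeating the check on a non-sinusoidal profile (say, a sawtooth or a real trace) gives a different ratio, which is exactly why the analog-era approximation fails on real surfaces.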

Rt , Rp , and Rv The peak roughness Rp is the height of the highest peak in the roughness profile
over the evaluation length (p1 below). Similarly, Rv is the depth of the deepest valley in the roughness
profile over the evaluation length (v1). The total roughness, Rt, is the sum of these two, or the vertical
distance from the deepest valley to the highest peak.
Rv = |min r(x)|, 0 < x < L
Rp = max r(x), 0 < x < L

Fig. 10.18 Rt, Rp and Rv

Rt = Rp + Rv

These three extreme parameters will succeed in finding unusual conditions: a sharp spike or burr on
the surface that would be detrimental to a seal for example, or a crack or scratch that might be indicative
of poor material or poor processing.

Rtm, Rpm and Rvm These three parameters are mean parameters, meaning they are averages of the
sample lengths. For example, define the maximum height for the i-th sample length as Rpi. Then Rpm is
Rpm = (1/M) Σ(i=1..M) Rpi

Similarly,
Rvm = (1/M) Σ(i=1..M) Rvi

and
Rtm = (1/M) Σ(i=1..M) Rti = Rpm + Rvm

where Rvi is the depth of the deepest valley in the i th sample length and Rti is the sum of Rvi and Rpi:

Rvi = |min r(x)|, il < x < (i+1)l
Rpi = max r(x), il < x < (i+1)l
Rti = Rpi + Rvi

These three parameters have some of the same advantages as Rt, Rp, and Rv for finding extremes in
the roughness, but they are not so sensitive to single unusual features.
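The extreme and mean parameters just defined can be sketched together in a few lines, assuming the profile heights are given relative to the mean line and split into equal sample lengths (function names are this sketch's, not the book's):

```python
# Illustrative sketch of the extreme parameters Rp, Rv, Rt over the whole
# evaluation length, and the mean parameters Rpm, Rvm over equal sample
# lengths (heights relative to the mean line).

def extreme_params(profile):
    rp = max(profile)          # highest peak, Rp
    rv = -min(profile)         # deepest valley depth, Rv
    return rp, rv, rp + rv     # (Rp, Rv, Rt)

def mean_params(profile, n_samples=5):
    size = len(profile) // n_samples
    rpm = rvm = 0.0
    for i in range(n_samples):
        chunk = profile[i * size:(i + 1) * size]
        rpm += max(chunk)
        rvm += -min(chunk)
    return rpm / n_samples, rvm / n_samples   # (Rpm, Rvm); Rtm = Rpm + Rvm

profile = [1, -2, 3, -1, 2, -3, 1, -1, 4, -1, 0, -2, 1, -5, 2, -1, 3, -2, 5, -4]
print(extreme_params(profile))   # (5, 5, 10)
print(mean_params(profile))      # (3.2, 3.2)
```

Note how the single deep valley (-5) dominates Rv but is diluted in Rvm, illustrating why the mean parameters are less sensitive to single unusual features.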

Rymax (or Rmax) Maximum Roughness Height Within a Sample Length


Ry and Rmax are other names for Rti. Rmax is the older American name and Ry is the newer ISO and
American name. For a standard five cut-off trace, there are five different values of Ry. Ry is the maxi-
mum peak-to-lowest-valley vertical distance within a single sample length.
Rymax (ISO) Maximum Ry
Rymax is an ISO parameter that is the maximum of the individual Ry or Rmax (i.e., Rti) values:
Rymax = max[Rti], 1 ≤ i ≤ M

It serves a purpose similar to Rt, but it finds extremes from peak to valley that are nearer to each
other horizontally.
Rz (DIN), i.e., Rz according to the German DIN standard, is just another name for Rtm in the
American nomenclature (over five cut-offs):
Rz[DIN] = Rtm

Rz(ISO) It is the sum of the height of the highest peak plus the lowest valley depth within a sam-
pling length.

Fig. 10.19 Rz (ISO) (the sum of the height of the highest peak plus the lowest valley depth within a sampling length)

R3zi Third Highest Peak to Third Lowest Valley Height The parameter R3zi is the height from the third highest peak to the third lowest valley within one sample length.
R3z Average Third Highest Peak to Third Lowest Valley Height R3z is the average of the R3zi values:

R3z = (1/M) ∑ R3zi (i = 1 to M)

Fig. 10.20 R3zi (third highest peak to third lowest valley height)

R3z has much the same purpose as Rz except that less extreme peaks and valleys are measured.
R3zmax Maximum third highest peak to third lowest valley height
R3zmax is the maximum of the individual R3zi values:
R3zmax = max[R3zi], 1 ≤ i ≤ M

R3z and R3zmax are not defined in national standards, but they have found their way into many high-end instruments. They originated in Germany as a Daimler–Benz standard.
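A rough numerical sketch: if we simply rank the sampled heights within each sample length (a simplification, since instruments identify true local peaks and valleys rather than sorted heights), R3zi is the third highest value minus the third lowest:

```python
import numpy as np

def r3z_values(r, m):
    """Approximate R3zi for each of m sample lengths by ranking sample heights."""
    out = []
    for seg in np.array_split(np.asarray(r, dtype=float), m):
        s = np.sort(seg)
        out.append(float(s[-3] - s[2]))  # third highest minus third lowest
    return out

# One hypothetical sample length of ten heights:
heights = [5, 1, 4, 0, 3, -1, 2, -2, 6, -3]
print(r3z_values(heights, 1))  # [5.0]  (third highest 4, third lowest -1)
```

R3z would then be the mean of this list, and R3zmax its maximum.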

10.6.2 Roughness Spacing Parameters


1. Pc – Peak Count Peak count is a number giving the number of peaks per length of trace in
a profile. For the purpose of calculating Pc a ‘peak’ is defined relative to an upper and lower threshold.
Usually, this is a single number, the ‘peak count threshold’, the distance from a lower threshold up to an
upper threshold, centered on the mean line. A peak must cross above the upper threshold and below
the lower threshold in order to be counted.
Peak count is the number of peaks in the evaluation length divided by the evaluation length, (or to be
picky, by the distance from the beginning of the first peak to the end of the last peak). Pc is thus reported
as peaks/in or peaks/cm. Some instruments allow the thresholds to be centered on a height that differs
from the mean line. This is non-standard but may be convenient. For example, a pair of thresholds that
counts low peaks accompanied by deeper valleys may be appropriate for plateau surfaces.
The value obtained for Pc depends quite heavily on the peak count threshold for most surfaces. Figure 10.22 shows peak count versus threshold for a ground surface and a turned surface as representative samples. For the ground surface, the parameter shows no stability. For the turned surface, there is a bit of flattening out at a threshold of about 40 μin, but even for this surface Pc shows a wide variation with threshold.
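The dual-threshold counting rule can be sketched as a small state machine (a simplified sketch: a peak is counted once the profile has gone below the lower threshold and then above the upper one; dividing the count by the evaluation length then gives peaks/cm or peaks/in):

```python
def peak_count(r, upper, lower):
    """Pc-style peak count: each counted peak must cross below `lower`
    and then above `upper` (thresholds usually centred on the mean line)."""
    count, armed = 0, False
    for z in r:
        if z < lower:
            armed = True          # profile dipped below the lower threshold
        elif z > upper and armed:
            count += 1            # full lower-to-upper excursion: one peak
            armed = False
    return count

# Hypothetical profile heights:
profile = [0, -2, 3, 0, -1, 0.5, -2, 4, 0]
print(peak_count(profile, upper=1.0, lower=-1.5))  # 2
```

Note that the small excursion to 0.5 is not counted: it never reaches the upper threshold, which is exactly how the threshold suppresses minor irregularities.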

Fig. 10.21 Pc—Peak count

Fig. 10.22 Changes in Pc values (peak count per inch versus Pc threshold in μin, for a ground surface with Ra = 5.7 μin and a turned surface with Ra = 20.7 μin)

2. HSC—High Spot Count High spot count, HSC, is similar to peak count except that a peak
is defined relative to only one threshold. High spot count is the number of peaks per cm (or inch) that
cross above a certain threshold. A peak must cross above the threshold and then back below it.
High spot count is commonly specified for surfaces that must be painted. A surface which has protrusions above the paint will obviously give an undesirable finish.
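HSC follows the same counting idea with a single threshold; a sketch under the same assumptions as the peak-count example:

```python
def high_spot_count(r, threshold):
    """HSC: excursions that cross above `threshold` and then back below it."""
    count, above = 0, False
    for z in r:
        if z > threshold and not above:
            above = True        # crossed above the single threshold
        elif z <= threshold and above:
            count += 1          # back below: one complete high spot
            above = False
    return count

# Hypothetical profile heights:
profile = [0, 2, 0, 3, 1, 4, 0]
print(high_spot_count(profile, threshold=1.5))  # 3
```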

3. Sm—Mean Spacing Sm is the mean spacing between peaks, now with a peak defined relative to
the mean line. A peak must cross above the mean line and then back below it.
If the width of each peak is denoted as Si (above) then the mean spacing is the average width of a
peak over the evaluation length:

Fig. 10.23 HSC—High spot count

Fig. 10.24 Sm—Mean spacing

Sm = (1/N) ∑ Sn (n = 1 to N)

Sm is usually reported in μm.
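Numerically, Sm can be approximated from the upward mean-line crossings (a sketch: crossings are located only at sample resolution, with no interpolation, and dx is the assumed point spacing):

```python
import numpy as np

def mean_spacing(r, dx):
    """Sm: average spacing of profile 'peaks', a peak being one excursion
    above the mean line and back below it. dx is the point spacing."""
    r = np.asarray(r, dtype=float) - np.mean(r)       # heights about the mean line
    up = np.flatnonzero((r[:-1] <= 0) & (r[1:] > 0))  # upward mean-line crossings
    if len(up) < 2:
        return None
    return float(np.mean(np.diff(up)) * dx)           # average peak-to-peak width

# Hypothetical profile sampled every 0.5 units:
profile = [0, 1, -1, 1, -1, 1, -1]
print(mean_spacing(profile, dx=0.5))  # 1.0
```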

4. λa—Average Wavelength The average wavelength of the surface is defined as follows:

λa = 2π (Ra/Δa)

This parameter is analogous to Sm in that it measures the mean distance between features, but it is a mean that is weighted by the amplitude of the individual wavelengths, whereas Sm will find the predominant wavelength.

5. λq—RMS Average Wavelength

λq = 2π (Rq/Δq)

6. λpc—Peak Count Wavelength

λpc = 1/Pc

The above formula leaves λpc in the reciprocal of the units of Pc. Therefore, the value must ordinarily be converted from [in] to [μin] or from [cm] to [μm].
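The three wavelength parameters are simple ratios; a sketch with hypothetical values (Ra and Rq in μm, the slopes dimensionless, Pc in peaks/cm):

```python
import math

def wavelength_params(ra, delta_a, rq, delta_q, pc_per_cm):
    """lambda_a = 2*pi*Ra/Delta_a, lambda_q = 2*pi*Rq/Delta_q, lambda_pc = 1/Pc
    (the last converted from cm to micrometres, since 1 cm = 10 000 um)."""
    lam_a = 2 * math.pi * ra / delta_a
    lam_q = 2 * math.pi * rq / delta_q
    lam_pc_um = (1.0 / pc_per_cm) * 1e4
    return lam_a, lam_q, lam_pc_um

lam_a, lam_q, lam_pc = wavelength_params(ra=0.8, delta_a=0.05, rq=1.0,
                                         delta_q=0.0628, pc_per_cm=200)
print(round(lam_a, 1), round(lam_q, 1), lam_pc)  # 100.5 100.1 50.0
```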
K-Randomness Factor

10.6.3 Roughness Hybrid Parameters


1. Δa—Average Absolute Slope This parameter is the average of the absolute value of the slope of the roughness profile over the evaluation length:

Δa = (1/L) ∫₀ᴸ |dr(x)/dx| dx

It is not so straightforward to evaluate this parameter for digital data. Numerical differentiation is a difficult problem in any application. Some instrument manufacturers have applied advanced formulas to approximate dr/dx digitally, but the simplest approach is to apply a simple difference formula to points with a specified spacing L/N:

Δa = (1/L) ∑ |rn+1 − rn| (n = 1 to N)

If this approach is used, the value of L/N must be specified, since it greatly influences the result of the approximation. Ordinarily, L/N will be quite a bit larger than the raw data spacing from the instrument.
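The first-difference approach can be sketched as below (assuming equally spaced points a distance dx = L/N apart; with that spacing, (1/N)·∑|Δr|/dx reduces to the (1/L)·∑|Δr| form). The RMS slope Δq of the next item is computed the same way with squares:

```python
import math

def slope_params(r, dx):
    """Digital Delta_a (mean absolute slope) and Delta_q (RMS slope)
    from first differences at spacing dx."""
    diffs = [(r[i + 1] - r[i]) / dx for i in range(len(r) - 1)]
    delta_a = sum(abs(d) for d in diffs) / len(diffs)
    delta_q = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return delta_a, delta_q

# Hypothetical profile heights at unit spacing:
r = [0.0, 1.0, 0.0, 2.0, 0.0]
da, dq = slope_params(r, dx=1.0)
print(da, round(dq, 4))  # 1.5 1.5811
```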

2. Δq—RMS Average Slope

Δq = √[ (1/L) ∫₀ᴸ (dr(x)/dx)² dx ]

With the same difference approximation at spacing L/N,

Δq = √[ (1/N) ∑ ((rn+1 − rn)/(L/N))² ] (n = 1 to N)

3. Lo—Actual Profile Length One way to describe how a real profile differs from a flat line
is to determine how long the real profile is compared to the horizontal evaluation length. Imagine the
profile as a loose string that can be stretched out to its full length.

The 2-D length of a profile comes from the following equation:

Lo = ∫₀ᴸ √(1 + (dr(x)/dx)²) dx

The answer in a digital evaluation depends on the spacing of the points we choose to approximate dr/dx:

Lo = ∑ √((L/N)² + (rn+1 − rn)²) (n = 1 to N)

4. Lr—Profile Length Ratio The profile length ratio, Lr, is the profile length normalized by the
evaluation length:
Lo
Lr =
L
The profile length ratio is a more useful measure of surface shape than Lo since it does not depend
on the measurement length.
The larger the value of Lr, the sharper or crisper the surface profile appears, and the larger is the true surface area of the surface. In some applications, particularly in coating, where good adhesion is needed, it may be desirable to have a large value of Lr, i.e., a large contact surface area. For most surfaces, Lr is only slightly larger than one and is difficult to determine accurately.
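Both quantities follow from the Pythagorean sum in the digital formula; a sketch (assuming N + 1 equally spaced points over the evaluation length L):

```python
import math

def profile_length(r, L):
    """Lo (stretched-out length of the profile) and Lr = Lo / L."""
    n = len(r) - 1
    seg = L / n                      # horizontal spacing between points
    lo = sum(math.hypot(seg, r[i + 1] - r[i]) for i in range(n))
    return lo, lo / L

# Hypothetical profile forming two 3-4-5 triangles over L = 8:
lo, lr = profile_length([0.0, 3.0, 0.0], L=8.0)
print(lo, lr)  # 10.0 1.25
```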

10.6.4 Statistical Analysis


1. Amplitude Distribution Function The amplitude distribution function (ADF) is a prob-
ability function that gives the probability that a profile of a surface has a certain height, z , at any position x.
Ordinarily, the ADF is computed for the roughness profile, although the texture or even primary profiles might be used in specialized applications.
The ADF has a characteristic bell shape like many probability distributions (refer Fig. 10.25 (a)). The
ADF tells “how much” of the profile lies at a particular height, in a histogram sense. It is the probability

Fig. 10.25(a) Amplitude distribution function




Fig. 10.25(b) Bearing ratio curve (comments about shape, plateau, peaks, valleys)

that a point on the profile at a randomly selected x value lies at a height within a small neighborhood of
a particular value z:
Prob (z + dz > r (x ) > z ) = ADF (z ) dz

2. Bearing Ratio Curve The bearing ratio curve is related to the ADF. It is the corresponding cumulative probability distribution and has much greater use in the evaluation of surface finish. The bearing ratio curve is the integral (from the top down) of the ADF (refer Fig. 10.25(b)).
Other names for the bearing ratio curve are the bearing area curve (this is becoming obsolete with the increase in topographical methods), the material ratio curve, or the Abbott–Firestone curve. (See Figs. 10.27 and 10.28, Plate 10.)
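Both ideas can be sketched numerically from a sampled profile: the ADF is in effect a histogram of heights, and the bearing ratio at a level z is the fraction of the profile lying at or above z, i.e., its top-down cumulative. The profile values below are hypothetical:

```python
import numpy as np

def bearing_ratio(r, levels):
    """tp at each height level: fraction of sampled points at or above the level."""
    r = np.asarray(r, dtype=float)
    return [float(np.mean(r >= z)) for z in levels]

profile = [3, 1, 0, -1, -2, 2, 0, -3]
print(bearing_ratio(profile, levels=[2, 0, -2]))  # [0.25, 0.625, 0.875]
```

As expected for a cumulative curve, the ratio grows from 0 at the highest peak towards 1 at the deepest valley.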

Fig. 10.26 (a) Pocket Surf (b) Drive unit for shop-floor applications
(Courtesy, Mahr Gmbh Esslingen)


Fig. 10.29 (a) Dual-skid pick-up is suited for roughness measurements on plane surfaces and
a cylindrical surface in longitudinal direction, as well as inside bores with a diameter larger than
12 mm (b) Single-skid pick-up with lateral, spherical skid, 0.3-mm radius in tracing direction,
90° 5-µm stylus radius (200 µin), suitable to measure inner radii in circumferential direction
with a diameter larger than 12 mm (c) Drive units for mobile roughness measuring instruments
(Courtesy, Mahr Gmbh Esslingen)

10.7 POCKET SURF

The Pocket Surf (as shown in Fig. 10.26) is a pocket-sized, economically priced, completely portable instrument which performs traceable surface-roughness measurements on a wide variety of surfaces. It can be used confidently in production, on the shop floor and in the laboratory (US patent no. 4,776,212).

Features
• Solidly built, with a durable cast aluminum housing, to provide years of accurate, reliable surface
finish gauging
• Can be used to measure any one of four switch-selectable parameters: Ra, Rmax/Ry, Rz
• Selectable traverse length of 1, 3 or 5 cut-offs of 0.8 mm/0.030 in

Technical Data
• Operates in any position—horizontal, vertical, and upside down
• Four switchable probe positions—axial (folded) or at 90°, 180° or 270°
• Even difficult-to-reach surfaces such as inside and outside diameters accessible
• Integrated data output for SPC-processing units that is compatible with the most common data
processing systems
• Easy-to-read LCD readout presents measured roughness value in microinches or micrometres
within half a second after the surface is traversed
• Out-of-range (high or low) and ‘battery low’ signals are also displayed

10.8 SPECIFYING THE SURFACE FINISH

10.8.1 Drawing Specifications


As per IS: 3073 of 1967, the following main characteristics of surface texture are indicated on drawings, as shown in Fig. 10.30(a):

i. Roughness value (Ra),


ii. Sampling length or cut-off length (mm)
iii. Machining or production method
iv. Machining allowance (mm)
v. Direction of lay in the symbolic form such as = ( parallel ), ⊥( perpendicular), X(angular),
M(multidirectional), C(circular), R(radial )

Fig. 10.30(a) Indicating the main characteristics of surface texture on drawings as per IS: 3073 of 1967 (machining method, roughness value Ra in μm, sampling length, directional lay, and machining allowance placed around the surface texture symbol)

Fig. 10.30(b) Example of indicating the main characteristics of surface texture on drawings (cylindrical grinding, Ra 0.2, machining allowance 0.10)

For example, a cylindrically ground surface with 0.10 mm machining allowance having Ra value of
0.2 μm with cut-off length of 3 mm and direction of lay as perpendicular will be represented as in
Fig. 10.30(b).

10.8.2 Grade Specification


Considering the capabilities of estimating techniques for evaluating the surface texture, ISO has recom-
mended the use of grades for specifying the surface texture. The following table gives the information
about grades and respective symbols used for specifying grades for respective Ra values in microns.

Table 10.3 Grade specification

Roughness value Ra (μm):  0.025  0.05  0.1  0.2  0.4  0.8  1.6  3.2  6.3  12.5  25  50
Roughness grade:          N1  N2  N3  N4  N5  N6  N7  N8  N9  N10  N11  N12
Roughness symbol:         ∇∇∇∇  ∇∇∇  ∇∇  ∇  −

Note:
1. Preferred values for arithmetical mean deviation Ra in μm are selected from 0.025, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.3, 12.5, 25, 50
2. Preferred values for ten-point height of irregularities Rz in μm are selected from 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.3, 12.5, 25, 50

Illustrative Examples

Example 1 In the measurement of surface roughness, heights of 20 successive peaks and valleys were measured
from a datum as follows:
35 25 40 22 35 18 42 25 35 22
36 18 42 22 32 21 37 18 35 20 microns.
If these measurements were obtained over a length of 20 mm, calculate the CLA and RMS values
of the surface.
Solution:

(i) CLA value = (35 + 25 + 40 + 22 + 35 + 18 + 42 + 25 + 35 + 22 + 36 + 18 + 42 + 22 + 32 + 21 + 37 + 18 + 35 + 20)/20
= 580/20 = 29.0 microns

(ii) RMS value = √[(35² + 25² + 40² + 22² + 35² + 18² + 42² + 25² + 35² + 22² + 36² + 18² + 42² + 22² + 32² + 21² + 37² + 18² + 35² + 20²)/20]
= √(18232/20) = √911.6 ≈ 30.19 microns

Review Questions
1. Explain reasons for controlling surface texture.
2. It is not possible to produce a perfectly flat surface. Justify the statement.
3. Explain surface texture w.r.t. its roughness, waviness, lay and sampling length.
4. Explain the terms:
a. Primary texture
b. Secondary texture
c. Ra Value
d. CLA Value of surface roughness
e. Skid and stylus
f. Mean line of profile
g. Micro and macro irregularities

5. Explain the procedure to use roughness comparison specimen to assess surface roughness along
with their limitation of applications.
6. Explain the method of finding the CLA index using a magnified graphical record of surface texture.
7. With the help of a neat sketch, describe the construction and working of the following instruments:
a. Profilometer
b. Tomlinson surface meter
c. Taylor-Hobson surface roughness instrument
8. Define roundness and state the causes of out-of-roundness.
9. Explain the detail method of checking roundness by using a roundness measuring machine.
10. ‘The deviation from roundness occurs in the form of waves about the circumference of the part’.
Justify the statement.
11. How can you specify the surface finish on a drawing?
12. Explain: (a) Roughness spacing parameters (b) Root-Mean-Square roughness
13. What is the significance of wavelength of surface variations in measurement of surface texture?
14. Explain, in principle, the function and operation of stylus-type surface-texture measuring instruments. Also explain their advantages.
15. How is surface texture related to tolerances on the surface dimensions?
16. Discuss the consequences of not specifying the sampling length in surface-roughness measurement.
17. Specify the causes of surface irregularities found in surface texture.
18. Define the term ‘ten-point height irregularities’ and use a profile to illustrate the answer.
19. Discuss any two methods of surface-finish evaluation and state their merits and demerits.
20. Explain symbolic representation with examples of indicating the main characteristics of surface
texture on drawings.
21. Write a short note on grades for specifying the surface texture.
22. In the measurement of surface roughness, heights of 10 successive peaks and valleys were mea-
sured from a datum as follows:
Peaks: 45, 42, 40, 35, 35 µm.
Valleys: 30, 25, 25, 24, 18 µm.
Determine the Rz value of the surface.
11 Metrology of Screw Threads

‘Metrology of screw threads ensures perfect assembly…’


INTRODUCTION TO SCREW THREADS
Screw threads are the most important machine elements and are used in screws, bolts, nuts, studs, tapped holes and other power-transmitting devices. These are convenient for joining and sealing purposes and can be used as coarse types for bracket fitments as well as very fine types for micrometer heads.
A screw thread is the ridge of uniform cross section in the form of a helix on the external or internal surface of a cylinder, or in the form of a conical spiral on the external or internal surface of the frustum of a cone, called straight or tapered threads respectively. Metrology of screw threads deals with screw gauges for fine measurement.

11.1 UNDERSTANDING QUALITY SPECIFICATIONS OF SCREW THREADS

An essential principle of the actual profiles of both the nut and bolt threads is that they must never
cross or transgress the theoretical profile. So bolt threads will always be equal to, or smaller than, the
dimensions of the basic profile. Nut threads will always be equal to, or greater than, the basic profile.
To ensure this in practice, tolerances and allowances are applied to the basic profile.
Practically, to make a thread, tolerances must be applied to ensure that this essential principle always applies. Tolerancing of screw threads is complicated by the complex geometric nature of the screw-thread form. Clearances must be applied to the basic profile of the threads in order that a bolt thread
can be screwed into a nut thread. For the thread to be made practically, there must be tolerances applied
to the main thread elements.
Usually, nut threads have a tolerance applied to the basic profile so that it is theoretically possible
for the nut thread profile to be equal to the theoretical profile. Bolt threads usually have a gap between
the basic and actual thread profiles. This gap is called the allowance with inch-based threads and the
fundamental deviation with metric threads. The tolerance is subsequently applied to the thread. Since
for coated threads, the tolerances apply to threads before coating (unless otherwise stated), the gap is

taken up by the coating thickness. After coating, the actual thread profile must not transgress the basic
profile of the thread.
A full designation for a metric thread includes information not only on the thread diameter and
pitch but also a designation for the thread tolerance class. For example, a thread designated as
M12 × 1 - 5g6g indicates that the thread has a nominal diameter of 12 mm and a pitch of 1 mm.
The 5g indicates the tolerance class for the pitch diameter, and 6g is the tolerance class for the major
diameter.
A fit between the threaded parts is indicated by the nut-thread tolerance designation followed by
the bolt-thread tolerance designation, separated by a slash. For example, M12 × 1 - 6H/5g6g indicates
a tolerance class of 6H for the nut (female) thread, a 5g-tolerance class for the pitch diameter with a
6g-tolerance class for the major diameter.
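Such designations are regular enough to split mechanically. A sketch (illustrative only: it handles the 'M<dia>x<pitch>-<nut class>/<bolt class>' pattern shown here, not every variant that appears in practice):

```python
import re

def parse_metric_thread(desig):
    """Split e.g. 'M12x1-6H/5g6g' into diameter, pitch and tolerance classes
    (nut class before the slash, bolt class after; nut class may be absent)."""
    s = desig.replace(" ", "").replace("\u00d7", "x")
    m = re.fullmatch(r"M(\d+(?:\.\d+)?)x(\d+(?:\.\d+)?)-(?:([0-9A-Z]+)/)?([0-9a-z]+)", s)
    if not m:
        raise ValueError("unrecognised designation: " + desig)
    dia, pitch, nut, bolt = m.groups()
    return {"major_dia_mm": float(dia), "pitch_mm": float(pitch),
            "nut_class": nut, "bolt_class": bolt}

print(parse_metric_thread("M12 x 1 - 6H/5g6g"))
# {'major_dia_mm': 12.0, 'pitch_mm': 1.0, 'nut_class': '6H', 'bolt_class': '5g6g'}
```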
A tolerance class is made up of two parts, a tolerance grade and a tolerance position.

11.1.1 Tolerance Positions and Grades of Threads


A number of tolerance grades have been established for the pitch and crest diameters (the crest diameter is the minor diameter in the case of a nut thread and the major diameter in the case of a bolt thread). Tolerance grades are represented by numbers; the lower the number, the smaller the tolerance. Grade 6 is used for a medium tolerance quality and a normal length of thread engagement. Grades lower than 6 are intended for fine tolerance quality and/or short lengths of thread engagement. Grades higher than 6 are intended for coarse tolerance quality and/or long lengths of thread engagement. The available grades are as follows:

Fig. 11.1 Tolerance positions and grades of screw threads (tolerance grades applied about the basic size, with upper and lower deviations, for nut threads (G) and bolt threads (g))
i. 5 tolerance grades (grades 4 to 8) available for the minor diameter of the nut thread
ii. 3 tolerance grades (grades 4, 6 and 8) for the major diameter of the bolt thread
iii. 5 tolerance grades (grades 4 to 8) for the pitch diameter tolerance of the nut thread
iv. 7 tolerance grades (grades 3 to 9) for the pitch diameter tolerance of the bolt thread

11.1.2 Tolerance Position and Grading for ISO Threads


Uppercase letters for nut threads and lowercase letters for bolt threads indicate tolerance positions.
The tolerance position is the distance of the tolerance from the basic size of the thread profile.
For nut threads there are two tolerance positions—H with a zero fundamental deviation (distance of
the tolerance position from the basic size) and G with a positive fundamental deviation.

Fig. 11.2 Tolerance position and grading for ISO threads

Fig. 11.3 Basic profile of unified ISO thread form (D = major diameter of internal thread; d = major diameter of external thread; D2 = pitch diameter of internal thread; d2 = pitch diameter of external thread; D1 = minor diameter of internal thread; d1 = minor diameter of external thread; P = pitch; H = height of fundamental triangle)

For bolt threads there are four tolerance positions—h has a zero fundamental deviation and e, f, and g have negative fundamental deviations. (A negative fundamental deviation indicates that the size of the thread element will be smaller than the basic size.)

11.2 SCREW THREAD TERMINOLOGY

1. Pitch Diameter (often called the effective diameter) of a parallel thread is the diameter of the imaginary co-axial cylinder which intersects the surface of the thread in such a manner that the intercept on a generator of the cylinder, between the points where it meets the opposite flanks of a thread groove, is equal to half the nominal pitch of the thread.

Fig. 11.4(a) Screw thread terminology



Fig. 11.4(b) Screw thread terminology (conventional)

2. Major Diameter (B) of a thread is the diameter of the imaginary co-axial cylinder that just
touches the crest of an external thread or the root of an internal thread.

3. Minor Diameter is the diameter of the cylinder that just touches the root of an internal
thread.

4. Crest (D) of a thread is the prominent part of a thread, whether internal or external.

5. Root (E ) is the bottom of the groove between the two flanking surfaces of the thread, whether
internal or external.

6. Flanks of a thread are the straight sides that connect the crest and the root.

7. Axis (C) is the centreline running lengthwise through a screw.

8. Angle of a Thread is the angle between the flanks, measured in an axial plane section.

9. Pitch of a Thread (F ) is the distance measured parallel to its axis between corresponding
points on adjacent surfaces in the same axial plane. There are three types of pitch errors:

a. Progressive Error of Pitch is a gradual, but not necessarily uniform, deviation of the pitch
of successive threads from the nominal pitch.

b. Periodic Error of Pitch is a cyclical pattern of departures from the nominal pitch, which is repeated regularly along the screw.

c. Drunkenness is a periodic variation of pitch where the cycle is of one pitch length.

Fig. 11.5 Illustration of pitch of a thread

Effect of Pitch Errors Errors in pitch—namely, incorrect relative position of the flanks—act
obstructively, due to which a perfect external screw (which has pitch error) will not screw into a perfect
internal screw of the same nominal size. Pitch errors virtually increase the pitch diameter of an external
screw and virtually reduce the pitch diameter of an internal screw.

10. External Thread (A) is a thread on the outside of a member, e.g., the thread of a bolt.

11. Internal Thread is a thread on the inside of a member, e.g., the thread inside a nut.

12. Addendum of an external thread is the radial distance between the pitch and major cylinders
or cones, respectively.

13. Dedendum of an external thread is the radial distance between the pitch and minor cylinders
or cones, respectively.

14. Lead is the axial distance moved by a point following its helical turn around the thread axis; for a thread with n starts (i.e., where there are n helices started at regular intervals round the same cylinder), lead = n × pitch.

15. Rake Angle (λ) is the acute angle formed by a thread helix on the pitch cylinder and a plane perpendicular to the cylinder axis.
tan λ = np/(πE) (multi-start thread with n starts)
tan λ = p/(πE) (single-start thread), where E is the pitch diameter
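The lead-angle relation tan λ = np/(πE) can be evaluated directly; a sketch with hypothetical dimensions (n = number of starts, p = pitch, E = pitch diameter, all lengths in the same unit):

```python
import math

def lead_angle_deg(pitch, pitch_dia, starts=1):
    """Lead (helix) angle at the pitch cylinder: tan(lambda) = n*p / (pi*E)."""
    return math.degrees(math.atan(starts * pitch / (math.pi * pitch_dia)))

# Hypothetical single-start thread: 2 mm pitch on an 18 mm pitch diameter.
print(round(lead_angle_deg(2.0, 18.0), 2))  # 2.03
```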

16. Virtual Effective Diameter of a parallel thread is the simple pitch diameter of an imagi-
nary thread of perfect pitch and flank angles, cleared at the crests and roots but having the full depth
of straight flanks, which would just assemble with the actual thread over the prescribed length of
engagement.

17. Pitch Cylinder has a diameter and location of its axis such that its surface would pass through a straight thread in a manner so that the widths of the thread ridge and the thread groove are equal and are located equidistantly between the sharp major and minor cylinders of a given thread form.

11.3 TYPES OF THREADS

Most of the threads have triangle-shaped threads. On the other hand, square-shaped and trapezoid-
shaped threads are used for moving machinery which need high accuracy, such as a lathe.
In respect to thread standards, there is a metric thread (M), a parallel thread for piping (PF ), a
taper thread for piping (PT), and a unified thread (UNC, UNF). In this chapter, metrology of threads
is related to metric threads because they are the most widely used in many countries around the
world.
The most common screw thread form is the one with a symmetrical V-profile. The included angle
is 60 degrees. This form is prevalent in the Unified Screw Thread (UN, UNC, UNF, UNRC, UNRF)
form as well as the ISO/Metric thread. The advantage of symmetrical threads is that they are easier
to manufacture and inspect compared to non-symmetrical threads. These are typically used in general-
purpose fasteners.
Other symmetrical threads are the Whitworth and the Acme. The Acme thread form has a stronger thread, which allows for use in translational applications such as those involving moving heavy machine loads as found on machine tools. Previously, square threads with parallel sides were used for the same applications. The square thread form, while strong, is harder to manufacture. It also cannot be compensated for wear, unlike an Acme thread.

11.3.1 British Association Thread


This thread was used for small-diameter threads (less than 0.25 inch). The thread has reduced roots and
crests and has a flank angle of 47 and a half degrees. The thread size varies from BA number 23 (0.33-mm
diameter with a pitch of 0.09 mm) to BA number 0 (6-mm diameter with a pitch of 1 mm). Relative to
the Whitworth thread, the depth of the BA thread is smaller. This thread form is now redundant and
has been replaced by Unified and Metric threads. The form of the thread is shown in Fig. 11.6.
If, p = pitch of the thread
d = depth of the thread
r = radius at the top and bottom of the threads
then d = 0.6 p, r = 2 p/11

11.3.2 Whitworth Threads


Sir Joseph Whitworth proposed this thread in 1841. This was the first standardized thread form. The
form of the thread is shown in Fig. 11.7. The principal features of the British Standard Whitworth
(BSW) thread form are that the angle between the thread flanks is 55 degrees, and the thread has radii
at both the roots and the crests of the thread. The relevant standard for this thread form is BS 84:
1956. This thread form is now redundant and has been replaced by Unified and Metric threads. The

Fig. 11.6 British association thread

British Standard Fine (BSF) thread has the same profile as the BSW thread form but was used when a finer pitch was required for a given diameter.
If, p = pitch of the thread
d = depth of the thread
r = radius at the top and bottom of the threads
then d = 0.640327 p, r = 0.137329 p

Fig. 11.7 Whitworth threads

11.3.3 Metric Threads


In November 1948, the Unified thread was agreed upon by the UK, the US and Canada to be used as
the single standard for all countries using inch units. In 1965, the British Standards Institution issued a
policy statement requesting that organizations should regard the BSW, BSF and BA threads as obsolete.
The first choice of replacement for future designs was to be the ISO metric thread with the ISO inch
(Unified) thread being the second choice.
Metric threads are designated by the letter M followed by the nominal major diameter of the thread
and the pitch in millimetres. For example, M10 × 1.0 indicates that the major diameter of the thread
is 10 mm and the pitch is 1.0 mm. The absence of a pitch value indicates that a coarse thread is specified.
For example, stating that a thread is M10 indicates that a coarse thread series is specified of 10-mm
diameter (giving the thread a pitch of 1.5 mm).
The thread forms for Unified and Metric threads are identical.
If, p = pitch of the thread
d = depth of the thread
r = radius at the top and bottom of the threads
then d = 0.54127 p

Fig. 11.8 Metric threads



11.4 MEASUREMENT OF SCREW THREADS


1. Geometrical Parameters
a. Major Diameter ---- Bench Micrometer
b. Minor Diameter ---- Bench Micrometer
c. Thread Angle and Profile ---- Optical Profile Projector, Pin Measurement

2. Functional Parameters
a. Effective Diameter ---- Screw-Thread Micrometer, Two- or Three-Wire Methods, Floating Carriage Micrometer
b. Pitch ---- Screw Pitch Gauge, Pitch-Error Testing Machine
Measurement of screw threads can be done by inspection and checking of various components
of threads. The nut and other elements during mass production are checked by plug gauges or
ring gauges.

11.4.1 Measurement of Major Diameter


A bench micrometer serves for measuring the major diameter of parallel plug screw gauges. It consists of a cast-iron frame on which are mounted a micrometer head with an enlarged thimble opposite a fiducial indicator; the assembly makes a calliper by which measurements are reproducible within ±0.001 mm (±0.00005 in). The micrometer is used as a comparator. Thus, the bench micrometer reading RB is taken on a standard cylindrical plug of known diameter B of about the same size as the major diameter to be measured. A reading RG is then taken across the crests of the gauge. Its major diameter D is given by D = B + (RG − RB).

Fig. 11.9 Bench micrometer

Readings should be taken along and round the gauge to explore the variations in major diameter. Finally, the reading RB on the standard should be
Finally, the reading R B on the standard should be
checked to confirm that the original setting has not changed. It is recommended that the measurement
should be repeated at three positions along the thread to determine the amount of taper which may be
present.
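The comparator arithmetic is a one-liner; a sketch assuming the micrometer reading increases with the size being measured (the readings here are hypothetical, in mm):

```python
def major_diameter(std_dia, reading_std, reading_gauge):
    """Bench-micrometer comparison: D = B + (R_G - R_B), with B the diameter
    of the calibrated setting standard."""
    return std_dia + (reading_gauge - reading_std)

print(round(major_diameter(std_dia=20.000, reading_std=2.513, reading_gauge=2.528), 3))
# 20.015
```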

11.4.2 Measurement of Minor Diameter


For checking the minor diameter, the anvil end and the spindle end cannot directly reach the roots on opposite sides of the thread. Therefore, wedge-shaped pieces are held between the anvil face and the thread root, and between the spindle face and the thread root. One reading is taken over a dummy minor diameter

Fig. 11.10 Schematic diagram of bench micrometer

Fig. 11.11 Vee wedge pieces contacting minor diameter

cylindrical piece along with the wedge pieces. The procedure is also repeated along with the threaded
component.

11.4.3 Floating Carriage Micrometer


This can also be used for measuring the minor diameter. It is a high-precision instrument with a least count of 0.2 micron and is used for checking the thread elements of screw plug gauges, which are used for high-precision measurements.
A floating carriage micrometer consists of a sturdy cast-iron base and two accurately mounted and aligned centres. A slide, mounted on hardened steel balls, moves freely at right angles to the axis of the centres, carrying a micrometer and a highly sensitive fiducial indicator. The carriage permits measurements along the centreline and at right angles to the work. This carriage is mounted on another carriage, which is finally mounted on a fixed base. The second carriage helps in positioning the micrometer and fiducial indicator along the length of the workpiece. The indicator also moves along with the bench micrometer.

Fig. 11.12(a) Floating carriage micrometer

The setting cylinder is kept between the fiducial indicator and the micrometer anvil and the reading is recorded as R1. Without moving the fiducial indicator, the cylinder is replaced by the screw in such a way that the anvils touch the roots of the thread, giving the minor diameter, and the corresponding reading is noted as R2.
Then, minor diameter = (R2 − R1) + master cylinder diameter.
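As a numerical sketch, using the sign convention of the worked examples later in this chapter (reading over the gauge minus reading over the master, added to the master size); the readings below are hypothetical.

```python
def minor_diameter(r1, r2, master_dia):
    # r1 = reading over the setting cylinder (master)
    # r2 = reading over the thread roots with the vee pieces in place
    # Convention: (reading over gauge - reading over master) + master size
    return (r2 - r1) + master_dia

# Hypothetical readings taken against an 18.000-mm setting cylinder:
print(round(minor_diameter(16.250, 15.830, 18.000), 3))  # 17.58
```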

Pitch Diameter Measurement by Floating Carriage Micrometer The gauge is


mounted between a pair of centres carried on a base (A) which has two vee-grooves machined in its upper
surface parallel to the line of the centres. These vee-grooves form runways for a saddle (B), having on one
side two projecting conical pegs (C ), which rest in one of the grooves. The other end of the saddle rests
on a steel ball placed in the other vee-groove. A micrometer carriage (D), with a vee-groove and a flat on
its underside, can move freely on steel balls resting in two vee-grooves cut in the upper surface of the
saddle in a direction at right angles to the line of centres. This micrometer carriage is provided on one side
with a micrometer head graduated to either 0.002 mm or 0.0001 in and capable of being read to 0.0005
mm or 0.00002 in, and on the other side with an adjustable anvil associated with a sensitive indicator. The

Fig. 11.12(b) Floating carriage micrometer

common axis of the micrometer and anvil is at the same height as the line of centres. The shank of one
of the conical pegs (C ) is made eccentric; so that by turning it in its hole, it is possible to adjust the axis of
the micrometer to be truly square with the line of centres. After making this setting, the position of the
peg can be maintained by a clamping screw. Taken as a whole, the machine is a development of the bench
micrometer already described, having a free motion at right angles to the line of centres, and capable also
of being traversed along the bed of the machine so as to measure at any desired position along a screw
gauge mounted between the centres. The cylinders used with the machine during the measurements of the
pitch diameter are suspended by threads from light rods (E ) fixed to the micrometer carriage. In order to
eliminate entirely the personal element as regards the ‘feel’ of the micrometer, and also to obtain a control
of the measuring force, the adjustable anvil is fitted with a fiducial indicator (F ) which operates under a
force of about 250 grams wt (8 oz wt), or less if desired. The machine described, which was designed at
NPL, is obtainable commercially in two or three sizes to accommodate gauges up to 250 mm (10 in) or
so in diameter. Reference should now be made to Fig. 11.13 (a,b) (Plate 11), where the central diagram
shows cylinders seated in the groove of the thread and the dimension T beneath them. The objective of
the measurement in the floating micrometer machine is to determine T. The pitch diameter E is then
obtained from the measured value of T by the formula

E=T+P−c+e
where P is a constant depending on the pitch and angle of the screw thread, and the mean diameter of
the small cylinders used; c is a correction depending mainly upon the rake angle of the screw thread; e is
a correction for the elastic compression of the cylinders. In practice, the thread measuring cylinders are
supplied with their measured diameter and with the P values appropriate to each combination of pitch
and the common thread form to which the cylinders may be applied. The values of c and e are in general
relatively small for standard screw threads and the low measuring force used; however, they must be
taken into account as they are significant compared with tolerances for screw gauges.

Measurement of Effective Diameter by using Two-Wire Method Figure 11.14


shows a floating carriage micrometer used for measurement of a simple effective diameter by using
two-wire method.
The two wires used in the measurement should be identical in diameter and should make contact with the flanks of the thread on both sides. The simple effective diameter is given by E = T + P

Fig. 11.14 Two-wire method

where T is the dimension under the wires and

T = Dm − 2d

d = diameter of wire, Dm = diameter over the wires

For measuring the dimension T, the wires are placed over a standard cylinder of known diameter D (greater than the dimension under the wires), and the corresponding reading is noted as r1; the reading over the gauge with the wires is noted as r2.

Then, T = D − (r1 − r2)

Here P (the pitch value) is the constant which must be added to the dimension under the wires to obtain the effective diameter; it depends upon the pitch and angle of the thread and the wire diameter.
Now refer to Fig. 11.14. BC lies on the effective diameter line, and

BC = ½ pitch = p/2

OP = d cosec(θ/2) / 2

PA = d (cosec(θ/2) − 1) / 2

PQ = QC cot(θ/2) = (p/4) cot(θ/2)

AQ = PQ − AP = (p/4) cot(θ/2) − d (cosec(θ/2) − 1) / 2
AQ has a value half of P. Therefore,

P = 2 AQ = (p/2) cot(θ/2) − d (cosec(θ/2) − 1)

For metric threads, θ = 60°, and

P = 0.866p − d
For measuring T by using a floating carriage micrometer, place the master cylinder between the centres, mount the wires, and take the reading R. Now replace the master cylinder with the threaded screw and take the reading S. Then,
T = (S − R) + diameter of master cylinder.
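The full two-wire computation of E can be sketched in Python; the gauge, cylinder, and reading values below are hypothetical, chosen only to illustrate the arithmetic for a 60° metric thread.

```python
import math

def effective_diameter_two_wire(p, d, r_master, r_gauge, master_dia):
    # T = dimension under the wires, from the reading difference
    T = (r_gauge - r_master) + master_dia
    # P = pitch value for a 60-degree metric thread: P = 0.866*p - d
    P = 0.866 * p - d
    return T + P

# Hypothetical M16 x 2 gauge measured with best-size wires:
d_best = (2 / 2) / math.cos(math.radians(30))   # (p/2) * sec(theta/2) = 1.1547
E = effective_diameter_two_wire(2, d_best, 12.500, 12.120, 16.000)
print(round(E, 4))  # 16.1973
```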

11.4.4 Expression for Best Size Wire


This wire is of such a diameter that it makes contact with the flanks of the thread on the effective diameter or pitch line, i.e., the contact points of the wire must lie on the pitch line or effective diameter. Refer to Fig. 11.16: OP is perpendicular to the flank of the thread. Let half the included angle of the thread be θ.
Therefore, in Δ OAP
sin POA = AP/OP

Fig. 11.15 Stylus point on or near effective diameter

Table 11.1 P values of different thread forms

Thread Form P Value


ISO Metric 0.866025 p – W
Unified 0.866025 p – W
Whitworth 0.960491 p − 1.165681 W
BA 1.136364 p − 1.482950 W

Fig. 11.16 Best size wire

or sin (90° − θ) = AP/OP, i.e., cos θ = AP/OP

so that OP = AP / cos θ = AP sec θ

Since OP = r, the wire radius, the wire diameter is

db = 2r = 2 AP sec θ

As AP lies on the pitch line,

AP = p/4

where p = pitch of the thread. Hence,

db = 2 (p/4) sec θ = (p/2) sec θ

For V threads the included angle is 60°, so θ = 30° and

∴ db = 0.5774 p
The diameters of the best-size cylinders for various thread forms are given below.

Table 11.2 Best size cylinder diameter specifications of different forms of threads

Form of Thread Diameter of Best Size Cylinder


ISO Metric 0.57735 p
Whitworth 0.56369 p
BA 0.5462 p
Acme 0.51645 p
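The entries of Table 11.2 can be regenerated from db = (p/2) sec(θ/2), where θ is the included thread angle of each form. A short sketch:

```python
import math

def best_size_wire(pitch, included_angle_deg):
    # d_b = (p/2) * sec(theta/2), theta = included thread angle
    return (pitch / 2) / math.cos(math.radians(included_angle_deg / 2))

# Reproduce the Table 11.2 coefficients (pitch = 1 mm).
# Included angles: ISO metric 60, Whitworth 55, BA 47.5, Acme 29 degrees.
for form, angle in [("ISO Metric", 60.0), ("Whitworth", 55.0),
                    ("BA", 47.5), ("Acme", 29.0)]:
    print(form, round(best_size_wire(1.0, angle), 5))
```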

11.4.5 Measurement of Effective Diameter by Three-Wire Method


The two-wire method is somewhat difficult to apply manually: the centreline of the micrometer must be kept perpendicular to the axis of the component and must remain so until the measurement is finished.

Table 11.3 “Best size” diameters of cylinder for ISO metric threads

Pitch of ISO Metric Thread (mm)    Best Size Diameter of Thread Measuring Cylinder (mm)
0.2     0.1155
0.25    0.1443
0.3     0.1732
0.35    0.2021
0.4     0.2309
0.45    0.2598
0.5     0.2887
0.6     0.3464
0.7     0.4041
0.75    0.4330
0.8     0.4619
1       0.5774
1.25    0.7217
1.5     0.8660
1.75    1.0104
2       1.1547
2.5     1.4434
3       1.7320
3.5     2.0207
4       2.3094
4.5     2.5981
5       2.8868
5.5     3.1754
6       3.4641
8       4.6188

But due to the inclination of the positioning of the two wires, the instrument may get slightly tilted,
which gives incorrect results. The three-wire method overcomes this problem. In this case, the instru-
ment maintains its alignment itself and gives a true reading.
Refer to Fig. 11.17 (b).

AD = AB cosec(x/2) = r cosec(x/2)

H = DE cot(x/2) = (p/2) cot(x/2)

CD = H/2 = (p/4) cot(x/2)

h = AD − CD = r cosec(x/2) − (p/4) cot(x/2)

Distance over wires, M = E + 2h + 2r

= E + 2 [r cosec(x/2) − (p/4) cot(x/2)] + 2r

= E + d (1 + cosec(x/2)) − (p/2) cot(x/2)

where E = effective diameter, M = distance over the wires, d = diameter of the wires, r = radius of the wires, x = angle of thread, and h = height of the centre of the wire above the effective diameter line.

Fig. 11.17 (a), (b) Three-wire method of measuring effective diameter

(i) In the case of Whitworth threads: x = 55°, depth of thread = 0.64p, so that E = D − 0.64p,
and cosec(x/2) = 2.1657, cot(x/2) = 1.921

M = E + d (1 + cosec(x/2)) − (p/2) cot(x/2)
= D + 3.1657d − 1.6005p

where D = outside diameter.

(ii) In the case of metric threads: depth of thread = 0.6495p, so E = D − 0.6495p; x = 60°, cosec(x/2) = 2, cot(x/2) = 1.732

M = D − 0.6495p + d (1+2) − p/2 (1.732)


= D + 3d − 1.5155p
The value of M can be measured practically and compared with the theoretical value. Upon knowing M, the correct value of E can be found.
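The metric result M = D + 3d − 1.5155p can be checked numerically; the sketch below applies the general three-wire expression to the M20 × 2.5 gauge of Example 1, using the best-size wire computed there.

```python
import math

def distance_over_wires_metric(D, p, d):
    # ISO metric thread: x = 60 deg, depth = 0.6495*p, so E = D - 0.6495*p
    x_half = math.radians(30)
    E = D - 0.6495 * p
    # M = E + d*(1 + cosec(x/2)) - (p/2)*cot(x/2)
    return E + d * (1 + 1 / math.sin(x_half)) - (p / 2) / math.tan(x_half)

# M20 x 2.5 with the best-size wire d = 1.4434 mm (from Example 1):
M = distance_over_wires_metric(20.0, 2.5, 1.4434)
print(round(M, 3))  # 20.541
```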

11.5 MEASUREMENT OF THREAD FORM ANGLE

Measurement of the flank angle is the most important form measurement. The flank angle is the angle between the straight portion of a thread flank and a line normal to the thread axis. The thread image is projected optically and then measured by a protractor arrangement.
The shadow projector shown in Fig. 11.18 is an arrangement for measuring flank angle. This devel-
opment can be used to advantage on plug screw gauges mounted in a projector on which the opposite
ends of a diameter can be viewed in turn by an accurately straight transverse movement of the plug
across the field of the lens.
Thus, the first measurement of each flank angle is taken from the combined readings of the circular
scale and of the tangent screws. The screw plug is then moved across the field of the lens by means of
the transverse adjustment, until the thread form of the other side appears on the white background of
the protractor, the rake of the beam of light being reset to allow for the reversed direction of the helix.
Without moving the alignment of the protractor on the table of the machine, the pivoted arm (B) is
swung across to measure the angle of the same flank as before. The mean of the two readings gives the
inclination of the flank with respect to the normal to the axis of the screw. The method is described as
the throwover method. The accuracy attainable in flank-angle measurement depends on the alignment of

Fig. 11.18 Shadow protractor for measuring flank angles on horizontal screens (1 division = 1 minute)

the axis of the screw with the protractor datum, the length of the flank available for setting the protractor
arm, and the fineness of reading offered by the design of the protractor. It is also affected by the crisp-
ness of the image. The throwover method eliminates small errors of alignment of the protractor with a
plug screw axis, but is available only for gauges with a diameter within the traverse of the projector. It should be remembered that equal and correct flank angles of the cutting tool, and the correct relation of the tool to the axis of the cylindrical blank, must be sought in manufacture. Actual measurement is done by a protractor mounted on the screen. The setting lines are adjusted exactly to the image of the profile between the flanks of a single thread, and the difference between the two readings gives the measure of the angle. Alternatively, this measurement can also be done by using a toolmaker’s microscope.

Effect of Flank Error Figure 11.19 illustrates the effect caused by errors in flank angles. The corresponding flanks of the two screws are not parallel and cannot make full flank-to-flank contact. Instead, the flanks of the lower outline make contact only at their extremities: although the plug has a smaller simple pitch diameter than the ring, it behaves as if that diameter were increased. The simple pitch diameter of a plug screw is virtually increased by the equivalent of the sum, irrespective of sign, of the errors of the flank angles.
∂a1 and ∂a2 represent the errors in the two flank angles of a screw thread. The virtual increase or
decrease of pitch diameter of an external thread or of an internal thread is given by the following
approximate expressions.

1. ISO metric—0.019 × p × (∂a1 + ∂a2 )


2. Whitworth—0.0105 × p × (∂a1+ ∂a2)
3. British Association—0.0091 × p × (∂a1+ ∂a2)
4. ACME—0.0180 × p × (∂a1+ ∂a2)

where ∂a1 and ∂a2 are the errors expressed in degrees; the corresponding virtual change has the same unit as that used for p.
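The approximate expressions above are easy to tabulate; the sketch below applies them, with a hypothetical flank-angle error of 0.5° on each flank of a 2.5-mm pitch metric thread.

```python
# Coefficients of the approximate expressions above, per thread form.
COEFF = {"ISO metric": 0.019, "Whitworth": 0.0105,
         "BA": 0.0091, "Acme": 0.0180}

def virtual_pd_change(form, pitch, da1_deg, da2_deg):
    # Flank-angle errors (degrees) are summed irrespective of sign.
    return COEFF[form] * pitch * (abs(da1_deg) + abs(da2_deg))

# Hypothetical: 2.5-mm pitch ISO metric thread, 0.5 deg error on each flank.
print(round(virtual_pd_change("ISO metric", 2.5, 0.5, -0.5), 4))  # 0.0475
```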

Fig. 11.19 Difference in pitch diameter for errors of flank angles



11.6 MEASUREMENT OF INTERNAL THREADS

Ring gauges have internal threads and are taken as a standard sample for measuring the parameters of
internal threads.

1. Measurement of Major Diameter For measuring the major diameter, a special comparator having specially designed anvils is used. The anvils are placed in the thread grooves on diametrically opposite sides, displaced along the helix, and the dimension x is measured as shown in Fig. 11.20.

Major diameter = √(x² − p²/4), where p = pitch.

Fig. 11.20 Measurement of major diameter
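The comparator relation can be sketched numerically: with the anvil contact points half a pitch apart axially, D = √(x² − (p/2)²). The measured value of x below is hypothetical.

```python
import math

def ring_major_diameter(x, p):
    # Anvil contact points sit in opposite grooves half a pitch apart
    # along the axis, so D = sqrt(x^2 - (p/2)^2).
    return math.sqrt(x ** 2 - (p / 2) ** 2)

# Hypothetical: x = 20.010 mm measured on a 2.5-mm pitch ring gauge.
print(round(ring_major_diameter(20.010, 2.5), 4))  # 19.9709
```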
2. Measurement of Minor Diameter The Screw Gauge Booklet published by NPL describes the process of measurement of the minor diameter as follows:

The minor diameter can be sized by fitting into the ring a mandrel having a diametral taper of about 1 in 5000, i.e., 0.0002 in per in. The minor diameter is then taken as the diameter of the mandrel where
it fits the screw thread. Unfortunately, the tapered mandrel gives the minimum size of the minor diam-
eter and does not check ovality. Alternatively, the minor diameter may be sized by a range of cylindrical
plugs, differing in size by known small increments. The minor diameters of screw threads above 20-mm
(0.75 in) diameter may be obtained from the measurement by gauge blocks of the distance between two
precise cylindrical rollers of known size placed diametrically opposite in the screw ring gauge. The minor
diameter is then calculated by adding the diameters of
the rollers to the size of the gauge block combination
which just fits between the rollers. By using precision
rollers, the minor diameters of ring gauges of nominal
diameters up to 100 mm (4 in) may be estimated to an
accuracy of ±0.001 mm (±0.00005 in). This method has the advantage that the ovality of the minor cylinder may be determined by taking measurements around the circumference of the screw.

Fig. 11.21 Measurement of minor diameter
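The roller-and-gauge-block arithmetic is simply additive; a sketch with hypothetical sizes:

```python
def ring_minor_diameter(block_size, d_roller_1, d_roller_2):
    # Gauge-block combination that just fits between the two rollers,
    # plus the diameters of the rollers themselves.
    return block_size + d_roller_1 + d_roller_2

# Hypothetical: a 15.000-mm block stack and two 5.0005-mm rollers.
print(round(ring_minor_diameter(15.000, 5.0005, 5.0005), 4))  # 25.001
```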

For screw rings of small diameter, a pair of accurately made sliding wedges, known as taper parallels, may be used, the minor diameter being obtained by micrometer measurement over the projecting portions of the wedges.

3. Pitch Diameter Measurement The measurement of pitch diameter of individual ring


screw gauges is undertaken only when necessary. Instead, the use of check plugs is quicker and
preferred. The direct mechanical measurement of the pitch diameter of parallel screw ring gauges is
practicable only by the use of measuring machines of special design. The NPL displacement method
is used for measuring pitch diameter of internal screw threads. The NPL screw booklet provides the
details as follows.

The basis is to measure the pitch diameter of a screw ring gauge by comparison with the ‘pitch diameter’ of a precision annular groove in a solid cylindrical plug. The latter acts as a standard of pitch diameter. In practice, a number of annular grooves are finely ground in a cylindrical plug, the grooves corresponding in depth to a range of pitches and having flank angles close to nominal for various thread forms. Each groove is standardized for pitch diameter (ES) with thread-measuring cylinders in a floating carriage machine. Different designs of contact device have been used, but in essence they are a double-ended stylus carried in a bar. The stylus, which has a radiused form at each end, is selected to make contact at or near the pitch line to be measured; the stylus or the bar is mounted so as to be sensitive to contact pressure on either end.
The total displacement of the standard of pitch diameter ES in measurement is XR + XL. The total displacement of the ring of pitch diameter EG in measurement is YL + YR. Notice that the two displacements located by each of the stylus contacts are in opposite directions:

EG = XR + XL + YL + YR − ES

In the NPL machine, the various displacements are imparted to a carriage upon which the standard and/or ring may be mounted and which can be moved in a straight line parallel to the stylus. A micrometer on each side registers the position of the carriage. Two accurate straight-line motions of the stylus are provided: one normal to the face of the carriage, to move from thread to thread; the other, in a plane parallel to the face of the carriage, to locate a diameter. Nowadays, the displacement method is available in coordinate measuring machines (CMMs). The major and minor diameters may also be measured using a suitable sharp-radius stylus and a precision plane-faced gap of known size as a standard.

Fig. 11.22 NPL displacement method
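The displacement bookkeeping can be sketched as follows; all displacement values here are hypothetical, chosen only to show the sign convention.

```python
def ring_pitch_diameter(E_S, X_R, X_L, Y_L, Y_R):
    # NPL displacement method: the standard-groove displacements (X_R + X_L)
    # and the ring displacements (Y_L + Y_R) act in opposite senses, so
    # E_G = X_R + X_L + Y_L + Y_R - E_S.
    return X_R + X_L + Y_L + Y_R - E_S

# Hypothetical displacements (mm) against a 20.000-mm standard groove:
print(ring_pitch_diameter(20.000, 12.0, 12.0, 9.0, 9.0))  # 22.0
```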

Illustrative Examples

Example 1 Calculate the diameter of best size of a wire for an M 20 x 2.5 screw.

Solution: Pitch of thread P = 2.5 mm


θ = 60° for metric thread ..... (given)
Best size wire diameter
(db ) = (P / 2) × sec (θ/2)
= (2.5 / 2) × sec (60/2) = 1.4433 mm

Example 2 An M20 × 2.5 plug screw gauge is checked for effective diameter by a floating carriage microm-
eter with best size wire and the following readings were noted:
(i) Diameter of standard cylinder = 18.001 mm
(ii) Micrometer reading over standard cylinder with two wires of same diameter = 14.6420 mm.
(iii) Micrometer reading over the plug screw gauge with the wires of same diameter = 14.2616 mm.
Calculate the effective diameter of the gauge by neglecting rake and elastic compression errors.

Solution: Effective diameter = T + P


where T = Dimensions under the wires.
And T = (R 2 − R 1 ) + Diameter of master cylinder
R1 = Reading over the wires when cylinder is positioned = 14.6420 mm
R2 = Reading over the wires when gauge is positioned = 14.2616 mm
T = (14.2616 − 14.6420) + 18.001 = 17.6206 mm
We know that for metric threads
P = 0.866p − d
where d = (p/2) sec θ, θ being the half-included angle.
∴ d = (2.5/2) sec 30° = 1.4434 mm
P = 0.866p − d = 0.866 × 2.5 − 1.4434 = 0.7216 mm
∴ effective diameter E = T + P = 17.6206 + 0.7216 = 18.3422 mm

Example 3 For M 16 × 2mm external threads, calculate the best-size wire diameter and the difference
between size under wire and effective diameter

Solution: Pitch of thread P = 2 mm, θ = 60° for metric thread ..... (given)
Best-size wire diameter, db
db = (P/2) × sec (θ / 2) = (2/2) × sec (60/2) = 1.154 mm
Pitch value, P
P = 0.866p − d
= [0.866 × 2] − 1.154 = 0.577 mm
Effective diameter, E
E = Diameter under wire + Pitch Value
E=T+P
∴ difference between effective diameter and size under wire = P = E − T
∴ E − T = 0.577 mm.

Example 4 Calculate the effective diameter if


(i) Micrometer reading over standard cylinder with two wires of diameter = 15.64 mm
(ii) Micrometer reading over the gauge with two wires as 15.26 mm and pitch of thread = 2.5
mm
(iii) Wire of 2-mm diameter and standard cylinder = 18 mm

Solution: Dm = Diameter over standard cylinder = 15.64 mm


Ds = Diameter over plug screw gauge = 15.26 mm
p = Pitch thread = 2.5 mm
d = Diameter of wire =2 mm
D = Standard cylinder diameter = 18 mm
The wire diameter, db = 2 mm …… (given)
The pitch value, P
∴ P = 0.866 p − d
P = [ (0.866 × 2.5) − 2 ]
P = 0.165 mm.
Value of diameter under wire, T
∴ T = [ Ds − Dm] + D
T = [ 15.26 − 15.64 ] + 18
T = 17.62 mm
Effective diameter, E
∴E = T+P
E = 17.62 + 0.165
E = 17.785 mm

Example 5 Calculate the effective diameter for an M 24 × 3 plug gauge by using a floating carriage microm-
eter for which readings are taken as follows:
(i) Micrometer reading over standard cylinder with two wires of diameter = 12.9334 mm
(ii) Micrometer reading over the plug screw gauge with two wires as 12.1124 mm
(iii) Diameter of standard cylinder = 22.001 mm. Best wire size was used for the above.

Solution: Dm = Diameter over standard cylinder = 12.9334 mm


Ds = Diameter over plug screw gauge = 12.1124 mm
p = Pitch thread = 3 mm
θ = 60° (metric thread)
D = Standard cylinder diameter = 22.001 mm
Best size wire diameter, d b
db = (P/2) × sec (θ/2) = (3/2) × sec (60/2) = 1.732 mm

Pitch value, P
P = 0.866p − db
= [0.866 × 3] − 1.732 = 0.8659 mm
Value of diameter under wire, T
∴ T = [Ds − Dm ] + D
T = [12.1124 − 12.9334] + 22.001
T = 21.18 mm
Effective diameter, E
∴E=T+P
E = 21.18 + 0.8659
E = 22.0459 mm

Review Questions
1. Explain the nomenclatures of screw thread with the help of a neat sketch.
2. Discuss the various types of pitch errors along with their causes and effects.
3. Name and describe the various methods of measuring the minor diameter of the thread.
4. With the help of suitable sketches describe the pitch-measuring machine for a thread gauge.
5. What is best-size wire? Calculate the diameter of the best wire for an M 20 × 2.5 screw.
6. Show that the best wire size for measuring effective diameter of threads is given by
db = (P/2) sec (θ/2)
7. Explain in brief the different corrections to be applied in the measurement of effective diameter
by the method of wire.
8. Sketch and describe a floating carriage micrometer and state its use.
9. Explain, the following methods for measuring effective diameter’ with the help of derivation
(a) Two-Wire Method (b) Three-Wire Method
10. For measuring the effective diameter of an M 10 × 1.5 thread gauge using wires of 0.895-mm diameter on a floating carriage micrometer, the readings taken are
(a) Micrometer reading over a standard cylinder of 8-mm diameter with the two wires = 2.4326 mm
(b) Micrometer reading over the gauge with the wires mounted = 3.0708 mm
Calculate the effective diameter.

11. Suggest a suitable method of inspection for the profile of screw thread with sketches.
12. Name the three most important dimensions of a vee-thread which controls the fitting of threads.
Show with a sketch all dimensions which are necessary to completely define a thread.
13. Define the pitch of a screw thread. Draw an illustrative line diagram of a pitch measuring machine
and describe its working. Explain how the graphs of accumulative and periodic errors look like.
14. Explain why three wires are used to measure a screw thread with a hand micrometer, while two wires are used on a floating carriage machine for the same purpose.
15. When measuring the major diameter of an external screw thread gauge, a 35.00-mm diameter
cylindrical standard was used. The micrometer readings over a standard gauge were 9.3768 mm and
11.8768 mm respectively. Calculate the thread-gauge major diameter.
16. What do you mean by a ‘drunken thread’? How is it produced? Describe the method to test drunk-
enness of a component machined on centres.
12 Metrology of Gears

“Metrology of gears checks smoothness of operation, freedom from vibration and noise ……”
P R Trivedi, GM Manufacturing, Mahindra Engineering and Chemical Products Ltd., MIDC Pimpari

MEASURING GEARS

Gears are essential elements in the drive systems of most machines and equipment. They are used in road, rail and air transport, and in a wide variety of industrial drives ranging from food processing, packaging and printing to quarrying, power generation, machine tools, robotics and steel making. In the past twenty-five years, there has been substantial development in gearing, resulting in a threefold torque density increase (relative to size and weight) and a reduction in real cost of 27%. This has been achieved by improvements in gear accuracy supported by improved gear metrology (e.g., probe-system measuring) and calibration accuracy. Lower uncertainty of calibration data has encouraged manufacturers to use the best metrology practices to reduce their measurement uncertainty and to improve gear accuracy.

The improved manufacturing capability of gear production equipment demands high-accuracy measurement equipment. The demand for quiet gears in all transmissions has increased due to changes in legislation and customer demand. Conventional gear measurement techniques are not able to identify the error sources, which are the cause of many noise and vibration problems. Existing gear metrology facilities in the world have the capability to measure gears up to 2.6 m in diameter and weighing up to 15 tonnes. In practice, gear tooth vernier, base tangent comparator, involute profile-measuring machine, profile projector, etc., are in use; but on the other hand, many considerably larger gears are produced but their tooth profiles cannot be measured. A portable tooth-profile-measurement instrument is still one of the requirements of the industry for checking gears during production and assessing wear or damage of gears during service.

12.1 INTRODUCTION

Gears are mechanical devices that transmit power and motion between axes in a wide variety of commercial
and industrial applications. They are widely used for speed reduction or increase, torque multiplication
Metrology of Gears 325

and resolution, and accuracy enhancement for positioning systems. They find applications in areas like
machine tools, automobiles, material-handling devices, rolling mills, ancillary machinery, and so on. Transmission efficiency of gears can be as high as 99 per cent, owing to the positive-drive characteristic of a gear as a power-transmission device. Such high efficiency depends upon the conformance of a gear's actual dimensions to its specified design dimensions. Along with the dimensions, the accuracy of their geometrical
forms has considerable effects on the smoothness of operation, freedom from vibration and noise, and
their working life. So, careful inspection of gears is inevitable. Gear types available include spur
or pinion gears, change gears, cluster gears, internal gears, differential end gears, racks, helical gears, her-
ringbone gears, worm wheels, worms, miter or bevel gears, miter or bevel gear sets, hypoid gears, gear
stock and pinion wire, and gear blank. Metric gears are characterized by their millimetre-based module
designation.
Gears are made from a wide variety of materials with many different properties. Factors such as
design life, power-transmission requirements, noise and heat generation, and presence of corrosive ele-
ments contribute to optimization of gear material. Common metal materials of construction for gears
(metric—all styles, gear stock, gear blanks) include aluminum, brass, bronze, cast iron, steel, hardened
steel, and stainless steel. Plastic and other materials that may be used include acetal, Delrin, nylon, and
polycarbonate. Combination gears can have plastic teeth with metal inserts. An important environmen-
tal parameter to consider is the operating temperature.

12.1.1 Gear Wheel


The gear wheel is a basic mechanism. Its purpose is to transmit rotary motion and force. A gear is a
wheel with accurately machined teeth round its edge. A shaft passes through its centre and the gear may be keyed to the shaft. Gears are used in groups of two or more. A group of gears is called a gear train.
The gears in a train are arranged so that their teeth closely interlock or mesh. The teeth on meshing
gears are the same size so that they are of equal strength. Also, the spacing of the teeth is the same on
each gear. An example of a gear train is shown in Fig. 12.1.

Single gear Gear train


Fig. 12.1 Schematic of gear and gear train

12.1.2 Rotation Direction


When two spur gears of different sizes mesh together, the larger gear is called a wheel, and the smaller
gear is called a pinion. In a simple gear train of two spur gears, the input motion and force are applied
to the driver gear. The driven gear transmits the output motion and force. The driver gear rotates the
driven gear without slipping. The wheel or the pinion can be the driver gear. It depends on the exact
function the designer wishes the mechanism to fulfill. When two spur gears are meshed, the gears rotate
in opposite directions, as shown in Fig. 12.2.

Fig. 12.2 Direction of gears in mesh

12.2 TYPES OF GEARS

1. Bevel Gears These gears have teeth cut on a cone instead of a cylinder blank. They are used
in pairs to transmit rotary motion and torque where the bevel gear shafts are at right angles (90 degrees)
to each other. An example of two bevel gears is shown in Fig. 12.3 (a).

2. Crossed Helical Gears These gears also transmit rotary motion and torque through a right
angle. The teeth of a helical gear are inclined at an angle to the axis of rotation of the gear as shown
in Fig. 12.3(b).

3. Worm and Worm Wheel A gear, which has one tooth, is called a worm. The tooth is in
the form of a screw thread. A worm wheel meshes with the worm. The worm wheel is a helical gear
with teeth inclined so that they can engage with the thread of the worm. Like the crossed heli-
cal gears, the worm and worm wheel transmit torque and rotary motion through a right angle. An
application of the worm and worm wheel used to open lock gates is shown on the left-hand side
in Fig. 12.3 (c).

(a) (b) (c) (d)

(e) (f) (g)

(h) (i)

(j)
Fig. 12.3 (a) Bevel gears (b) Crossed helical gears (c) Worm and worm wheel (d) Single
helical gear (e) Double helical gear (f) Spiral bevel gears (g) Internal face-cut gears
(h) External face-cut gears (i) Rack and pinion (j) Spur gear

4. Helical Gear This gear is used for applications that require very quiet and smooth running, at
high rotational velocities. Parallel helical gears have their teeth inclined at a small angle to their axis of
rotation, as shown in Fig. 12.3 (d). Double helical gears give an efficient transfer of torque and smooth
motion at very high rotational velocities. An example of a double helical gear is shown in Fig. 12.3 (e).

5. Spiral Bevel Gears When it is necessary to transmit quietly and smoothly a large torque
through a right angle at high velocities, spiral bevel gears can be used. An example of spiral bevel gears
is shown in Fig. 12.3(f ).

6. Face Cut Gears Teeth of this type can be cut on the inside of a gear ring, an example of which
is shown in Fig. 12.3 (g). Internal gears have better load-carrying capacity than external spur gears. They are
safer in use because the teeth are guarded. An example of an external face cut gear is shown in Fig. 12.3 ( h).

7. Rack and Pinion This is used for converting rotary motion to linear motion. A rack-and-pinion
mechanism [shown in Fig. 12.3 (i)] is used to transform rotary motion into linear motion and vice versa.

8. Spur Gears A spur gear is one of the most important ways of transmitting a positive motion
between two shafts lying parallel to each other. These types of gears constitute a large proportion of
the gears in use today. A gear of this class may be likened to a cylindrical blank, which has a series of
equally spaced grooves around its perimeter so that the projections on one blank may mesh in the
grooves of the second. As the design should be such that the teeth in the respective gears are always in
mesh, the revolutions made by each are definite, regular and in the inverse ratio of the numbers of teeth
in the respective gears. This ability of a pair of well-made spur gears to give a smooth, regular, and
positive drive is of the greatest importance in many engineering designs. An example of two spur gears
in mesh is shown in Fig. 12.3 (j). This chapter confines the discussion of gear metrology to straight-tooth
involute gears, known as 'spur' gears.

12.3 SPUR GEAR TERMINOLOGY

Spur gears are also called straight-tooth or involute gears. Spur gears mate or mesh via teeth with very specific
geometry. A spur gear's pitch is a measure of tooth spacing and is expressed in several ways. Circular
pitch (CP) is a direct measurement of the distance from one tooth centre to the adjacent tooth centre,
measured around the pitch circle. Diametral pitch (DP) is the ratio of the number of teeth to the pitch
diameter (in inches) of a gear; a higher DP therefore indicates finer tooth spacing. This is the more
common pitch designation for gears with English design units. Module (mod or M) is used for metric
gears and is the ratio of pitch diameter (in mm) to the number of teeth; a higher module therefore
indicates coarser tooth spacing. Pressure angle is another specification of tooth form and is the angle of
tooth drive action, i.e., the angle between the line of force between meshing teeth and the tangent to the
pitch circle at the point of mesh. Gears must have the same pitch and pressure angle in order to mesh.
Other important gear-size specifications include the number of teeth, face width, and length. Some of
the important terminologies of a spur gear are defined as follows:

Fig. 12.4 Spur gear terminology (the figure labels the outside or blank diameter, pitch circle, addendum circle, dedendum or root circle, addendum, dedendum, whole depth, working depth, clearance, circular pitch, circular tooth thickness, and centre distance)

i. The pitch circle (diameter) is the circle (diameter) representing the original cylinder which
transmits motion by friction and its diameter is the pitch circle diameter.
ii. The centre distance of a pair of meshing spur gears is the sum of their pitch circle radii. One
of the advantages of the involute system is that small variations in the centre distance do not
affect the correct working of the gears.
iii. The addendum is the radial height of a tooth above the pitch circle.
iv. The dedendum is the radial depth below the pitch circle.
v. The chordal addendum is the distance from the top of the tooth to the chord connecting the
circular thickness arc.
vi. The chordal thickness is the thickness of a tooth on a straight line or chord on the pitch circle.
vii. The clearance is the difference between the addendum and the dedendum.
viii. The whole depth of a tooth is the sum of the addendum and the dedendum.
ix. The working depth of a tooth is the maximum depth to which the tooth extends into the tooth
space of a mating gear. It is the sum of the addenda of the two mating gears.
x. The addendum circle is that which contains the tops of the teeth and its diameter is the outside
or blank diameter.
xi. The dedendum or root circle is that which contains the bottoms of the tooth spaces and its
diameter is the root diameter.
xii. Circular tooth thickness is measured on the tooth around the pitch circle, that is, it is the length
of an arc.
xiii. Circular pitch is the distance from a point on one tooth to the corresponding point on the next
tooth, measured around the pitch circle.
xiv. The module is the pitch circle diameter divided by the number of teeth.

xv. The diametral pitch is the number of teeth per inch of pitch circle diameter. This is a ratio.
xvi. The pitch point is the point of contact between the pitch circles of two gears in a mesh.
xvii. Contact between the teeth of meshing gears takes place along a line tangential to the two base
circles. This line passes through the pitch point and is called the line of action.
xviii. The angle between the line of action and the common tangent to the pitch circles at the pitch
point is the pressure angle.
xix. The tooth face is the surface of a tooth above the pitch circle, parallel to the axis of the gear.
xx. The tooth flank is the tooth surface below the pitch circle, parallel to the axis of the gear. If any
part of the flank extends inside the base circle, it cannot have involute form. It may have another
form, which does not interfere with mating teeth, and is usually a straight radial line.
xxi. Backlash is the amount by which the width of a tooth space exceeds the thickness of the engag-
ing tooth on the pitch circles.
xxii. Clearance is the radial distance from the tip of a tooth to the circle passing through the bottom
of the tooth space, with the gears in mesh. The correct clearance is vital to the smooth motion
of the gears.
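The pitch relationships defined above (circular pitch, diametral pitch and module) can be checked numerically. The following is an illustrative sketch; the helper names are not from the text:

```python
import math

def circular_pitch(pitch_dia_mm, teeth):
    """CP: arc distance between adjacent tooth centres on the pitch circle (mm)."""
    return math.pi * pitch_dia_mm / teeth

def module(pitch_dia_mm, teeth):
    """m = pitch circle diameter / number of teeth (mm)."""
    return pitch_dia_mm / teeth

def diametral_pitch(teeth, pitch_dia_in):
    """DP: number of teeth per inch of pitch diameter (English units)."""
    return teeth / pitch_dia_in

# A 60 mm pitch-diameter gear with 20 teeth:
m = module(60, 20)                    # 3 mm
cp = circular_pitch(60, 20)           # CP = pi * m
dp = diametral_pitch(20, 60 / 25.4)   # DP = 25.4 / m (teeth per inch)
```

Note that two gears can mesh only if their module (and hence CP or DP) and pressure angle are the same.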

12.4 FORMS OF GEARS

There are two gear-tooth forms frequently used in engineering, viz., involute and cycloidal. The
cycloidal profile is rarely used in modern applications, being reserved for some special cases of heavy
and impact loading. The involute profile, on the other hand, has a wide scope of application for
general-purpose precision engineering.
For reasons of economy in production, modern gear teeth are almost exclusively cut to an involute
form. The involute is a curve which is generated by rolling a straight line around a circle, where the
end of the line will trace an involute. Figure 12.5 shows the construction of an involute. Using this
method to draw a gear profile would be very time-consuming, so an approximation known as 'Unwin's
construction' is used:

i. Mark positions A and B on the addendum and base circles.
ii. Divide the distance AB into three equal parts to locate point C.
iii. From the point C, draw a tangent to the base circle. The point of tangency is D.
iv. Divide CD into four equal parts to locate point E.
v. With E as centre and radius EC, draw an arc through C from the addendum circle to the
dedendum circle. Draw the fillet radius to complete this part of the tooth profile.

Fig. 12.5 Diagram to draw involute tooth profile

An involute profile has some prominent advantages over other types of profiles: gears possessing an
involute profile have the same pressure angle, variations in the centre distance between two spur gears
in mesh have no effect on the velocity ratio, the face and flank form a continuous curve, and all gears
having the same pitch and pressure angle work correctly together. Tooth profiles of spur gears can be
measured accurately by establishing how far the tooth form conforms to the theoretical involute profile
desired by the designer.

12.5 QUALITY OF (SPUR) GEAR

As in the case of limits, fits and tolerances for ordinary engineering components, a system has
been evolved for grades and tolerances of gear toothing. As per the requirements of draft ISO
recommendation number 1328, 'Accuracy of parallel involute gears', and the relevant Indian Standard
specifications, viz., IS: 4702, 4725, 4058 and 4059, gears have been classified into 10 quality (accuracy)
grades, from 3 (high-precision gears) to 12 (coarse-quality, low-speed gears). BS 436:1970 and DIN
3963:1977 categorize spur gears into 12 classes, and AGMA 2001-B88 categorizes them into 15 classes.
The quality grade assigned to any particular gear is the finest number selected for any of the three
elements, viz., the limits of tolerance on pitch, tooth profile and tooth alignment. Normally, for a pair
of mating gears, the elements of the components belong to identical accuracy grades, but they may
also have different grades as per agreement between the manufacturer and the user. Table 12.1 shows
the relationship between the quality of toothing, circumferential velocity, dynamic forces, and other
determining factors.

Table 12.1 Spur gear manufacturing processes and gear quality

For spur gears (type of loading: normal)

Manufacturing process        | Cast, pressed, forged | Machined with formed cutter | Milled generated | Finished by grinding, scraping, etc. | Fine finished
Quality of gear              | 10 to 12              | 9 to 10                     | 8 to 9           | 6 to 7                               | 4 to 5
Max. limit of velocity (m/s) | 0.8                   | 1.2                         | 5                | 8                                    | 15

Depending upon the quality grades involved, gears have been classified as per Indian Standard
specifications, as given in Table 12.2.
Besides the above-mentioned IS specifications, IS: 4071 lays down requirements for master gears,
which are intended for checking other working gears. As stated earlier, grades 1 and 2 (and in some
cases 3) are assigned to master gears. While selecting the quality grade, the designer should always
consider the cost involved.

Table 12.2 Quality grades of gears and ISS number

ISS Number Quality Grade Application


4702 3 and 4 High-precision gears
4725 5 and 6 Precision gears
4058 10, 11 and 12 Coarse quality, low speed gears
4059 7, 8 and 9 Medium speed gears

12.6 ERRORS IN SPUR GEAR

Before considering the methods/techniques and instruments used for gear-parameter measurement, we
must first define the types of error to be inspected and the amount of dimensional variation allowed,
which ultimately depends upon the required quality of the gear. As every gear rotates about an axis,
almost all the parameters have to be inspected about the axis of rotation. The actual axis of rotation
depends on several factors, but one has to ensure that the axis of inspection is very close to the axis of
rotation of the gear assembly. The axis of inspection may be the axis of the bore of the gear blank or,
if the gear is an integral part of a shaft, the axis of the shaft. From the metrological point of
view, the major aspects of any gear which need to be inspected are

i. Gear blank
ii. Teeth of single gear for tooth profile, for tooth alignment, for tooth spacing around gear and
tooth thickness
iii. Combined error of the gear in assembly

1. Gear Blank Run-out Errors Errors normally inspected for spur gear blanks are the fol-
lowing:

i. Tip diameter run-out error can cause excessive interference of the tooth tip with the root fillet of the
mating gear.
ii. Radial run-out of the reference surface may be due to a wrong setting on the machine during
manufacturing.
iii. Face run-out of the reference face is a run-out of the reference surface specified on the drawing. It occurs
due to wrong angular positioning of the blank with respect to the axis of manufacture.

2. Gear Tooth Profile Errors These errors are indications of deviation of the actual tooth
profile from the ideal tooth profile. The errors of tooth profiles are the following:

i. Tooth profile error— Tooth profile error is the summation of the deviations between the actual tooth
profile and the correct involute curve which passes through the pitch point, measured perpendicular to
the actual profile. The measured band is the actual effective working surface of the gear; however, the
tooth-modification area is not considered part of the profile error.
ii. Pressure angle error
iii. Basic circle error

3. Gear Tooth Errors


i. Tooth thickness error is the value obtained by subtracting the design tooth thickness from the
actual tooth thickness, measured along the reference surface or pitch circle.
ii. Tooth alignment error is also called distortion error. When a spur gear is cut, its tooth traces should
follow the ideal path, i.e., parallel to the axis of gear, but it is an indication of deviation from its
ideal path.

4. Pitch Errors These errors exist in tooth spacing.


i. Adjacent transverse pitch error is the algebraic difference between the actual transverse circular pitch
and its theoretical value. In other words, it indicates the departure measured on similar flanks of
two adjacent teeth. When the measurement is made over a span of more than one pitch, it is
called cumulative pitch error.
ii. Tooth to tooth pitch error is the difference between two consecutive pitches.
iii. Radial run-out is the eccentricity in the pitch circle. Run-out is twice the eccentricity.
iv. Single pitch error (fpt) is the deviation between the actual measured pitch value between any adjacent
tooth surface and theoretical circular pitch.
v. Accumulated pitch error (fp) is the difference between theoretical summation over any number of
teeth interval, and summation of actual pitch measurement over the same interval.
vi. Normal pitch error (fpb) is the difference between theoretical normal pitch and its actual measured
value.

The major element to influence the pitch errors is the run-out of gear flank groove.
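As a minimal sketch (with hypothetical dial readings, not from the text), the single pitch errors and the accumulated pitch error can be computed from a table of measured circular pitches:

```python
import math

def pitch_errors(measured_pitches, pitch_dia, z):
    """Return (single pitch errors fpt, cumulative pitch errors fp).

    fpt[i] = measured pitch i minus the theoretical circular pitch;
    fp[i]  = running sum of fpt, i.e., the accumulated departure over i+1 pitches.
    """
    theoretical = math.pi * pitch_dia / z          # theoretical circular pitch
    fpt = [p - theoretical for p in measured_pitches]
    fp, running = [], 0.0
    for e in fpt:
        running += e
        fp.append(running)
    return fpt, fp

# Hypothetical readings (mm) for a 4-tooth example, pitch diameter 12 mm:
fpt, fp = pitch_errors([9.43, 9.42, 9.41, 9.44], pitch_dia=12, z=4)
```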

5. Runout Error of Gear Teeth, Fr This error defines the run-out of the pitch circle. It
is the error in radial position of the teeth. Most often it is measured by indicating the position of
a pin or ball inserted in each tooth space around the gear and taking the largest difference. Alternatively,
particularly for fine-pitch gears, the gear is rolled with a master gear on a variable centre-
distance fixture, which records the change in the centre distance as the measure of teeth or pitch
circle run-out. Run-out causes a number of problems, one of which is noise. The source of this
error is most often insufficient accuracy and ruggedness of the cutting arbor and tooling system.

6. Lead Error, fb Lead error is the deviation of the actual advance of the tooth profile from
the ideal value or position. Lead error results in poor tooth contact, particularly by concentrating contact
at the tip area. Modifications such as tooth crowning and relieving can alleviate this error to some
degree.

7. Composite Error It is the combined effect of a number of errors acting simultaneously. This
error term includes two or more types of the individual errors, such as profile errors, pitch error, tooth
alignment error, tooth thickness error, etc. This type of error is measured by meshing a gear under
test with the master gear. Therefore, it is the range of difference between the displacement at the pitch
circle of a gear and that of the master gear meshed with it at a fixed distance when moved through one
revolution, when the driving and driven gear flanks are in proper contact. There are two methods of
measuring this error. Depending on the methods of measuring, the errors are described as single-flank
tooth-to-tooth composite error and double-flank tooth-to-tooth composite error. These errors indicate the difference
between the largest and the smallest centre distance observed during one revolution of the test gear.

8. Assembly Errors When gears are in assembly, they are checked for the following:
i. Centre Distance Error Centre distance is specified along with a (normally unidirectional)
tolerance. Any increase in the centre distance will result in increased backlash (clearance).
Backlash should be as small as possible, and the design should be based on the minimum centre distance.

ii. Axes Alignment Error For spur gears, the axes of the two mating gears must be parallel to
each other; any misalignment results in an axes alignment error.

12.7 MEASUREMENT AND CHECKING OF SPUR GEAR

After production, gears are checked and inspected to ensure correctness of different parameters and
smoothness of operation. Different methods are followed for measurement and checking of gears,
which are discussed in detail as follows.

12.7.1 Pitch Measurement


Pitch measurement may take place from one left flank to the next left flank, called left-flank pitch
measurement; similarly, the circular distance from one right flank to the next right flank is called
right-flank pitch measurement. These pitches can be measured either as absolute values or as relative values.
For pitch measurement, proper care has to be taken if the axis of generation of gear and the axis of
measurement are different, otherwise even a gear without any pitch error may appear to have a pitch
error. Circular pitch is defined as the distance from a point on one tooth to the corresponding point on
the next tooth, measured around the pitch circle. Therefore, the pitch can be measured either by a step-
by-step method, i.e., measuring the distance from one point on one tooth to a similar point on the next
successive tooth or by the direct angular method, i.e., by measuring the position of a suitable point on
a tooth after a gear has been indexed through a suitable angle.

a. Using a Portable Pitch-Measuring Instrument A portable pitch-measuring instrument,
as shown in Fig. 12.6, is used to measure tooth-to-tooth pitch by the step-by-step method. This
instrument consists of one dial indicator and three measuring tips. One of the tips is the fixed measuring
tip; the second is the sensitive tip, whose position can be adjusted by an adjusting screw and whose
movement is transmitted through a leverage system to the dial indicator. The third tip is an adjustable
guide stop for

Fig. 12.6 Portable base pitch-measuring instrument

support. The distance between the fixed and sensitive tips is set equal to the base pitch of
the gear with the help of slip gauges. The properly set instrument is applied to the gear so that all three
tips make contact with the tooth profile. The reading on the dial indicator is the pitch error.

b. Two-Dial Gauge Method for Pitch Measurement In this method, two lever-type dial
gauges, as shown in Fig. 12.7 (b), are used on two adjacent teeth of a gear mounted in centres, as shown
in Fig. 12.7 (a). The gear under test is indexed through successive pitches to give a constant reading on
the first indicator, and any change in the reading on the second dial indicates pitch error.

Fig. 12.7 Two-dial gauge method for pitch measurement

12.7.2 Gear Tooth Profile Measurement
Gear tooth profile measurement is done by measuring the tooth profile of a spur gear, i.e., the involute
profile, accurately. All tooth profile errors are measured in the transverse plane of a spur gear. This can
be done in several ways.

a. Optical Projection Method In this method, an optical comparator and profile projector
(as shown in Fig. 12.8) (Plate 12) are used to magnify the profile of the gear under test, which is then
compared with the master profile. It enables quick checking of the profile, which is most useful in the
case of small-sized and thin gears.

b. Involute Measuring Machine In the case of a large-sized gear, the involute profile is checked
using an involute measuring machine. The gear under test is held on a mandrel. A ground circular disc
of the same diameter as the base circle of the gear under test is also mounted on the mandrel. The
dedicated machine consists of a straight edge, to which a dial gauge is attached, brought into contact
with the base circle of the disc. The straight edge is rolled around the base circle without slipping, with
the stylus at the measuring end of the plunger of the dial gauge resting on the tooth flank. When the
gear and disc are rotated, the stylus moves over the tooth profile, and the deviations from the true
involute profile are indicated on the dial gauge.

Fig. 12.9 Involute measuring machine

c. Tooth Displacement Method When the dedicated involute measuring machine discussed
above is not available, a vertical measuring machine (height gauge) is used for checking the profile of
a large-sized gear. Though it is a time-consuming method, it is best suited for calibration of a master
involute, and is used for very high-precision components. In this method, the gear under test is rotated
through small angular increments and the reading on the vertical measuring machine is noted. These
readings are compared with the theoretically calculated values at about five to ten places along the
tooth flank. A trial-and-error method is used to establish the required incremental angular positions.
Theoretical values may be calculated with respect to the angular positions, for example, as shown in
Fig. 12.10 (b), (c), (d), where φ = pressure angle and θ = angular position, as

L1 − L2 = rp cos φ (θ1 − θ2)

L3 − L1 = rp cos φ (θ3 − θ1)
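The relations above can be evaluated directly. A minimal sketch (names and values illustrative), with angles in radians:

```python
import math

def displacement_difference(r_pitch, phi, theta_a, theta_b):
    """L_a - L_b = rp * cos(phi) * (theta_a - theta_b), per the relations above."""
    return r_pitch * math.cos(phi) * (theta_a - theta_b)

# Pitch radius 45 mm, pressure angle 20 deg, angular positions 2 deg and 1 deg:
dL = displacement_difference(45, math.radians(20),
                             math.radians(2.0), math.radians(1.0))
# dL is the expected change in the height-gauge reading between the two positions
```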

d. Computer-Controlled Probe Scanning Method The measurements are taken with a
computer-controlled probe scanning at a constant rate, with a constant force, from the root of the tooth
to its tip. Such equipment is typically calibrated to 40 µin against NIST-traceable standards on a yearly
basis. Figure 12.11 (a) graphically shows the method of scanning. After the data is collected for all the
teeth, it is mathematically evaluated to determine profile form error and concentricity, as well as
base-circle shrinkage error and actual tooth thickness.
Two gear-scan trace reports are used for analysis. The first report is for an actual measured gear
that indicates significant shrinkage error. The second report is for a gear designed and moulded
as a replacement for the same application. The original, poorly moulded gear had been purchased off-
the-shelf from a commercial supply house.

12.7.3 Tooth Thickness Measurement


a. Using Gear Tooth Vernier Gear tooth thickness is the length of an arc measured on
the tooth around the pitch circle. Hence, it is called pitch-line tooth thickness. As tooth thickness

Fig. 12.10 Tooth displacement method

Fig. 12.11 Computer-controlled probe scanning method [(a) schematic of the profile scan]

varies from top to bottom, the instrument must measure tooth thickness at a specified position on the
tooth.
Gear Tooth Vernier is an instrument shown in Fig. 12.12 and is used for measuring pitch-line
tooth thickness. It consists of two perpendicular arms on which the main scales and vernier scales are
engraved. The vertical scale is used to set the depth (h), i.e., the chordal addendum on the gear at which
the pitch-line thickness is to be measured. The horizontal scale is used to measure the actual pitch-line
thickness, also called chordal thickness. The measured values are then compared with the calculated values.

Fig. 12.12 (a) Gear tooth vernier caliper (b) Measurement of gear tooth thickness
(Courtesy, Metrology Lab, Sinhgad College of Engg., Pune, India.)

Consider one gear tooth as shown in Fig. 12.13. Arc AEB is a part of the pitch circle and AB is a chord
of the pitch circle. W and h are the chordal tooth thickness and the chordal addendum on the gear,
respectively, at which the magnitude of W is to be measured. These values can be theoretically calculated
using the formula derived as follows:
From the figure, W = l(AB) = 2 l(AD)
Now, consider ΔAOD:
l(AD) = l(OA) sin θ and ∠AOD = θ = 360°/(4z)

Fig. 12.13 Tooth thickness at pitch line

where, z = number of teeth.


∴ W = 2 l(OA) sin[360/(4 z )]
= 2 R sin[360/(4 z )] (1)
where, R = l(OA) = l(OB) = l(OE) = pitch circle radius
We know,
module (m) = (pitch circle diameter)/(no. of teeth)
∴ m = 2 R/z
∴ R = (z . m)/2. putting this value in equation (1) we get,
∴ W = 2 [(z . m)/2] sin[360/(4 z)] (2)
∴ W = z . m sin[90°/z] (3)
From Fig. 12.12,
h = l(OC ) − l(OD) (4)
But, l(OC ) = l(OE ) + addendum
l(OC ) = R + m
Using Eq. 2, we get
l(OC) = [(z . m)/2] + m
l(OD) = R cos θ
l(OD) = z . m cos [90 / /z] ………(as addendum = module for metric gear)
Substituting this value in Eq. 4, we get
h = {(z . m) / 2} + (m) − {(z . m) / 2}{cos [90 / /z ]}
∴ h = {(z . m) / 2}.{1 + (2 / z ) − {cos [90 / /z ]} (5)
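Equations (3) and (5) can be checked numerically. A short sketch for a metric gear (addendum = module):

```python
import math

def chordal_dimensions(z, m):
    """Chordal thickness W (Eq. 3) and chordal addendum h (Eq. 5)."""
    theta = math.radians(90.0 / z)     # half tooth angle at the pitch circle
    W = z * m * math.sin(theta)
    h = (z * m / 2.0) * (1 + 2.0 / z - math.cos(theta))
    return W, h

# Example: z = 30 teeth, module m = 3 mm
W, h = chordal_dimensions(30, 3.0)     # W ≈ 4.710 mm, h ≈ 3.062 mm
```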

b. Constant Chord Method In the previous method, the measurement of pitch-line tooth
thickness through W and h depends upon the number of teeth. Therefore, in the case of a large set of
gears having different numbers of teeth, using the gear tooth vernier requires calculating W and h for
each gear, which becomes laborious and time-consuming. This limitation is overcome in the constant
chord method.
This method uses the property that if an involute tooth is considered symmetrically in close mesh
with the basic rack form then, as the gear rotates and each tooth comes into mesh with the rack, for a
given size of tooth (i.e., for the same module) the contact always occurs at points A and B, as shown in
Fig. 12.14, so the distance AB remains constant. Hence it is known as the constant chord. It is a useful
dimension, since it has the same nominal value for all gears of a common system, irrespective of the
number of teeth.

Fig. 12.14 Constant chord method (showing the rack form, the pitch line of the rack, the constant chord AB at depth d, pressure angle φ, pitch point P, the tangent to the base circle, and the pitch and base circles)

Refer to Fig. 12.14. The distance AB is the constant chord, situated at a depth d below the top face.
Line AP is tangent to the base circle.
∴ ∠CAP = φ
Constant chord,
AB = M = 2 l(AC) (1)
l(PD) = l(PE) = arc(PE) = [circular pitch]/4 = [π · PCD]/(4z)
∴ l(PD) = (π · m)/4 ........as module (m) = PCD/z
Consider ΔAPD: AP = PD cos φ
= [(π · m)/4] cos φ (2)
Now consider ΔPAC,
AC = AP cos φ
= [(π · m)/4] cos² φ
Putting this value in Eq. (1), we get the length of the constant chord:
l(AB) = 2 [(π · m)/4] cos² φ
∴ l(AB) = M = (π/2) · m · cos² φ
Now, to calculate the depth d, consider ΔPAC and Eq. (2):
PC = AP sin φ = [(π · m)/4] (sin φ · cos φ)
d = addendum − PC
= m − [(π · m)/4] (sin φ · cos φ)
∴ d = m {1 − (π/4)(sin φ · cos φ)}
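The constant chord M and its depth d depend only on the module and pressure angle, never on the tooth count; a minimal sketch of the two formulas derived above:

```python
import math

def constant_chord(m, phi_deg):
    """Constant chord M and its depth d below the tooth tip (metric gear)."""
    phi = math.radians(phi_deg)
    M = (math.pi / 2.0) * m * math.cos(phi) ** 2
    d = m * (1.0 - (math.pi / 4.0) * math.sin(phi) * math.cos(phi))
    return M, d

# Module 3 mm, pressure angle 20 deg (same result for any number of teeth):
M, d = constant_chord(3.0, 20.0)       # M ≈ 4.161 mm, d ≈ 2.243 mm
```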

c. Base Tangent Method The gear-tooth (chordal) thickness-measuring method using the gear
tooth vernier, discussed previously, does not give a very accurate result, since it depends upon two vernier
readings which are interdependent. Moreover, the measurement is made with the faces of the measuring
jaws, which are themselves a cause of inaccurate measurement. The base tangent method does away with
these difficulties by measuring the span over a convenient number of teeth between two parallel planes
which are tangential to the opposite tooth flanks. This span length is a tangent to the base circle, and the
distance is known as the base tangent length.
The base-tangent-length method of measurement follows from the geometric fact that if a normal is
drawn to an involute tooth profile, the normal will lie in a plane which is tangent to the base circle.
Therefore, if two parallel caliper jaws contact the tooth profile as shown in Fig. 12.15, the jaws will
touch the tooth profile tangentially. The operator makes the measurement with a 'feel', as illustrated in
Fig. 12.16. Consider a straight line ABC being rolled back and forth along the base circle. Its ends will
then trace opposite involutes passing through points A, A1, A2 and C, C1, C2. It can be seen that this
straight line is also the developed length of the arc confined between the initial points of generation of
the two involute curves. In other words,

W = AC = A1C1 = A2C2

Fig. 12.15 Measurement of tooth thickness by base tangent method
Fig. 12.16 Generation of a pair of opposed involutes by a common generator

To determine the distance W, consider the trigonometric relationship illustrated in Fig. 12.15:

W = arc AB + arc BC (1)
where, arc AB = tooth thickness at the base circle,
and arc BC = S × (base pitch),
where S = number of tooth spaces over which the measurement is made.
∴ arc BC = S (π · m · cos φ)

To determine arc AB, the tooth thickness at the base circle, the trigonometric relationship is
illustrated in Fig. 12.17.

Fig. 12.17 Tooth thickness at base circle

Now, arc AB = 2 (arc AD) = 2 (arc AC + arc CD) (2)
(arc AC)/Rb = inv φ radians (involute function of φ)
∴ arc AC = Rb (tan φ − φ)
∴ arc AC = [(z · m)/2] cos φ (tan φ − φ) (3)
θ radians = (arc EF)/Rp = (arc CD)/Rb
But, arc EF = ¼ [circular pitch]
= ¼ [π · m]
∴ θ = ¼ [(π · m)/Rp] = ¼ [π · m] · [2/(z · m)] = [π/(2z)] radians
∴ arc CD = Rb · θ = {[(z · m)/2] cos φ} · [π/(2z)] (4)
From the figure, arc AB = 2 (arc AC + arc CD); substituting the values from (3) and (4), we get
∴ arc AB = 2 {[(z · m)/2] cos φ (tan φ − φ) + {[(z · m)/2] cos φ} · [π/(2z)]}
∴ arc AB = z · m · cos φ [(tan φ − φ) + π/(2z)] (5)
Now consider Eq. (1), where arc AB (the tooth thickness at the base circle in Fig. 12.15) equals arc AB
in Fig. 12.17. Considering Fig. 12.15 again for the next part of the derivation,
∴ W = arc AB + arc BC
Substituting the values of arc AB and arc BC in the above equation, we get
∴ W = z · m · cos φ [(tan φ − φ) + π/(2z)] + S (π · m · cos φ)
∴ theoretical base tangent length = W = z · m · cos φ [(tan φ − φ) + π/(2z) + πS/z]

where,
z = number of teeth, m = module, φ = pressure angle, and S = number of tooth spaces contained within
the span W.
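The derived base tangent length formula can be evaluated as follows (a sketch; the span here covers S + 1 teeth):

```python
import math

def base_tangent_length(z, m, phi_deg, S):
    """Theoretical base tangent length W over S tooth spaces (S + 1 teeth)."""
    phi = math.radians(phi_deg)
    inv_phi = math.tan(phi) - phi      # involute function of phi
    return z * m * math.cos(phi) * (inv_phi + math.pi / (2 * z) + math.pi * S / z)

# z = 30 teeth, module 3 mm, 20 deg pressure angle, spanning 4 teeth (S = 3):
W = base_tangent_length(30, 3.0, 20.0, 3)   # ≈ 32.258 mm
```

This value would then be set on the tangent comparator with slip gauges, as described below.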
Instruments by which the base tangent length measurement can be made are the David Brown tangent
comparator, and vernier calipers and micrometers having suitable fixtures on the anvils [as shown in
Fig. 12.18 (a)].

Fig. 12.18 Gear tooth micrometer for measuring base-tangent length

Fig. 12.19 Indicating snap gauge with special attachment for measuring base tangent length
(Mahr GMBH Esslingen)

After calculating the value theoretically, using the above-derived formula, the base tangent length is set
in the tangent comparator using a set of slip gauges, as shown in Fig. 12.19 (a). This becomes the
standard reading for the comparator. The gear span distance is then checked, and the variation from the
set value can be read on the micrometer dial provided on the device, as shown in Fig. 12.19 (b).

iv. Gear Rolling Tester Gears are a fundamental means of transferring motion and power. Like
all machine components, they are subject to deviations resulting from the manufacturing process. The
challenge of producing consistent quality implies the use of inspection methods suited to shop-floor
requirements, so that experts from quality assurance increasingly demand simple and quick inspection
methods; checking composite error meets this need. Figure 12.20 shows a gear tester made in 1923 to
check the involute accuracy of spur gears. It was the first instrument with a variable base-circle
adjustment, and a well-known automobile manufacturer used this unit for 40 years. In a rolling test, we
measure the variations of centre distance when the gear under test is made to rotate with a mating
master gear, preferably of known high quality. We can measure characteristics during rotation and test
the general accuracy of a gear by checking its composite error. This test is generally known as the rolling
gear test. The gear rolling tester for carrying out this test can use either single contact (fixed centre-
distance method) or dual contact (variable centre-distance method); this refers to action happening on
one side of the tooth or simultaneously on both sides, and is also commonly referred to as single- and
double-flank testing. Because of its simplicity, dual-contact testing is more popular than single-contact
testing.

Fig. 12.20 Gear tester

For a precise assessment of the gears' operational behavior, manufacturers make use of the field-proven double-flank gear-roll testers. These instruments help us quickly determine the existing composite-process errors on gears. Double-flank gear-roll testers are based on well-proven mechanical inspection procedures for external and internal spur gears, worm gears, and bevel gears. Accept/reject results may be evaluated to ISO, DIN, JIS, AGMA, and/or user-specified standards for traditionally cut metal, plastic injection molded and powder-metal gears. The exploitation of admissible tolerances helps reduce the production time.
To understand the use of a gear-rolling tester, let us define some terms related to the spur gear profile (refer Fig. 12.21).

Fig. 12.21 Radial composite deviations (Fi″, fi″, Fr″) recorded over one 360° revolution

Total Radial Composite Deviation, Fi″ (TCV) Fi″ is the difference between the maximum and minimum values of the working centre distance, a″, which occurs during a radial (double-flank) composite test, when the product gear, with its right and left flanks simultaneously in tight mesh contact with those of a master gear, is rotated through one complete revolution.

Tooth-to-Tooth Radial Composite Deviation, fi″ (TTCV) fi″ is the value of the radial composite deviation corresponding to one pitch, 360°/z, during one complete cycle of engagement of all the product gear teeth.

Radial Run-out, Fr″ (RRO) Fr″, the value of the radial run-out of the gear, is the difference between the maximum and the minimum radial distance from the gear axis, as observed by removing the short-term or undulation pitch deviations and analyzing the long-term sinusoidal waveform.

Radial Centre Distance, a″ (CD) The radial centre distance permits calculation of size and functional tooth thickness. See Fig. 12.22 for the roll-test graphic display pattern. In this technique, the gear is forcefully meshed with a master gear such that there is intimate tooth contact on both sides and, therefore, no backlash. The contact is forced by a loading spring. As the gears rotate, there is variation of centre distance due to various errors, most notably run-out. This variation is measured and is a criterion of gear quality. A full rotation presents the total gear error, while rotation through one pitch gives the tooth-to-tooth error.

Fig. 12.22 A typical plot for a dual-contact running test report, showing the total (one-turn) running error and the one-pitch running error

Single Flank Testing (Single Contact Testing) In this test, the gear is mated with a
master gear on a fixed centre distance and set in such a way that only one tooth side makes contact.
The gears are rotated through this single flank contact action, and the angular transmission error of the
driven gear is measured. This is a tedious testing method and is seldom used except for inspection of
the very highest precision gears.
Gear roll testers come with a frictionless measuring carriage, which rides on high-precision roller bearings and guarantees high measuring accuracy and repeatability of results. The setting carriage is opposed to the measuring carriage, so that tests can be performed with two production gears or with one production gear meshed with a master gear. The measuring carriage transmits the centre-distance deviations to a pick-up or simply to a dial indicator.

Fig. 12.23 Cross-sectional view of gear rolling tester

Double Flank Gear Roll Testing (Double Contact Testing) Two gears are rotated in tight mesh, without play, against each other. Under the influence of a pressure applied in the direction of the radial centre distance, at least one left and one right gear flank are meshed (double-flank meshing). This causes variations in the radial centre distance. As two tooth flanks are always in mesh, the measurement result represents the sum of the variations of both tooth flanks. For quality assessment, the measuring results are defined as total radial composite deviation Fi″, tooth-to-tooth radial composite deviation fi″, and radial run-out Fr″. Additionally, this method allows users to compare nominal vs actual radial centre distance with upper and lower tolerances and to make Go/No-go decisions.

Fig. 12.24 Gear rolling tester with master gear, gear under test and dial indicator
(Courtesy, Mahr GMBH Esslingen)

Design Features of Gear Rolling Tester Gear rolling testers employ a virtually frictionless,
backlash-free measuring carriage, which rides on high-precision roller bearings or parallelogram leaf
springs. This exceptionally sound mechanical design is coupled with a solid and stable machine base for
unrivaled accuracy and repeatability. Generally, gears up to a maximum diameter of 300 mm are tested; 150-mm or still smaller ones can also be tested. The accuracy is of the order of ± 0.0001 mm. Measurement data may be
read on a Milligraph high-speed recorder or an electronic evaluation instrument in connection with an
inductive probe. For simple gear roll tests, mechanical dial comparators or dial indicators may be used.
Modern evaluation possibilities as well as PC hardware and software complete these testers so that they
have become important means for quick and easy quality control (refer Fig. 12.25).

Features of Gear Rolling Tester

1. Stationary Centres and Arbors Because the centres or arbors remain stationary during in-
spection, the system prevents concentricity variations from influencing the measurement results. A spe-
cial drive mechanism within the measuring carriage rotates the gear being checked around the mounting
element.
2. Solid, rigid design

Fig. 12.25 Gear rolling tester with PC interface


(Courtesy, Mahr GMBH Esslingen)

3. Easy-to-operate, but extremely rigid modular components


4. True modular system — Modular components allow you to perform a variety of measurement tasks
on a variety of gear configurations and types.

5. Expansion Possibility Additional modular components are available so that existing ma-
chines may be expanded and adapted to additional measurement tasks. It is also equipped with height-
adjustable quill or driving block.
6. Directly Variable Measuring Force It enables quick and easy measurement of gears of varying size and quality. For internal gears, the measuring-force direction may be reversed.
7. Quick-change Feature of the Measuring Carriage In case of double-flank gear-roll test-
ing, rapid disengagement of mating gears lets you quickly and easily change gears to be measured with-
out resetting the centre distance of the axes.
8. It is particularly suitable for shop-floor measurements next to the gear-cutting machines in order
to perform a first inspection of the manufactured gears.

12.8 INSPECTION OF SHRINKAGE AND PLASTIC GEARS

Moulded gears do not shrink in any simple fashion such as a photographic reduction. There are a
minimum of four distinct shrinkage rates for any gear. Even simple features such as outside and
root diameters must be carefully inspected. A simple caliper check will often miss important features.
These diameters must be inspected for total form error as well as concentricity to the principal bore or
datum. For precision inspection purpose, probe the tip and root of each tooth and construct a best-
fit diameter with respect to the gear datum [as shown in Fig. 12.26(b)]. Inspecting the gear involute
profiles requires just as much attention to detail. Each tooth should be inspected since the moulding
process can result in errors anywhere on the gear. The actual form errors of the teeth should be mea-
sured directly so that these errors can be eliminated in the moulding process or compensated for in
the mould cavity.
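The tip-and-root probing described above can be illustrated with a short sketch. This is a simplified, hypothetical evaluation (not a specific CMM routine): it takes probed tip points expressed relative to the gear datum axis and reports a best-fit (mean-radius) diameter, the total form error, and a first-order concentricity estimate.

```python
import math

def tip_diameter_stats(points):
    """points: probed (x, y) coordinates of tooth tips, measured
    relative to the gear datum axis.

    Returns (best_fit_dia, form_error, ecc):
      best_fit_dia -- twice the mean radial distance about the datum
      form_error   -- max minus min radial distance (total form error)
      ecc          -- distance from datum to the centroid of the
                      points (a simple concentricity estimate)
    """
    radii = [math.hypot(x, y) for x, y in points]
    best_fit_dia = 2 * sum(radii) / len(radii)
    form_error = max(radii) - min(radii)
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return best_fit_dia, form_error, math.hypot(cx, cy)

# 20 tooth tips on a 1.254-unit circle whose centre is offset
# 0.0002 units from the datum, simulating a concentricity error.
pts = [(0.627 * math.cos(t) + 0.0002, 0.627 * math.sin(t))
       for t in (2 * math.pi * k / 20 for k in range(20))]
dia, form, ecc = tip_diameter_stats(pts)
```

With this synthetic data the routine recovers the nominal diameter, reports a form error of roughly twice the offset, and detects the 0.0002 eccentricity, which is exactly the kind of defect a simple caliper check would miss.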

12.9 MEASUREMENT OVER ROLLERS

For rapid and precise measurement of dimension over balls, roundness and conicity of internal
gears in any position and at any depth, a dial bore gauge for inside serrations can be used as shown
in Fig. 12.27. Measuring a gear over rollers placed in opposite tooth spaces is a convenient method of checking tooth thickness and obtaining some indication of the accuracy of the involute profile.
The gauge consists of a few modular units for quick conversion of the gauge to another gear size
within the large total measuring range. Two or three different sizes of rollers can be used so that varia-
tions at several places on the tooth flanks can be detected.

12.10 RECENT DEVELOPMENT IN GEAR METROLOGY

The improved manufacturing capability of gear-production equipment demands higher-accuracy measurement equipment. The uncertainty of calibration data must, as a consequence, be reduced to realize the full benefit of the investment that industry is making in new production equipment and measuring equipment. Indeed, a facility made available on the commercial level by recent developments in the field of gear metrology would encourage manufacturers to invest in inspection and promote the use of best metrology practice in industry to improve gear accuracy.

Fig. 12.26 Inspection of shrinkage and plastic gears: (a) gear nomenclature (outside diameter, root diameter, base circle diameter, base tooth thickness)

Fig. 12.26 (Continued): (b) best-fit diameter construction from probed tooth tips and roots; (c) a sample inspection report for a 20-tooth gear listing root diameter, outer diameter, form errors and centre deviations

With the present development of Computer Numerical Control (CNC), many inspection machines for lead/involute-profile checking and pitch measurement have been greatly simplified. Brief descriptions of some commercially available machines follow.

DM = ball diameter of the ball anvil
Mi = measurement over balls
Mi + 2DM = setting value (length of the gauge block required)

Fig. 12.27 Measurement of dimension over balls
(Courtesy, Mahr GMBH Esslingen)
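The setting relation in Fig. 12.27 reduces to one line of arithmetic; a quick sketch (the numeric values are illustrative only):

```python
def setting_value(Mi, DM):
    """Length of the gauge-block stack required to set the dial bore
    gauge: Mi + 2*DM, i.e. the measurement over balls plus one ball
    diameter on each side."""
    return Mi + 2 * DM

# Example: measurement over balls 48.25 mm, ball diameter 3.0 mm.
stack = setting_value(Mi=48.25, DM=3.0)  # -> 54.25 mm
```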

The CNC-controlled, four-axis gear inspection centre shown in Fig. 12.28 incorporates the most advanced coordinate measurement technology in the world and can automatically supply extremely accurate verification of gear-tooth topography. Table 12.3 lists the important specifications.

Fig. 12.28 CNC-controlled, four-axis gear inspection centre
(Courtesy, Milwaukee Gear Company)

Table 12.3 Important gear inspection centre's specifications

Dimensions (mm)
Granite base, with integrated control: 991 × 1168
Table diameter: 330
X-axis travel: 254
Y-axis travel: 304
Z-axis travel: 304

System dimensions (mm)
Depth: 1168
Width: 1626
Height: 1715
Total system weight: 3090 kg

Capacities
Outside diameter: 381 mm
Distance between centres: 508 mm
Measuring length: 304 mm
Helix angle: 0°–90°
Weight on table: 135 kg
Tooth size (DP/Module): 1–50 / 0.5–25

Review Questions

1. Discuss the importance of inspecting gears.


2. Name the various elements of the spur gear which are checked for accuracy of the gear.
3. Explain the following spur gear terminologies:
a. Pitch circle
b. Chordal thickness
c. Circular tooth thickness
d. Tooth flank
e. Clearance
4. Discuss the types of errors in spur gear.
5. Explain pitch errors and their effects.
6. Enumerate the different methods of inspecting spur gears.

7. Explain the two-dial gauge method for pitch measurement.


8. Explain gear tooth profile measurement by tooth displacement method.
9. Explain gear tooth profile measurement using an involute measuring machine.
10. Enumerate the different methods of tooth thickness measurement and explain gear tooth vernier.
11. Describe the constant chord method.
12. Describe the base tangent method.
13. Explain the construction, working and applications of gear rolling testers.
14. Explain how roundness and conicity of internal gears in any position and at any depth can be
measured.
15. Write short notes on
a. Quality of (spur) gear
b. Gear blank run-out errors
c. Tooth profile errors
d. Gear tooth errors
e. Checking of composite error
16. What are the various methods of specifying the pitch of a gear? Which one of them is a directly measured quantity? Discuss their interrelationship.
17. Describe briefly the ‘gear tooth vernier caliper’. Discuss the two possible pairs of dimensions for
measuring tooth thickness. Which has the widest application and why?
18. Write briefly on optical methods of gear inspection.
19. Distinguish between the elemental checks and the composite checks of a spur gear.
20. Draw a typical graph obtained in a gear roller tester. Locate the position of maximum tooth load
on the same graph. Draw the curve for run-out.
13 Miscellaneous
Measurements

‘In case of irregular-shaped parts, trigonometry is used to perform miscellaneous metrology by dividing the shape into many profiles and contours…’

INTRODUCTION AND NEED OF MISCELLANEOUS MEASUREMENTS
Irregularly shaped parts do not have a defined single-phase geometry. Instead, their geometry is divided into many profiles and contours. Sheet-metal parts such as car bodies, buckets, cups, utensils, mixers and grinders, trucks and many other parts have profiles for which a general measurement with existing instruments is very difficult. For such measurements, instead of using only measuring tools, some parts can be inspected and, based upon simple trigonometric calculations, we can get the required results. These methods can be typically applied to problems faced during actual measurement.
13.1 MEASUREMENT OF TAPER ON ONE SIDE

Measurement of taper on one side can be done with the help of two rollers of different diameters. In Fig. 13.1, X and Y are the centres of the two rollers. Line PQ is drawn perpendicular to XY, the line joining the two centres. XA is a horizontal line, while XB is parallel to the tapered surface of the piece and is inclined at an angle β to the horizontal. The angle YXA = β/2.

Fig. 13.1 Taper-on-one-side measurement: discs of dia. D1 and D2 separated by slip gauges of length L

In the right-angled Δ XAY,

tan (β/2) = YA / XA = [(D1 − D2)/2] / [(D1 + D2)/2 + L]

where L is the length of the slip gauges and D1 and D2 are the diameters of the rollers.
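Numerically, the relation above can be evaluated as follows (the roller diameters and gauge length are illustrative):

```python
import math

def taper_angle_deg(D1, D2, L):
    """Taper angle beta (degrees) from two rollers of diameters
    D1 > D2 separated by slip gauges of length L:
    tan(beta/2) = ((D1 - D2)/2) / ((D1 + D2)/2 + L)."""
    half = math.atan(((D1 - D2) / 2) / ((D1 + D2) / 2 + L))
    return math.degrees(2 * half)

beta = taper_angle_deg(D1=20.0, D2=10.0, L=50.0)  # mm
```

For these values tan(β/2) = 5/65, giving a taper angle of roughly 8.8°.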

13.2 MEASUREMENT OF INTERNAL TAPER

In this method, two rollers of different diameters are used. To measure heights with respect to the surface, height gauges and depth gauges are also used. The small roller of radius R2 is placed gently in the lower portion, and the depth H1 from the upper surface of the hole to the top of the ball is measured. The larger roller of radius R1 is positioned as shown in Fig. 13.2 (a). A1 and A2 are the centres of the two balls. Draw A1B parallel to the tapered hole line PQ and A2B perpendicular to A1B.

Fig. 13.2 Measurement of internal taper: (a) larger roller outside the groove, (b) both rollers inside

Then ∠A1A2B = X/2, where X is the angle of the tapered hole, and

sin (X/2) = A1B / A1A2 = (R1 − R2) / (H1 + H2 + R2 − R1)

from which the taper angle X can be found.

In case of both the balls lying inside the groove, as shown in Fig. 13.2 (b), the taper angle X is found from

sin (X/2) = (R1 − R2) / [H1 + R2 − (H2 + R1)]
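Both cases reduce to a one-line computation. A hedged sketch, with illustrative ball sizes and depths:

```python
import math

def internal_taper_deg(R1, R2, H1, H2, both_inside=False):
    """Included angle X (degrees) of a tapered hole from two balls of
    radii R1 > R2, with H1 and H2 the measured depths/heights as
    defined in Fig. 13.2 for the two set-ups."""
    if both_inside:                       # Fig. 13.2 (b)
        denom = H1 + R2 - (H2 + R1)
    else:                                 # Fig. 13.2 (a)
        denom = H1 + H2 + R2 - R1
    return math.degrees(2 * math.asin((R1 - R2) / denom))

X = internal_taper_deg(R1=10.0, R2=5.0, H1=40.0, H2=4.0)  # mm
```

Here sin(X/2) = 5/39, so the included angle comes out near 14.7°.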

13.3 MEASUREMENT OF INCLUDED ANGLE OF INTERNAL DOVETAIL

In case of a mating operation, a dovetail provides a good mating assembly due to its specialized geometry. The dovetail has sloping sides which act as a guide and prevent lifting of the female mating part during the mating operation. The angle X which the sloping face makes with an imaginary vertical centre plane is the point of consideration. Measuring the angle requires two pins of equal size, a slip-gauge set and a micrometer. The two pins are placed in such a way that they touch the sides of the dovetail, and the distance L is measured across these pins with a micrometer as shown in Fig. 13.3. Then the pins are raised on two sets of equal slip-gauge blocks in such a way that the pins do not extend above the top surface of the dovetail. The distance M is measured across the pins with the micrometer. If the height of the slip gauges is H, then

tan X = AC / BC = [(M − L)/2] / H = (M − L) / 2H

∴ X = tan⁻¹ [(M − L) / 2H]

By knowing L, M and H, the angle X can be found.

Fig. 13.3 Set-up to measure the included angle of a dovetail (pins raised on slip gauges of height H)
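The dovetail calculation above can be sketched directly (pin distances and gauge height are illustrative values):

```python
import math

def dovetail_angle_deg(L, M, H):
    """Angle X (degrees) between the sloping face and the vertical:
    tan X = (M - L) / (2*H), with L and M the distances measured over
    the pins at the bottom and raised on slip gauges of height H."""
    return math.degrees(math.atan((M - L) / (2 * H)))

X = dovetail_angle_deg(L=50.0, M=60.0, H=10.0)  # mm
```

With these numbers tan X = 10/20 = 0.5, so X is about 26.57°; the included angle of the dovetail is then 2X if measured symmetrically about the centre plane.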



13.4 MEASUREMENT OF RADIUS

The concave or convex surface can be inspected by using a radius gauge or by using specially designed
templates. However, radius gauges come in standard series of values such as 3R, 4R, 5R, etc. Hence, a radius of odd dimension cannot be checked using radius gauges. Sheet-metal worked parts have a variety of radius profiles and need to be checked at regular intervals.

13.4.1 Measurement of Large Concave Radius in Any Part


The component is kept on the surface plate in such a way that the concave radius faces upwards. A depth micrometer is accommodated inside the concave surface as shown in Fig. 13.4. The depth reading over the spherical ball is noted (D). The length of the micrometer base is recorded as L. A and B are the contact points of the depth gauge, with CE the vertical axis.

Fig. 13.4 Measurement of concave radius: (a) set-up, (b) schematic

Δ CAD and Δ CBD are right-angled triangles, in which

AC² = AD² + CD²

where AC is the unknown concave radius R, and
AD = L/2
CD = CE − DE = R − D
Putting the values of CD and AD in the expression for AC, we get
R² = (L/2)² + (R − D)²
R² = L²/4 + R² − 2RD + D²

∴ R = (1/(2D)) (D² + L²/4)

This expression can be used to find the unknown concave radius.
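As a quick numeric check of the result (values illustrative):

```python
def concave_radius(D, L):
    """Unknown concave radius from depth-micrometer reading D and
    micrometer base length L:  R = (D**2 + L**2/4) / (2*D)."""
    return (D * D + L * L / 4) / (2 * D)

R = concave_radius(D=2.0, L=40.0)  # mm -> 101.0
```

A 2-mm sag over a 40-mm base thus corresponds to a 101-mm radius, showing how sensitive R is to small depth readings on shallow surfaces.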

13.4.2 Measurement of Large Spherical Convex Radius in Any Part


In a surface plate, a bore-setting ring of convenient dimension and a depth micrometer is required for
checking the spherical convex radius in any part. The component to be checked is kept on the surface
plate with the convexity in the upward direction. A bore-setting ring used along with a bore-dial gauge

45
0
5

5
0
5

(a)

E
X =(t – D)
D B A

C
(b)
Fig. 13.5 Measurement of spherical convex radius: a) Set-up, b) Schematic
Miscellaneous Measurements 359

of a convenient size is kept on the spherical convex radius. D can be found out by the depth microm-
eter reading.
In the right-angled triangle CAE,
CA² = CE² + AE²
where CA is the unknown convex radius R,
CE = CB − BE = R − X, and
AE = L/2, L being the length of the depth micrometer.
∴ R² = (R − X)² + (L/2)²

Solving, we get

R = (1/(2X)) (X² + L²/4)

or, since X = t − D,

R = (1/(2(t − D))) ((t − D)² + L²/4)

This equation gives the unknown radius R.
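The same relation in code, using the X = t − D substitution (the ring height t, reading D and length L are illustrative):

```python
def convex_radius(t, D, L):
    """Unknown spherical convex radius: with X = t - D,
    R = (X**2 + L**2/4) / (2*X)."""
    X = t - D
    return (X * X + L * L / 4) / (2 * X)

R = convex_radius(t=15.0, D=12.0, L=60.0)  # mm
```

Here X = 3 mm, so R = (9 + 900)/6 = 151.5 mm. Note that the formula is the same sagitta relation as for the concave case, only with X measured via the setting ring.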

13.4.3 Measurement of Large Cylindrical Convex Radius in Any Part


To measure a cylindrical convex radius, a surface plate, a micrometer and a pair of identical roller pins are required. The component under inspection is kept on the surface plate with its radius resting against the surface plate. The two roller pins are inserted in the gaps between the radius of the component and the surface plate, and with the help of a micrometer, the linear dimension over the pins is noted (L). The pin diameter is noted as D.
C is the centre of the unknown radius and A is the centre of the pin on either side, while D and B are the contact points of the component and pin. Line AE is parallel to the surface plate. ΔCAE is a right-angled triangle.

Fig. 13.6 Measurement of cylindrical convex radius: (a) set-up, (b) schematic

Here, CA² = AE² + CE²   (1)

where
CA = R + D/2
CE = CB − BE = R − D/2
AE = (L/2) − (D/2) = (L − D)/2
Putting CA, CE and AE in Eq. (1), we get

(R + D/2)² = (R − D/2)² + (1/4)(L − D)²

Solving, we get

R = (L − D)² / (8D)

This expression can be used for measuring the cylindrical convex surface of any part.
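A quick numeric check of the final expression (pin diameter and over-pin dimension are illustrative):

```python
def cylindrical_convex_radius(L, D):
    """Unknown cylindrical convex radius from the dimension L measured
    over two identical pins of diameter D: R = (L - D)**2 / (8*D)."""
    return (L - D) ** 2 / (8 * D)

R = cylindrical_convex_radius(L=50.0, D=10.0)  # mm
```

For 10-mm pins and a 50-mm over-pin dimension, R = 40²/80 = 20 mm.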

Review Questions

1. Discuss the need of miscellaneous measurements.


2. Explain the measurement of taper on one side.
3. Explain the measurement of an internal taper using two rollers of different diameters.
4. Discuss the procedure to measure the included angle of an internal dovetail.
5. Explain in detail the measurement of large concave radius.
6. Discuss the procedure to measure a large cylindrical convex radius using a surface plate, micrometer
and a pair of identical roller pins. Explain with at least two examples what you mean by measuring
‘theoretical dimensions’?
7. Describe how rollers, balls and slip gauges can be used to measure a taper plug gauge and taper ring
gauge.
8. Describe the method of measuring the included angle and nominal values of a diameter for a given
tapered hole having a small angle of taper using
a. two unequal balls of nearly the same diameter
b. two balls
c. two rollers
9. Explain the method of measuring the taper angle of the plug gauge with the help of gauge blocks,
rollers and a micrometer if M1= reading over the rollers and M2= same over the gauge height h and
touching the plugs.
14 Study of Advanced
Measuring Machines

“Advancement in producing a product advances with advancements in metrology… ”


INTRODUCTION TO MEASUREMENT TECHNOLOGY
The field of metrology has undergone many changes in its evolution. Many of these changes have been the result of the ‘demands of time’ and as a result, metrology technology has been closely connected to ‘user technology’. In most cases, this has driven the development of specialized measurement technology—well suited for a given task. In more recent years, there has been an increased emphasis on expanding the capabilities of measuring equipment. Instruments that were once used for measuring roughness are now measuring dimension. Instruments that once measured only dimension are now measuring form; and so on. In the context of dimensional measurement, the most significant aspect is related to the measurement of surfaces and dimension. These two areas, viz., dimensional and surface metrology, while sharing many common principles, have had very little interaction in the past. However, this culture is rapidly changing. These changes can once again be attributed to changes in the culture in which metrology is applied.
Measurement technology combined with computer-integrated manufacturing (CIM) and data-management systems provides information-based process control. Metrology will slowly migrate from offline to inline to achieve integration and standardization goals. In addition, over the next ten years, microelectromechanical systems (MEMS) are expected to evolve into new types of metrology sensors and test structures. The combination of offline and inline measurements will enable advanced process control and rapid yield learning. Manufacturing process stability requires stable tools. The objective is for every tool to perform like every other such tool, with no unique signatures. An appropriate combination of well-engineered tools and appropriate metrology is necessary.
Historically, measurement technology has followed the user’s technological needs. For example, in the time of the construction of the Egyptian pyramids, the cubit (and maintenance thereof) was adequate for the purpose. As technological demands increased, so did the demands on metrology. In many cases this has led to a sort of technology leapfrogging, whereby product or manufacturing technology advances require measurement advances, and measurement advances spawn further advances in product or process technology.
More recently, however, these technological needs are being considered alongside other needs such as cost, ease of use, maintenance, uptime and speed. Thus, in many regards, the instrument drivers have historically been technological in nature, whereas in today’s marketplace technology is only one of the elements.

14.1 CONCEPT OF INSTRUMENT OVERLAPPING

Historically, an instrument served one basic purpose—length-measuring instruments measured length,


roughness instruments measured roughness, and so on. However, advances in instrument technology
have increased the bandwidths of most of today’s metrology equipment. This has resulted in significant
overlaps between the technologies.
As an example of these overlaps, consider the measurement of straightness. There are many mea-
surement approaches—ranging from small stylus roughness instruments to large-scale interferometry
that will yield some kind of straightness. In many cases, these different measurement approaches have
followed very different development and standardization paths, but they are, nonetheless, reporting the
same measurand: ‘straightness’.
Metrology is, in many regards, a ‘customer led’ field as it can only provide data that is used for
some subsequent application. As a result, advances in metrology are mostly provoked by the culture
of the metrology customers. Today’s manufacturing and product development environment continues
to be one of ever-shrinking tolerances. Thus, there is a corresponding push in the metrology field for
lower and lower measuring uncertainties. Furthermore, the design community has continued to move
dimensional tolerancing schemes into smaller and smaller features (for example, micro-electronics and
semiconductors) and surface-tolerancing schemes into larger and larger applications (for example, boat
hulls and airplane wings).
In addition to these technological issues, metrology faces another (perhaps new) major challenge in
the current environment—that being one of ‘economics’. In considering today’s metrology user-base,
we find many companies that are built upon manufacturing or producing some kind of ‘physical’ good
or product. However, the current economic trends indicate that this type of company (in very broad,
general terms) is not receiving the attention of the fast-moving internet-based or ‘dot-com’ companies.
This has driven the management of many metrology users to more carefully scrutinize the purchase
of metrology equipment and the time spent using such equipment. After all, many business models
consider activities such as measurement to be ‘non-value added’!

14.2 METROLOGY INTEGRATION

As the size of the part under inspection increases, and the required measurement resolution shrinks,
both data volumes and data rates will increase dramatically. This raw data must be converted into useful
information to facilitate process control and defect reduction. To accomplish this, metrology data
Fig. 14.1 Instrument overlaps: measurement regimes from micro-burrs and torn material, through roughness, waviness and straightness, to dimension (size and position), covered by microscopy (visual and digital assessment), sharp-stylus (roughness) instruments, roundness and cylindricity instruments, and coordinate measuring machines (CMMs)

must be integrated into factory and enterprise-level information systems so that it may be associated
both with other data and with wafer-tracking information. The manner in which metrology integra-
tion occurs will be greatly influenced by the implementation of advances in technology. These include
(1) introduction of advanced proximity correction and phase-shift mask technology; (2) the ramp of
193 nm, 157 nm, and next-generation lithography; (3) integration of copper and low-k interconnect
processes; and (4) the shift from 200-mm to 300-mm wafers in high-volume production. One form of
metrology integration is found in advanced process control (APC). APC applies model-based process
control to reduce process variation, reduce send-ahead and tool monitor wafers, shorten learning cycles
and response-times, enable better tool-matching in high-volume production, improve overall equip-
ment effectiveness, shorten development times, and ease process transfer from pilot line to factory. In
this chapter, we shall try to discuss some of the advanced measuring machines.

14.3 UNIVERSAL MEASURING MACHINE

Figure 14.2 shows the first length-measuring machine, made in 1908. This instrument had the important features of a constant measuring force and of error correction using a standard curve to compensate for pitch errors of the spindle, the reading being taken on a vernier scale of 1/10,000 mm.
The original idea behind the unit was to produce a precision universal measuring machine, which was
easy to use and could be employed for checking the inside and outside dimensions of parts or measur-
ing instruments and gauges. This idea took on physical form in 1897 when Carl Mahr presented his
model 300 precision measuring machine—a machine which today looks like a steel rocking horse with a steering wheel but which in those early days allowed a resolution of 0.001 mm (39.4 µin), quite unparalleled at that time. Their originally intended tasks—namely, testing parts and monitoring
gauges—have remained the same. The only thing that has changed is the way they accomplish these:
Today, for example the 828 model developed by Mahr company as shown in Fig. 14.3 ( Plate 13) employs
computer-aided technology to acquire measuring values, perform automatic nominal/actual comparisons

Fig. 14.2 First length–measuring machine

and employ resolutions down to 0.01 µm (.39 µin). These innovations are all the result of some years’
experience in making good ideas work.
These machines consist mainly of a rigid bed; the universal measuring table (floating or fixed) is needed for external or internal measurements, e.g., checking of rings, bores or internal threads. They are equipped with mechanisms which enable quick and fine table adjustment. For self-centering of a job, one pair of support blocks or one symmetrical clamping device is provided, as shown in Fig. 14.4 (b). The machine consists of a supporting fixed carriage (head) and a movable measuring carriage (head). The distance between them is adjustable and depends upon the specifications of the model of machine under consideration. Generally, the measuring head travels 100 mm. It also carries a holder for holding a dial indicator.
It uses different types of probes for specific measurement, as shown in Fig. 14.5 (a), to perform
internal measurements on plain test-pieces from 1.5-mm diameter (0.059 in) on. The probe consists of
a 1-mm (0.039 in) ruby ball, holder, and serration grip.
The probe shown in Fig. 14.5 (b) is used for high-precision measurement of internal threads with
exchangeable measuring anvils. The calipers shown in Fig. 14.5 (c) enable us to perform internal measure-
ments on plain test-pieces. The figure shows a pair consisting of a left-hand and a right-hand caliper.

Features of Universal Measuring Machines

• High measuring accuracy obtained by precise mechanics, such as parallelism of the probe supporter
of ± 1 µm (40 µin), together with up-to-date measuring equipment (See Fig. 14.6, Plate 14)

Fig. 14.4 Accessories of universal measuring machines: (a) centres, with clamping devices for centres and for the support plate; (b) clamping jaws with clamping screw and base-locking screws; (c) measuring carriage with clamping screw, extension and holder for dial-indicator inspection; (d) table plate of the universal measuring table with holder, clamping device, ring gauge, and base plate with support prism

• Depending on the requirements of the measuring task, different display units are used, for example,
digital dial indicators and analog dial indicators, as well as inductive or incremental probes
• The simple structure of the unit allows precise performance of the measuring procedure and fast
adaptation to new measuring tasks
• The serration grip ensures a quick and easy accessory exchange

(Panels (a)–(c): probes with exchangeable tips and serration grip, right side; caliper pair with
serration profiles, left and right side)
Fig. 14.5 Probes and caliper pair

• Computer support is provided for acquiring, processing, logging, and transmitting measurement data
• Operating reliability and comfort by linking both measuring systems so as to make all information
available on a single screen
• Reliability in complying with documentation requirements through the automatic adoption, stor-
age, and ISO-compliant printout/logging of all relevant measurement data
• Universal application through a generous selection of accessories
• Form stability through a sturdy machine base of hard granite
• Adjustable measuring force for matching to the size and shape of the test-piece—measuring re-
sults are thus unaffected by subjective influences
• Easy change of measuring direction
• High resistance to wear through carbide-reinforced measuring surfaces

14.4 USE OF NUMERICAL CONTROL FOR MEASUREMENT

The terms ‘numerical control’ and ‘digital readout’ have been applied to the many devices developed
for measuring coordinate dimensions on a workpiece. The workpiece is held in a fixture and a probe
is brought in contact with the work surface to be measured. Either the workpiece or the probe is
held on a movable table or arm and the reading is recorded in the readout section of the control
device. Two- and three-axis machines are available. A two-axis machine usually registers the
longitudinal travel and the vertical displacement of the probe. A three-axis machine records both of
these as well as transverse horizontal motion.
Numerical control inspection is most commonly applied to the inspection of odd-shaped contours,
which cannot be easily measured by other means. Since it is a relatively slow process, it is not
competitive with automatic gauging devices or other conventional methods for the inspection of
easily measured dimensions.

14.4.1 Coordinate Measuring Machines (CMM)


These are mechanical systems designed to move a measuring probe to determine the coordinates of
points on a workpiece surface. A CMM comprises four main components: the machine itself, the
measuring probe, the control or computing system, and the measuring software. Machines are available
in a wide range of sizes and designs with a variety of different probe technologies.
Important specifications for coordinate measuring machines are the measuring lengths along the x,
y and z-axes as well as resolution and workpiece weight. The x-axis measuring length is the total travel,
or measuring length, that can be performed in the x-direction. The y-axis measuring length is the total
travel, or measuring length, that can be performed in the y-direction. The z-axis measuring length is the
total travel, or measuring length, that can be performed in the z-direction. These are not necessarily the
same as the measuring capacity, which is the maximum size of the object in the x, y or z-direction that
the machine can accommodate. The resolution is the least increment of a measuring device; on digital

(Figure: granite surface plate, probe, data processor with data-processing program, machine
stand/optical vibration isolator with auto-leveling function, controller, and joystick box)
Fig. 14.7 Coordinate measuring machine

instruments, it is the least significant bit (Reference: ANSI B-89.1.12). The workpiece weight is the
mass of the workpiece being measured.
Coordinate measuring machines may have manual control, CNC control or PC control. Manual control
implies that machine positioning is operator controlled: the operator physically moves the probe
along the axes to make contact with the part surface and records the measurement (digital readouts).
A Computer Numerical Control (CNC) unit may also control machine positioning, as may a PC (personal
computer) in some coordinate measuring machines. The PC records the measurements made during the
inspection and performs the various required calculations. Automatic measuring machines may involve
one or more types of gauging devices.
Operation for a coordinate measuring machine can be achieved through an articulated arm,
bridge, cantilever, gantry or horizontal arm. An articulated arm is very common for portable, or
tripod-mounted-style machines. The articulating arm allows the probe to be placed in many differ-
ent directions. In bridge-style machines, the arm is suspended vertically from a horizontal beam that
is supported by two vertical posts in a bridge arrangement. The machine x-axis carries the bridge,
which spans the object to be measured. In cantilever-style machines, a vertical arm is supported
by a cantilevered support structure. Gantry style machines have a frame structure raised on side
supports so as to span over the object to be measured or scanned. Gantry machines are similar in
construction to bridge-style designs. In horizontal arm machines, the arm that supports the probe
is horizontally cantilevered from a movable vertical support. As a result, this style is sometimes
referred to as a cantilever type.

(Figure: overall machine dimensions and measuring range, in inches (mm))
Fig. 14.8 Coordinate measuring machine


(Courtesy, Mitutoyo Company)

Model                          CRT-Apex C 776                 CRT-Apex C 7106

Measuring    X-axis            27.75" (705 mm)
range        Y-axis            27.75" (705 mm)                39.56" (1005 mm)
             Z-axis            23.81" (605 mm)
Resolution                     0.000004" (0.0001 mm)
Drive speed  CNC mode          Max. moving speed = 520 mm/s (20.47"/s) (3D)
                               Max. measuring speed = 8 mm/s
             Joystick mode     Moving speed = 0–80 mm/s; measuring speed = 0–3 mm/s
             Max. acceleration 0.23 G (3D)
Measuring    Material          Granite stone
table        Size              33.85" × 55.90"                33.85" × 67.71"
                               (860 mm × 1420 mm)             (860 mm × 1720 mm)
Workpiece    Max. height       31.49" (800 mm)
             Max. weight       1760 lbs (800 kg)              2200 lbs (1000 kg)
Dimensions   Width             57.87" (1470 mm)
             Depth             64.95" (1650 mm)               76.77" (1950 mm)
             Height            107.48" (2730 mm)
Machine weight                 3885 lbs (1675 kg)             4293 lbs (1951 kg)
Air consumption                60 L/min or 2.16 CFM (under normal conditions)
Air pressure                   0.4 MPa (4 kgf/cm²) or 58 PSI
Temperature  Range 1           64.4°F–71.6°F (18°C–22°C)
             Range 2           60.8°F–78.8°F (16°C–26°C)
             Variation/hour    1.8°F (1.0°C)/hour
             Variation/day     Range 1 = 3.6°F (2.0°C)/day; Range 2 = 9.0°F (5.0°C)/day
             Gradient          1.8°F (1.0°C)/m (vertically), 1.8°F (1.0°C)/m (horizontally)

Accuracy (ISO 10360-2:2001)                    MPE_E                  MPE_P

Temperature 1     TP2/20                       2.2 + 3L/1000 µm       2.2 µm
64.4–71.6°F       TP200                        1.9 + 3L/1000 µm       1.9 µm
(18 to 22°C)      MPP100/SP600                 1.7 + 3L/1000 µm       1.7 µm

Temperature 2     TP2/20                       2.2 + 4L/1000 µm       2.2 µm
60.8–78.8°F       TP200                        1.9 + 4L/1000 µm       1.9 µm
(16 to 26°C)      MPP100/SP600                 1.7 + 4L/1000 µm       1.7 µm
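The length-dependent accuracy statements above can be evaluated directly. A small sketch, assuming (as is usual for ISO 10360-2 style statements) that L is the measured length in mm; the default coefficients below are those listed for the TP200 probe in temperature range 1.

```python
# Evaluating a maximum permissible error statement of the form
# MPE_E = A + B*L/1000 µm, with L in mm.

def mpe_e_um(length_mm, a_um=1.9, b=3.0):
    """Maximum permissible length-measurement error in µm.
    Defaults match the TP200 entry for temperature range 1 (1.9 + 3L/1000 µm)."""
    return a_um + b * length_mm / 1000.0

# A 500 mm length measured with TP200 in range 1:
print(mpe_e_um(500))  # 1.9 + 1.5 = 3.4 µm
```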

Coordinate measuring machines can have one of several mounting options, including bench top, free
standing, handheld and portable; manufacturers may use these terms interchangeably. Probe systems
for CMMs can be touch-probe (discrete point), laser triangulation, or still/video camera types. A
multi-sensor coordinate measuring machine can mount more than one sensor, camera, or probe at a
time. Figure 14.8 shows the CRT-Apex C 776 and 7106 models of Mitutoyo along with the specifications
in the tables.

14.4.2 Allied Software (Case Studies)


MeasurLink developed by the Mitutoyo Company supports a variety of statistical process functions
based on the statistical method described in QS-9000, which defines standards for quality control in the
US automobile industry. It is a tool used to identify problems in manufacturing processes and to analyze
them efficiently in order to improve the manufacturing process or to resolve problems. With Measur-
Link, the user can construct a measuring system by which to increase reliability, thereby establishing an
excellent quality-assurance system on behalf of the customers.

1. MeasurLink SPC-Super Statistical Process Control Program MeasurLink


SPC-Super, developed by Mitutoyo Company, performs real-time statistical processing of data
collected by a CMM in the inspection room or on the manufacturing site, and graphically outputs
control charts, GO/NG judgments, process capabilities and so on, to make them more comprehensible
to the operator. The statistical method using control charts makes it possible to detect
abnormalities on the manufacturing line at an early stage, thereby preventing the occurrence of
product defects.
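The arithmetic behind such control charts and capability figures is standard SPC mathematics. The sketch below is a generic illustration, not MeasurLink's actual algorithms: 3-sigma control limits around the sample mean, and the Cp/Cpk capability indices against specification limits.

```python
# Generic SPC calculations (illustrative; not MeasurLink's implementation).
import statistics

def control_limits(samples):
    """Lower and upper 3-sigma control limits for a list of measurements."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

def cp_cpk(samples, lsl, usl):
    """Process capability Cp and Cpk against spec limits LSL/USL."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

data = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00]   # made-up diameters, mm
lcl, ucl = control_limits(data)
cp, cpk = cp_cpk(data, lsl=9.90, usl=10.10)
print(f"LCL={lcl:.3f} UCL={ucl:.3f} Cp={cp:.2f} Cpk={cpk:.2f}")
```

Points falling outside the LCL/UCL band signal an abnormality on the line; a Cpk comfortably above 1 indicates the process is producing within specification.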

2. Correct Plus (Data Feedback System) The Correct Plus system is developed by Mitutoyo
Company. After measuring components mass-produced by a machining centre, the Correct Plus system
feeds the compensation data, calculated from the measurement result and the nominal value, back to
the machining centre. This data feedback system maintains and improves the accuracy of processing.

• Allows construction of a production system with improved accuracy and reduced ratio of defec-
tive parts.

• If the NC program is partially corrected prior to processing, there is no need to make a correction
during processing. Accordingly, Correct Plus promises worry-free operation.
• Two types of systems are available, according to the type of production system:

1. Manual Feedback System The operator decides whether or not to give feedback on correc-
tion data.

2. Automatic Feedback System Correction data feedback is completely automatic, according


to the setting.

• It can support multiple machining centres.


• Allows measurement results to be stored, output to a chart, and used for statistical data processing.
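The core of such a data-feedback system can be sketched in a few lines. This is a hypothetical illustration, not Mitutoyo's actual Correct Plus logic: the offset between nominal and measured value is damped by a gain and fed back as a tool-offset correction only when it exceeds a dead band, mirroring the manual/automatic modes described above.

```python
# Hypothetical feedback-correction sketch (gain and dead band are assumed).

def correction(nominal, measured, gain=0.7, dead_band=0.005):
    """Return the offset (mm) to feed back to the machining centre.
    Corrections smaller than the dead band are ignored to avoid chasing noise."""
    error = nominal - measured
    return round(gain * error, 4) if abs(error) > dead_band else 0.0

print(correction(25.000, 24.980))  # part cut 20 µm small -> 0.014
print(correction(25.000, 24.998))  # within dead band -> 0.0
```

The gain below 1 deliberately under-corrects, so the loop converges instead of oscillating from one part to the next.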

A wide variety of CMM specifications, from the inspection of small-sized components to complete
car-body profile measurement, is commercially available in the market. Figure 14.9 (Plate 15) shows
a new, horizontal arm-type CMM inspecting the profile of a car body, and Fig. 14.10 (Plate 15) shows
a CNC CMM, which provides a huge measuring range.
Common applications for coordinate measuring machines include dimensional measurement, pro-
file measurement, angularity or orientation, depth mapping, digitizing or imaging, and shaft measure-
ment. Features common to CMMs include crash protection, offline programming, reverse engineering,
shop-floor suitability, SPC software and temperature compensation.

14.4.3 CMM Probes


CMM probes (Coordinate Measuring Machine) are transducers that convert physical measurements
into electrical signals, using various measuring systems within the probe structure. CMM probes have
a wide classification including instruments using diverse technologies for direct and comparative
measurements.
CMM probes are available in three main probe forms:

i. Touch-trigger or discrete point,


ii. Displacement measuring, and
iii. Proximity or non-contact probes.

1. Touch-trigger Probes are the most common types


of probe. They actually touch the surface of the workpiece, and
upon contact, send a signal with the coordinates of that point to
the CMM. The probe is then backed off and moved to the next
location where the process is repeated.

2. Displacement Measuring CMM Probes are


also referred to as scanning probes. This method generally involves
Fig. 14.11 Types of hard probes

passing the probe over a target surface at its working range. As the probe scans the surface, it transmits
a continuous flow of data to the measurement system. Scanning contact probes may use linear variable
differential transformer ( LVDT ) or optoelectronic position sensing.

3. Proximity or Non-contact Probes function similarly to displacement measuring CMM


probes, but they use laser, capacitive or video measurement technology instead of LVDTs. CMM probes
use many different sensor technologies to achieve their measurements. Each technology offers its own
strengths that may be specifically desired for a given application. The most common technologies include
kinematic or switching, strain sensing, piezoelectric, LVDT, optoelectronic, laser triangulation, capaci-
tive, and video imaging.
Kinematic or switching technologies are available in a wide range of probing products. In terms of
size, they are the smallest of the probe types, and offer low over-travel force, simple interfacing,
and robust and universal fitment.
Strain-sensing CMM probes offer fewer lobing errors, long operating life, a wide operating-speed
range and long stylus-carrying capability, and they are ideal for peck, or stitch, scanning. The key
characteristics of piezoelectric sensor types include very few lobing errors, large stylus-carrying
capability, multi-mode sensor operation that extends versatility, and a restricted operating-speed
range. LVDT sensor types provide a high degree of accuracy and large stylus-carrying capacity.
Optoelectronic devices offer a higher degree of accuracy than kinematic sensor types, along with
high data rates. Key characteristics of laser-triangulation sensor types include single-axis profile
measurement; results may be affected by surface reflectivity. Capacitive CMM probes provide an
alternative non-contact technology; material type and surface chemistry may affect profile, form and
surface-flaw measurements, and they therefore use a fixed standoff. Video-imaging sensor types are
suitable for 2-D and flexible parts. Automatic edge detection is possible, but they may be affected
by surface reflectivity and ambient light.

4. Three-Dimensional Measuring Probe It is a mechanically simple device designed for


analog scanning. As shown in Fig. 14.12, the sensors for the probe are three orthogonally mounted
Linear Variable Differential Transformers (LVDTs) that provide frictionless measurement and are
suited to sensing nanometer displacements. These sensors are rigidly coupled to a probe tip through
a triad and an Inconel beam. The system is free to rotate in two dimensions (X and Y) and translates
in a third (Z) via a thin pretensioned diaphragm flexure. The diaphragm flexure and the orthogonal
arrangement of the three LVDTs allow independent sensing of the three directions. The diaphragm
flexures also have the advantage that they can be easily substituted in different configurations to
suit the measurement application. This is an electronic three-axis measurement probe, developed to
be low cost, flexible, rugged, sensitive and accurate to ±0.25 µm in all three directions over a
250-µm measuring range. The unique diaphragm-flexure design reduces the problem of force lobing
while maintaining low probing forces around 20 mN/µm (2 gf/µm).
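Because the three LVDTs sense the three directions independently, the resultant tip displacement is simply the vector sum of their readings. The sketch below is an illustration of that combination, not the probe's actual firmware; the half-range check assumes the stated 250-µm range is ±125 µm about centre.

```python
# Combining three orthogonal LVDT readings into a 3-D tip displacement
# (illustrative; axis values are made up).
import math

def tip_displacement(dx_um, dy_um, dz_um):
    """Resultant tip displacement in µm from three orthogonal LVDT readings."""
    return math.sqrt(dx_um**2 + dy_um**2 + dz_um**2)

def within_range(dx_um, dy_um, dz_um, half_range_um=125.0):
    """True if each axis stays inside an assumed ±125 µm half of the 250 µm range."""
    return all(abs(v) <= half_range_um for v in (dx_um, dy_um, dz_um))

print(tip_displacement(3.0, 4.0, 0.0))  # 5.0 µm
print(within_range(100.0, 50.0, 10.0))  # True
```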

14.4.4 Probe Illumination


The white LED probe illumination unit can be installed at the rear of the probe adapter to illuminate the
area around the tip of the stylus. This is very useful for the deep hole measurement. The optional acces-
sories can be used as per requirement.

Fig. 14.12 Schematic cross section of the three-dimensional probe: 1. Stylus 2. Connecting nut
3. Outer clamping spacer 4. Diaphragm flexure 5. Middle spacer 6. Triad 7. Shell 8. LVDT support
(bottom) 9. LVDT support (top) 10. LVDT coils 11. LVDT cover (X shown) 12. LVDT core 13. LVDT
cover (Z)

14.4.5 Styli And Accessories


In the majority of probing applications, for proper selection of styli and to maximize accuracy,
the following recommendations apply:

1. Keep Styli Short And Stiff The more the stylus bends or deflects, the lower the accuracy.
Probing with the minimum stylus length for your application is recommended and, where possible, the
use of one-piece styli is suggested. Probing with excessive styli/extension combinations should
therefore be avoided.
Fig. 14.13 Probe illumination
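Why bending matters so much can be shown with a simple cantilever-beam model. This is an illustration under assumed values (probing force, stem diameter, an elastic modulus typical of carbide), not a formula from the text: tip deflection grows with the cube of stylus length, so doubling the length gives eight times the deflection.

```python
# Cantilever model of stylus bending: delta = F*L^3 / (3*E*I), I = pi*d^4/64.
# All input values below are assumed for illustration.
import math

def stylus_deflection_um(force_n, length_mm, stem_dia_mm, e_gpa=400.0):
    """Tip deflection in µm for a solid round stem (default E ~ tungsten carbide)."""
    length_m = length_mm * 1e-3
    d_m = stem_dia_mm * 1e-3
    i_m4 = math.pi * d_m**4 / 64.0              # second moment of area
    delta_m = force_n * length_m**3 / (3.0 * e_gpa * 1e9 * i_m4)
    return delta_m * 1e6

short = stylus_deflection_um(0.1, 20, 2)   # 20 mm stylus, 0.1 N probing force
long_ = stylus_deflection_um(0.1, 40, 2)   # doubling the length
print(round(long_ / short, 2))  # 8.0 -> eight times the deflection
```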

2. Keep The Stylus Ball As Large


As Possible This will ensure maximum ball/stem clearance whilst providing a greater yet rigid
Effective Working Length (EWL). Using larger ruby balls also reduces the effect of surface finish of
the component being inspected.

(Figure: probe with knuckle joint and stylus; stylus showing overall length, EWL, and ball/stem
clearance)
Fig. 14.14 CMM probe and stylus    Fig. 14.15 Stylus and EWL

Effective Working Length (EWL) is the penetration that can be achieved by any ruby ball stylus
before its stem fouls against the feature. Generally, the larger the ball diameter, the greater the EWL
(refer Fig. 14.15 ).
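The ball/stem clearance behind the EWL discussion follows from simple geometry. A rough sketch, under the assumption of a spherical ball of diameter D centred on a cylindrical stem of diameter d (the example dimensions are made up):

```python
# Radial ball/stem clearance for an assumed ball-on-stem geometry.

def ball_stem_clearance(ball_dia_mm, stem_dia_mm):
    """Radial clearance (mm) between the ball surface and the stem:
    (D - d) / 2. A larger ball on the same stem gives more clearance,
    hence a greater effective working length before the stem fouls."""
    return (ball_dia_mm - stem_dia_mm) / 2.0

# A 4 mm ruby ball on a 2.5 mm stem:
print(ball_stem_clearance(4.0, 2.5))  # 0.75 mm
```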

3. While Choosing Styli For Scanning The choice of scanning styli will be dependent
on the scanning application and the type of scanning probe used. Use a stylus which has the same
diameter as the finished cutting tool used to produce the part. Keep the stylus as short as pos-
sible to prevent excessive bending, but ensure that the stylus is long enough to prevent scanning
on the shank.

14.4.6 Types of Styli


Refer Fig. 14.16.

A. Ruby Ball Styli These are suitable for the majority of probing applications. They incorpo-
rate highly spherical industrial ruby balls. Ruby is an extremely hard ceramic material, and hence the
wear of stylus balls is minimized. It is also of low density which keeps the tip mass to a minimum.
This avoids unwanted probe triggers caused by machine motion or vibration. Ruby balls are available
mounted on a variety of materials including non-magnetic stainless steel, ceramic and carbide, to
maintain stiffness over the total range of styli.

(Panels (a)–(j): stylus types)
Fig. 14.16 Types of styli

B and C. Star Styli These can be used to inspect a variety of different features. Using star styli
to inspect the extreme points of internal features such as the sides or grooves in a bore, minimizes
the need to move the probe, due to their multi-tip probing capability. Each tip on a star stylus requires
datuming in the same manner as a single ball stylus.

D. Pointer Styli These should not be used for conventional XY probing; they are designed for
the measurement of thread forms, specific points and scribed lines (to lower accuracy). The use of
radius-end pointer styli allows more accurate datuming and probing of features, and they can also be
used to inspect the location of very small holes.

E. Ceramic Hollow Ball Styli These are ideal for probing deep features and bores in X, Y and
Z directions with the need to datum only one ball. In addition, the effects of very rough surfaces can
be averaged out by probing with such a large diameter ball.

F. Disc Styli These ‘thin sections’ of a large sphere are usually used to probe undercuts and
grooves. Although probing with the ‘spherical edge’ of a simple disc is effectively the same as probing
on or about the equator of a large stylus ball, only a small area of this ball surface is available for con-
tact. Hence, thinner discs require careful angular alignment to ensure correct contact of the disc surface
with the feature being probed. A simple disc requires datuming on only one diameter (usually in a ring
gauge) but limits effective probing to only X and Y directions.
Adding a radius end roller allows you to datum and hence probe in the Z direction, provided the
centre of the ‘radius end roller’ extends beyond the diameter of the probe. The radius end roller can
be datumed on a sphere or a slip gauge. Rotating and locking the disc about its centre axis allows the
‘radius end roller’ to be positioned to suit the application.
The disc may also have an M2 threaded centre to allow the fixing of a centre stylus, giving the addi-
tional flexibility of probing the bottom of deep bores (where access for the disc may be limited).

G and H. Cylinder Styli These are used for probing holes in thin sheet material, probing vari-
ous threaded features and locating the centres of tapped holes. The ball-ended cylinder styli allow full
datuming and probing in X, Y and Z directions, thus allowing surface inspection to be carried out.

I. Stylus Extensions These provide added probing penetration by extending the stylus away
from the probe. However, using stylus extensions can reduce accuracy due to loss of rigidity.

J. Tool Datuming Styli The tolerances to which tools can be set depend upon the flatness and
parallelism of the stylus tip to the machine axis. Fine adjustment is provided on all probes and probe
holders to allow these settings to be achieved. Where rotating tools are to be datumed for diameter,
the tools must be rotated in reverse of the cutting direction.

14.5 OPTICAL 3D MEASURING INSTRUMENTS: LASER VISION

This accuracy requirement can only be met by precision optics design and by using the latest
imager technology to obtain the necessary 3-D image resolution and speed. Figure 14.17 schematically
illustrates how laser vision works. The range image is acquired in real time through optical laser trian-
gulation, profile after profile at a rate of 100 to 1000 profiles per second. This can be done by moving
the camera with the processing machine, for example, in the case of inspection of weld-joints, where
the camera is moved along with the welding torch over the joint or the weld bead, and the system builds
a complete contour of the joint or bead over the complete scanned length. This 3-D contour digitiza-
tion is the most efficient method to detect minute weld defects and to acquire enough information to
track the joint at a speed of 1 to 20 metre/minute, which is compatible with the laser-welding process
speed.
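The triangulation geometry itself reduces to similar triangles. The sketch below is a simplified illustrative model, not the vendor's actual calibration: with a baseline b between the laser and the lens, a lens focal length f, and the laser spot imaged at an offset u from the optical axis, the range to the surface is z = f·b/u. All numeric values are assumed.

```python
# Simplified active laser triangulation: range from the imaged spot offset.

def triangulation_range_mm(f_mm, baseline_mm, offset_mm):
    """Range z (mm) to the laser spot from similar triangles: z = f*b/u."""
    return f_mm * baseline_mm / offset_mm

# Assumed setup: f = 25 mm lens, 100 mm baseline, spot imaged 5 mm off-axis:
print(triangulation_range_mm(25.0, 100.0, 5.0))  # 500.0 mm
```

As the surface moves closer, the spot shifts further off-axis, so resolving small offset changes on the imager translates directly into height resolution along the laser stripe.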

(Figure: imager (CCD or CMOS), laser diode, collecting lens and laser stripe across the joint
between parts A and B, with triangulation angle α)
Fig. 14.17 Working of laser vision    Fig. 14.18 3-D image of a weld

Traditionally, this technology has been used for joint tracking and adaptive welding. Beginning
in the mid-1990s, the technology was directed to developing the capability to measure pre-weld
joint fit up as well as finished weld inspection. The early applications were in the automotive and
mining/construction industries. Figure 14.18 presents the 3-D image of a weld taken by a laser
vision system. The laser system is really a sophisticated profiler capable of geometric measure-
ment as well as defect identification. Software library templates have been developed for unwelded
joints, partially welded joints, and finished welds. The system can be programmed by inputting the
applicable weld standard (API 1104, AWS D1.1) requirements such as root openings and included
groove angles for an unwelded joint, and items like groove weld width, convexity, and toe entry
angle for a completed joint.
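The programming step described here amounts to checking each measured feature against the limits taken from the applicable weld standard. The sketch below illustrates that go/no-go logic only; the limit values are placeholders, not actual API 1104 or AWS D1.1 numbers.

```python
# Illustrative weld go/no-go check. LIMITS values are made-up placeholders,
# NOT real API 1104 / AWS D1.1 acceptance criteria.

LIMITS = {                       # feature: (min, max), mm or degrees
    "root_opening": (1.0, 3.0),
    "groove_angle": (50.0, 70.0),
    "weld_width":   (8.0, 14.0),
}

def judge(measured):
    """Return (verdict, list of failing features) for measured weld features."""
    failures = [name for name, value in measured.items()
                if name in LIMITS
                and not (LIMITS[name][0] <= value <= LIMITS[name][1])]
    return ("go" if not failures else "no go"), failures

print(judge({"root_opening": 2.0, "groove_angle": 60.0, "weld_width": 15.2}))
# -> ('no go', ['weld_width'])
```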
Laser vision sensors can automate visual inspection of pipes and tubes, help ensure the reliabil-
ity of automatic ultrasonic testing, and make it easier to observe trends. Pipeline welding requires
the utmost attention to detail throughout every phase of manufacturing, starting with material
preparation right through to final inspection. Although automation has entered this world in the
form of mechanized welding systems and semi-automated radiographic and ultrasonic testing, the
human factor is still very much a part of these operations. Two of the most important steps in the
process are joint fit up and visual weld inspection. Laser vision sensing can help improve these
operations.
Automotive manufacturing is a very demanding industry for factory automation because of the
extremely high volumes per year resulting in short cycle times, minimum preventative maintenance
and a need to have a high uptime. Laser vision systems have been used for years, mainly in conjunction
with robots, to do seam finding and seam tracking on components ranging from chassis to body. More
recently, laser vision systems are being used for real-time process control and measurement of welding
processes ranging from arc to laser. The following are examples of the use of laser vision cameras
in the automotive industry.

14.5.1 Seam Tracking Of Tailor-Welded Blanks


A welded-blank manufacturing firm finds that the use of welded blanks is increasing at a rate of 20%
per year. Most of these blanks are made with dedicated automated machines and laser welding at rates
up to 14 metres/minute. The most common joints have been straight lines in the past, but future
parts like body enclosures are moving to non-linear-type seams. In addition, some parts consist of
up to 5 individual blanks. All of these trends mean that a tracking system is becoming a requirement
to handle the variation coming from the blank edges and the fixturing.
A SMART 3D laser camera offers very high resolution combined with suitable stand-off to avoid
the tooling, which holds the blank in place. The advantages of several manufacturers using this type of
technology include:
• Ability to be a little less stringent on the blank-edge preparation
• Ability to allow more forgiveness in the tooling which brings the blanks together
• Capability to implement some adaptive welding whereby the travel speed or power can automati-
cally be adjusted based on the gap seen
• Opportunity to use the camera system during the initial machine set-up and runoff to verify that
the blank fit up meets requirements and that the seam location is repeating
• Achievement of much higher travel speeds with an improvement in weld quality (See Fig. 14.19,
Plate 16.)

14.5.2 Inspection of an Automobile Chassis


Aluminum is becoming more prevalent in the chassis components used for automobiles. Front engine
cradles, rear suspension cross members and suspension links are slowly being converted from steel to
aluminum. Aluminum is a very efficient material but requires more precautions to be taken in
preparation, welding and part processing to attain all the benefits.
This laser vision system is capable of measuring the complete profile of a weld as well as identifying
defects such as excessive undercut and porosity. If the cradle is rejected, it goes to a repair area. If good,
the same robot used to maneuver the cradle under the camera, is used to place it in the next station.
Full reporting allows for the ability to do continuous improvement using 6-sigma methodology. Figure
14.20 ( Plate 16) below shows the system inspecting an aluminum cradle.

14.5.3 Optical 3D Measuring Instrument (Model - Multi-scope 250/400)

It mainly consists of the following subsystems: (See Fig. 14.25, Plate 18.)

i. Compact 3D-CMM Z-column and machine base are made of rigid granite. All axes are guided by
high-precision roller bearings. The table is made of a special aluminum featuring low mass and high rigidity.

ii. Drive System DC-servo drives with precision, backlash-free centre mount ball screw

iii. Control Unit 3- to 5-axes microprocessor CNC with path control



iv. Measuring System Linear incremental 0.5 µm resolution scales

v. Operator System Pentium PC processor and Windows NT

vi. Software Universal 3D-Multisensor measuring software

vii. High-Accuracy Version High-accuracy MS models with improved length measurement uncer-
tainty, based on incremental linear scales of 0.1 µm resolution and the volumetric error correction (CAA).

viii. Illumination Computer-controlled fiber optic light sources for on axis (top) and back light.
Computer controlled 4-quadrant LED ring light.

ix. Multi-sensor-System This consists of the following:

1. Optical Sensor High-resolution CCD camera, digital image processing with gray scale evalu-
ation, automatic sub-pixel edge detection, automatic filter routines, multi-window technology, high-
speed focus, opto-electronic 2-step zoom. 0.5 µm resolution, 0.12 µm accuracy, 0.05- µm repeatability,
0.1-s/point Avg. measuring time (Test conditions: 20 × LWD lens). (See Fig. 14.21, Plate 16.)

2. Touch Probe (optional) Touch probe system TPS with Renishaw touch trigger probe TP6 and
integrated automatic probe changer PAC. (Measuring range in x will be reduced by 50 mm). Additional
probe systems available and rotary tables are optional. (See Fig. 14.22, Plate 17.)

• Several stylus-set combinations


• Motorized probe heads (only PMC) available
• Multiple dynamic touch probe systems offer maximum application flexibility
• Several probe-changing systems available

3. Laser Probe (optional) It has a fast laser auto-focus for static measurements, used for fast
and very accurate focusing; it is also used to detect 3D points on surfaces 10 times faster than video focus.
(See Fig. 14.23, Plate 17.)
• Offset free measurements between optical sensor and laser sensor
• Submicron accurate measurements in milliseconds
• Scanning and digitizing of free-form surfaces
• 0.5-µm resolution, 1-µm accuracy, 0.2-s avg. focus time, 500 points/s scanning rate

The laser scan software allows use of this sensor for non-contact 2D contour scanning with evaluation
of geometrical elements. High-resolution 3D scanning and display of surfaces are additional applica-
tions. This kind of topography is often used for fast measurement and digitizing of free-form surfaces.
(See Fig. 14.24, Plate 18.)
Laser vision sensors can both automate the visual inspection process as well as ensure that the auto-
matic UT process is reliable. The non-contact nature of the sensor makes the process robust in this

tough manufacturing environment. This automatic inspection method removes the subjectivity inherent
in any manual process and eliminates the arguments between production and quality control. The
results speak for themselves and do not require any interpretation: one simply sets the tolerance
limits and the system says "go" or "no go". The other advantage of automatic visual inspection is
the ability to observe trends and implement targeted process-improvement efforts. In addition, the
inspection can examine 100 per cent of the qualitative parameters of the job under consideration,
and the results (both data and image) are archived digitally, thus eliminating paper.

14.6 IN-PROCESS GAUGING

Mass production has brought with it the need for inspection methods that can keep pace with fast
production. Sampling, of course, helps to speed up the inspection process, but often this is not
fast enough and frequently 100 per cent inspection is necessary. As a result, many types of
automatic inspection devices have been developed.

14.6.1 Inline Inspection


Automatic inline gauging machines may involve one or more types of gauging devices. Both fixed
gauges and deviation-type gauges are used, and many machines function merely as sorters. These
machines usually consist of some type of conveyor or gravity chute that carries a continuous flow of
parts into or through a series of gauging devices. The parts are oriented so that they pass through
a gauge set at the established limit for that particular dimension. If a part does not fall within
the prescribed limits, an ejection device usually removes it from the conveyor. Parts that fall
within the prescribed limits pass on to the next gauge. The rejected parts are ejected into separate
pans or boxes. This sorting of defective parts is very helpful to the material review board.
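The sorting logic such an inline gauge implements can be sketched in a few lines. This is an illustration only; the dimension values and gauge limits below are made up.

```python
# Illustrative inline sorting against gauge limits (assumed values).

def sort_parts(diameters_mm, low=9.95, high=10.05):
    """Split a stream of parts into accepted and rejected lists."""
    accepted = [d for d in diameters_mm if low <= d <= high]
    rejected = [d for d in diameters_mm if not (low <= d <= high)]
    return accepted, rejected

ok, bad = sort_parts([10.00, 10.07, 9.93, 10.02])
print(ok)   # [10.0, 10.02]  -> pass on to the next gauge
print(bad)  # [10.07, 9.93]  -> ejected into the reject pan
```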
Dimensions are not the only characteristics that can be inspected on automatic machines.
Non-destructive methods can also be applied to automatic inspection machines: ultrasonics, X-rays
and lasers have been applied to online inspection, and automatic weighing and counting are also
common. Automatic inspection as used here is not to be confused with adaptive control. (The adaptive
control system concept is illustrated in the next article.) Online inspection is completely separate
from any other process; the automatic inspection device is placed in sequence along with the other
fabricating equipment. In practice, all cases of automatic inline inspection result in 100 per cent
inspection. The following case study explains online gauging. (See Figs. 14.26 and 14.27, Plate 19.)

Case Study: Weld Inspection of a Truck Frame

Study of Advanced Measuring Machines 381

A major Tier truck frame manufacturer is using the Flexcell/AW system to verify the quality of several
critical welds on a heavy-duty frame. The system consists of a robot and camera system located in the
same station as the robots welding the cross members to the side rail. The inspection robot follows up
and measures these welds as well as a couple of welds made in the previous station that cannot be seen
by the operator. The system is set up to stop the production line and signal a technician if a defective
weld is found. If a marginal weld is found, the condition is logged for tracking and continuous-improvement
purposes. Features being measured include weld size, contour and attributes such as porosity and undercut
(the results screen is shown in Fig. 14.28).

Fig. 14.28 Results screen
Another benefit of the system is its ability to measure the location of the cross member while it is
inspecting. This is very helpful when one is trying to identify where the variation in fit-up or part location is
coming from when the weld shows variation. This information can then be used by the people responsible
for tooling to improve the detail parts or fixtures.

14.6.2 Concept of Adaptive Machining Combined with Inspection System


The prevention of defective parts is only indirectly considered a function of inspection. But if inspec-
tion can alert manufacturing immediately after a defective part is produced, corrective measures can
prevent defects of a similar nature from being produced in subsequent parts. In effect, this means inspec-
tion will control the manufacturing process. This requires online monitoring of production and instan-
taneous feedback of output information.

[Fig. 14.29 shows the flow of information to and from the CMM: parts are processed on the machining
center under an NC program, measured on the CMM (hole diameter, hole position, width, depth, as set in
the measuring program), and the measurement results are fed back as corrected values. The correction
values (tool diameter, tool length and coordinates) are calculated, statistically processed and reported,
and used to modify the correction values of the NC program in the NC controller.]

Fig. 14.29 Model of information flow to and from CMM
382 Metrology and Measurement

Basic adaptive controls consist of a deviation indicator or sensor, which can be electrical, mechani-
cal, pneumatic or fluidic; a feedback system, usually electronic or fluidic for speed; and a correcting unit.
The deviation indicator monitors the workpiece periodically or continuously and senses whether or not
it is within some preset limits. If the dimension being monitored falls outside these limits, the sensor
relays the information through the feedback system to the correcting unit. The correcting unit then
adjusts the position of the workpiece or tool so as to eliminate the defect in future parts.
In those cases where corrective action cannot be taken, the sensor sends its feedback to the warning
system, which stops the machine and/or activates bells or lights.
Adaptive controls can also interface with a computer system, which constantly analyzes the
output of a machine and produces statistical information on the process as it is being performed.
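The sense, compare and correct cycle described above can be sketched in a few lines of code. The limits, the maximum correctable offset and the function name below are illustrative assumptions, not part of any real controller's API:

```python
# Minimal sketch of one adaptive-control step: measure a dimension,
# compare it against preset limits, and feed a correction back to the tool.
# All names and numeric values are illustrative assumptions.

def adaptive_control_step(measured_mm, nominal_mm, tol_mm, max_correction_mm=0.05):
    """Return a tool-offset correction, or raise if the deviation is not correctable."""
    deviation = measured_mm - nominal_mm
    if abs(deviation) <= tol_mm:
        return 0.0                      # within preset limits: no action needed
    correction = -deviation             # shift the tool to cancel the deviation
    if abs(correction) > max_correction_mm:
        # corrective action not possible: stop machine and warn the operator
        raise RuntimeError("deviation not correctable; stopping machine")
    return correction

# Example: a bore drifting 0.02 mm oversize on a 25.00 mm nominal, tolerance 0.01 mm
offset = adaptive_control_step(25.02, 25.00, 0.01)
print(round(offset, 3))   # tool offset (mm) applied to subsequent parts
```

The correction is applied to future parts only; the defective part itself is still sorted out by inspection.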

14.7 FORM TESTING: CASE STUDY

Ultra-High Formtester MFU 8 D with Integrated, Absolute Diameter Measuring Unit

Mahr Company's form-testing model 'Formtester MFU 8 D', as shown in Fig. 14.30, is a
fully automatic form- and location-measuring instrument with an integrated, high-precision diam-
eter-measuring unit. With this Formtester, form and diameter measurements can be carried out
in one and the same chucking. Combining the acquisition of relative form errors and absolute

Fig. 14.30 Ultra-high formtester (Courtesy, Mahr GMBH Esslingen)



The measuring principle (Fig. 14.31) determines the absolute diameter D from two laser length readings
L1 and L2 and the constant DK:

D = L1 − L2 − DK

Fig. 14.31 Principle of measurement: 1. Laser measuring length 2. Plane mirror 3. Probe 4. Test piece 5. Base

diameters ensures a fast and comprehensive workpiece assessment. The instrument construction,
the selection of completely new carbon-fiber reinforced plastics and the development of essential
compensation methods and measuring strategies contribute to an unparalleled measuring accuracy
in the production area of ±0.1 µm (±0.0000039 in). Due to this accuracy, ring gauges, plug gauges and
gauge blocks can be checked in addition to all kinds of industrial products. The Formtester MFU 8 D
is ideally suited for all applications that are time-critical and demand high precision, such as the
direct, rapid and reliable on-site check of precision workpieces. The Formtester MFU 8 D also solves
measurement tasks in the limit ranges of mechanical length metrology. Figure 14.31 explains the
working principle of the Formtester.
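The relation D = L1 − L2 − DK of Fig. 14.31 is simple enough to compute directly; the numbers below are illustrative only, not data from the instrument:

```python
# Absolute diameter from two laser length readings (Fig. 14.31):
#   D = L1 - L2 - DK, where DK is the constant shown in the figure.
# The values used here are purely illustrative.

def absolute_diameter(l1_mm, l2_mm, dk_mm):
    return l1_mm - l2_mm - dk_mm

print(round(absolute_diameter(60.000, 19.998, 2.000), 3))  # diameter in mm
```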

Features of Formtester with Integrated High-Precision Diameter and Length Measuring Unit

• ±0.1-µm (±0.0000039 in) absolute measuring accuracy


• Measurements of diameters and length in compliance with Abbe's principle
• Integrated laser interferometer
• 0.00125-µm measuring value resolution
• Combination of diameter and form measurements for mating parts in order to assess their
clearance
• Automatic measuring run and calibration

• Measuring capacity and the range of accessories for universal form measurements corresponding
to those of the Formtester MFU 8
• Form and diameter measurements in one and the same chucking with an accuracy of ±0.1 µm
(±0.0000039 in)

14.8 IMPROVEMENT OPPORTUNITIES


The accelerated introduction of new technology generations requires accelerated advancements of
metrology. As the field of metrology considers its past, present and, more importantly, its future, we
should recognize that metrology is an ‘enabling technology’. Developments in metrology don’t often
have a direct impact on the daily lives of people or society. However, developments in metrology can
have a direct impact on areas such as medicine, transportation and communication. With this in mind,
we in metrology can have the biggest impact in areas where we are best understood and applied.
New materials and process needs will continue to affect metrology; thus, it is difficult to
identify all future metrology needs. Shrinking feature sizes, tighter control of electrical device param-
eters such as threshold voltage and leakage current, and new interconnect materials will provide the
main challenges for physical metrology methods. To achieve desired device scaling, metrology tools
must be capable of measurement of properties on atomic distances. Thus, we must be able to pro-
vide measurement technologies suited for customer applications, and we must be able to interact
with users based on their needs rather than based on our technology. This will in most cases require
education—on the part of both the customer and supplier. ‘Harmonization’ has been somewhat of a
buzzword of the last few years and this is an essential topic for metrology. The metrology customer
does not care primarily about roughness, form or dimension—his concern is focused on his product
or process. The metrology standardization community as well as the instrument providers need to
accommodate this thinking, and in the end we need to come to a point where we have one ‘language’
of measurement. Finally, the metrology community must actively pursue collaboration between its
various disciplines. This presents a precarious situation in that ‘collaboration’ is often avoided due
to fears of competition. Competition must be maintained, as it is a catalyst for advancement. However,
collaboration is required to ensure that we are speaking the same language and providing comparable
results. Ultimately, collaboration grows the metrology customer base, whereas the lack of collabora-
tion can fragment it.

Review Questions
1. Explain the concept of instrument overlapping.
2. What do you mean by metrology integration?
3. Explain the applications of universal measuring machine.
4. Discuss the use of numerical control for measurement.
5. Discuss the working of a three-dimensional measuring probe.

6. Explain the importance of probe illumination.


7. Explain the concept of optical 3D measuring instruments.
8. Discuss the concept of in-process gauging.
9. Explain the concept of adaptive machining combined with inspection system.
10. Explain how form testing is done.
11. Write short notes on
a. Inline inspection
b. Coordinate Measuring Machines (CMM)
c. Specifications of CMM
d. CMM probes
e. Types of styli used in CMM
f. Multi-sensor-system in CMM
12. Explain the construction and working of universal measuring machines.
13. Explain the process of information flow to and from a CMM.
14. State the advantages and possible errors in a CMM.
15. Explain in brief the use of numerical control for measurement.
16. Describe a machine used for automatic gauging with the help of any case study.



15 Introduction to
Measurement Systems

‘Measurements should be made to produce the data needed to draw meaningful conclusions
from the system under test…’
Prof. S. M. Umrani, Member Management Committee, V.I.I.T., Pune, India
MEASUREMENT SYSTEMS
The progress of measurement systems in industry took place largely in the 1930s. With the growth
of continuous manufacturing, the need for continuous measurement of various process variables like
temperature, pressure, vibration, force, torque, strain, etc., became a need of the time. Thus, measurement
science is the foundation of efficient industrial processing and manufacturing. The measurement of a
given quantity is essentially an act or the result of comparison between the quantity and a predefined
standard. In this modern world, there are widespread applications of measurement systems in various
fields, viz., automobiles, residential appliances, war weapons, satellites, etc. Thus, the technology of using
instruments to measure and control the physical and chemical properties of materials is called
instrumentation.

15.1 DEFINITION OF MEASUREMENT

The measuring process is the one in which the property of an object or a system under consideration
is compared to an acceptable standard unit.
For a measurement to be meaningful, the following three basic things are required:

i. The standard used for comparison purposes must be accurately defined and should be commonly
accepted.
ii. The apparatus used and method adopted must be provable.
iii. The numerical measure is meaningless unless followed by the unit used.

15.1.1 Significance of Measurements


The basic purpose of measurement is to obtain the requisite information pertaining to the fruitful completion
of a process. The applications of measurement are monitoring of processes, control of processes and
experimental engineering analysis. The significance of measurements is the following:
Introduction to Measurement Systems 387

• As science and technology move ahead, new phenomena and relationships are discovered and
these advances make new types of measurements imperative.
• New discoveries are not of any practical utility unless results are backed by actual measurements.
• The measurements not only confirm the validity of hypothesis but also add to its understanding.
• This results in an unending chain, which leads to new discoveries that require more, new and
sophisticated measurement techniques.
• Science and technology are associated with sophisticated methods of measurement.
• Measurement plays a significant role in achieving goals and objectives of engineering because of
feedback information supplied by them.

15.2 METHODS OF MEASUREMENT

The methods of measurement may be broadly classified into two categories:


i. Direct Methods ii. Indirect Methods.

i. Direct Methods The information that may be available sometimes indicates the progress of
the process in a very simple way involving a direct relation. In direct measurement, the meaning of the
measurement and the purpose of the processing operation are identical. Such direct measurements are
generally accomplished by simple mechanical means.
In direct methods of measurement, the unknown quantity is directly compared against a standard
and the result is expressed as a numerical number and a unit (for example, consider collecting 1 litre of
water from a tank; here, the meaning of the measurement of volume and the purpose of the collecting
operation are the same, i.e., collecting 1 litre of water). Direct methods are quite common for the
measurement of quantities like length, mass and time. As the human factor is involved in direct
measurement, it may not necessarily be very accurate, and the sensitivity obtained is less. Direct
methods are therefore not preferred and are rarely used.

ii. Indirect Methods As direct measurement is not always possible, an indirect measurement
technique, involving a derived relationship between the measured quantity and the desired result is
adopted. In indirect measurement, the meaning of the measurement and the purpose of the processing
operation are not the same, but they are related to each other. The modern trend in the indirect methods
of measurement is to go for electrical methods which offer possibilities of high speed of operation,
simpler processing of the measurand and adaptation of computer processing as well. The important
aspects of indirect methods are that these methods are comparatively more accurate and have high
sensitivity. Equivalent output is obtained indirectly against a standard, and therefore, these methods are
common and are preferred for measurement of quantities like temperature, level, flow, etc.
Consider an example of pasteurizing milk. This operation is monitored by noting the temperature
of the milk. Here, the temperature measurement is indirect because the purpose of operation is to pas-
teurize the milk, i.e., to remove the bacteria that may damage the milk, and the meaning of measurement
here is to measure the milk temperature. But note that the extent of pasteurization depends upon the
temperature of the milk. In this example, direct measurement would be the bacteria count.

15.3 CLASSIFICATION OF MEASURING INSTRUMENTS

A measuring instrument is simply a device for determining or ascertaining the value of some particular
quantity or condition. The value determined by the instrument is generally, but not necessarily, quan-
titative. A measuring instrument may be required to indicate, record, register, signal or perform some
operations on the value it has determined. Measuring instruments are classified based upon the mode
by which they indicate any change in the quantity to be measured or based on the source of power or
by their function or by construction.

a. Classification Based on Standards (scale) Used for Measurements


1. Absolute Instruments These instruments give the magnitude of the quantity under mea-
surement in terms of the physical constants of the instrument, e.g., the tangent galvanometer and Rayleigh's
current balance. Working with absolute instruments for routine work is time consuming, as computing
the magnitude of the quantity under measurement takes considerable time. Absolute instruments are
seldom used except in standards institutions.

2. Secondary Instruments These instruments are so constructed that the quantity being
measured can only be determined by observing the output indicated by the instrument, e.g., voltmeter,
thermometer, pressure gauge, etc. These instruments are calibrated by comparison against absolute
instruments. Secondary instruments are commonly used, as they give direct readings; almost all
instruments used in the sphere of measurement are of this type.

b. Classification Based on Working


1. Automatic Instruments These instruments do not require manual assistance for their func-
tioning, e.g., mercury thermometer and float-operated level sensors.

2. Manual Instruments These instruments require manual assistance for their functioning,
e.g., a resistance thermometer with Wheatstone’s bridge indicator requires manual adjustment of the
null point to get the corresponding temperature reading.

c. Classification Based on Source of Power


1. Self-operated These instruments themselves generate the power required for their operation,
e.g., mercury thermometer.

2. Power-operated These instruments require external power supply for their functioning. This
power may be in the form of electricity or compressed air or hydraulic supply.

d. Classification Based on Construction


1. Self-contained These instruments have all of their parts enclosed in one physical assembly,
e.g., mercury thermometer.

2. External construction Some instruments have different elements contained in different


physical assemblies connected by data transmission elements, e.g., RTD.

e. Classification Based on Function


1. Indicating Type These instruments have some kind of calibrated scale and pointer. Any
change in the quantity to be measured is indicated by a change in the pointer position on the scale. The
scale has calibrations in terms of values of the measured quantity, e.g., mercury thermometer.

2. Recording Type These instruments continuously make a written record of the values of the
measured quantity against some other variable like time, e.g., if the furnace is cooled and these cooling
temperatures are sensed by a recording-type temperature-measuring instrument then the plot or graph
of the furnace temperature against time is produced by the instrument.

15.4 GENERALIZED MEASUREMENT SYSTEM

It is possible and desirable to describe the operation of a measuring instrument or a system in a gener-
alized manner without resorting to intricate details of the physical aspects of a specific instrument or
system. The whole operation can be described in terms of functional elements.
Most measurement systems contain the following four functional elements:

1. Primary sensing element


2. Variable conversion and manipulation element(s)
3. Data transmission element (s)
4. Data presentation element

1. Primary Sensing Element This element first receives the energy from the measured
medium and utilizes it to produce a condition representing the value of the measured variable. The
quantity under measurement makes its first contact with the primary sensing element of the mea-
surement system. This act is then immediately followed by the conversion of the measurand into an
analogous electrical signal. This work is done by a device that converts a physical quantity into an
electrical quantity, termed a transducer. The first stage of a measurement system is known as the
detector–transducer stage.

2. Variable Conversion Element This element converts the condition produced by the
primary element into the condition useful for functioning of the instrument. The output of the pri-
mary sensing element may be an electrical signal of any form. It may be voltage, frequency, current,
change in resistance or some other electrical parameter. For the instrument to perform the desired
function, it may be necessary to convert this output to some suitable form while preserving the
information content of the original signal. For example, suppose the output of the primary sensing
element is an analog signal and the next stage of the system may accept the signal only in the digital
form. Then an analog-to-digital converter is used to convert the signal into the desired form. Many

instruments do not need any variable conversion element, while others need more than one variable
conversion element.

Variable Manipulation Element This element performs certain operations on the condition
produced by the secondary element. It manipulates the signal presented to it, preserving the original
nature of the signal. Manipulation here means only a change in the numerical value of the signal. For
example, an electronic amplifier accepts a small signal as input and produces an output signal which is
also a voltage but of greater magnitude. It is not necessary that a variable manipulation element should
follow the variable conversion element, as shown in Fig. 15.1. It may precede the variable conversion
element. In case the voltage is too high, attenuators are used which lower the voltage or power for
the subsequent stage of the system. This element represents the parts used for indicating, recording,
signaling, registering or transmitting the measured quantity. The process of variable conversion and
manipulation is called signal conditioning.

[Fig. 15.1 shows the generalized measurement system as a chain of functional elements: the measurand
enters the primary sensing element (sensor/transducer), whose output as an electrical signal is converted
to a suitable form by the variable conversion element, amplified or attenuated by the variable
manipulation element, carried by the data transmission element if the display is at a remote location,
and finally shown by the data presentation element (digital display, analog display, CRT, recorder,
computer or microprocessor).]

Fig. 15.1 Generalized measurement system

3. Data Transmission Element When the elements of an instrument are physically sepa-
rated, or when the primary element is located far away from the secondary element, it becomes
necessary to transmit data from one element to another. The element that performs this function is
called a data transmission element.
Example Spacecraft are physically separated from the earth where the control stations guiding their
movements are located. Therefore, control signals are sent from these stations to the spacecraft by
telemetry system using radio signals.

4. Data Presentation Element The information about the quantity under measurement
must be displayed in an intelligible form to the personnel or the system for monitoring, control or
analysis purposes. This function is performed by the data presentation element. To monitor data, visual
display devices are required; these devices may be analog or digital. In case data is to be recorded, recorders
like magnetic tapes, plotters, printers, x-y or y-t recorders and digital storage oscilloscopes may be used.
Using these functional elements, we can measure any physical parameter.
Example Suppose we are measuring weight. The functional elements will remain the same as shown
in Fig. 15.1. Figure 15.2 shows the block diagram for weight measurement. In this case, the primary sensing
element used is the load cell, which is connected to the platform on which we place the weights. Weight is
the measurand; the load cell is the transducer. When a weight is kept on the platform, it exerts a force
on the load cell. The output of the load cell is in millivolts, so a voltage proportional to the weight is
produced. This voltage is amplified, calibrated in terms of weight and given to the conversion ele-
ment, which here is an analog-to-digital converter. The converted data is given to the
display, which is a digital display. As the display is located in the system, there is no need of a data
transmission element.

[Fig. 15.2 shows the measurement scheme for a weighing machine: the weight acts on the load cell
(primary sensing element), whose output in mV is proportional to the weight; the amplifier (data
manipulation element) amplifies the voltage; the analog-to-digital converter (data conversion element)
digitizes it; and the digital display (data presentation element) shows the digital output equivalent to
the weight.]

Fig. 15.2 Measurement scheme for weighing machine
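As a sketch, the functional chain of Fig. 15.2 can be mimicked as a pipeline of small functions. The scale factors (a load-cell output of 0.2 mV/kg, an amplifier gain of 1000 and a 10-bit ADC) are assumed for illustration and do not describe any particular instrument:

```python
# Sketch of the weight-measurement chain of Fig. 15.2 as a pipeline of
# functional elements. All scale factors are illustrative assumptions.

def load_cell(weight_kg):
    """Primary sensing element: converts weight to a millivolt signal."""
    return weight_kg * 0.2                    # assumed 0.2 mV per kg

def amplifier(signal_mv, gain=1000):
    """Variable manipulation element: amplifies the mV signal to volts."""
    return signal_mv * gain / 1000.0          # mV -> V

def adc(volts, full_scale=5.0, bits=10):
    """Variable conversion element: analog-to-digital conversion."""
    return int(volts / full_scale * (2 ** bits - 1))

def display(counts):
    """Data presentation element: digital readout."""
    return f"{counts} counts"

# The measurand flows through the chain exactly as in the block diagram.
print(display(adc(amplifier(load_cell(10.0)))))   # reading for a 10 kg weight
```

No data transmission element appears in the chain because, as in the figure, the display is local to the instrument.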

15.5 PERFORMANCE CHARACTERISTICS OF MEASURING DEVICES

The knowledge of the performance characteristics of an instrument is essential for choosing the most
suitable instrument for specific measurement. Measurement system characteristics are divided into two
categories, viz., (i) static characteristics, and (ii) dynamic characteristics.
These characteristics give a meaningful description of the quality of measurement. The static char-
acteristics are concerned with the measurement of quantities that are constant or vary slowly with time,
whereas, the dynamic characteristics are concerned with rapidly varying quantities.

15.5.1 Static Characteristics


The characteristics involved in the measurement of quantities that are either constant or slowly varying
with time in order to define a set of criteria that give meaningful description of quality of measurement

are called static characteristics. Normally, static characteristics of a measurement system are those that must
be considered when the system or instrument is used to measure a condition not varying with respect
to time.

1. Accuracy Accuracy of the instrument may be defined as its ability to respond to a true value of
a measured variable under reference conditions. In other words, it can also be explained as the closeness
with which an instrument reading approaches the true value of the quantity being measured. Moreover,
the accuracy of measurement means conformity to the truth. The accuracy of an instrument may be
expressed in different ways, viz., in terms of the measured variable itself, the span of the instrument, the
upper-range value, the per cent of scale length, or the per cent of actual output reading.

Overall Accuracy For the instruments composed of separate physical units like primary, secondary,
manipulation, etc., overall accuracy is expressed by combining individual accuracies of different elements.
For a pressure-spring thermometer having an accuracy of the bulb-capillary system of ±0.5% and an
accuracy of the Bourdon pressure gauge of ±1%, the overall accuracy can be expressed as

a. least accuracy: within ±(0.5 + 1)%, i.e., within ±1.5%

b. root-square accuracy: within ±√(0.5² + 1²) % ≈ ±1.12%
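Both ways of combining the component accuracies can be computed directly, as this sketch for the thermometer example shows:

```python
import math

# Combining component accuracies (pressure-spring thermometer example):
# bulb-capillary system +/-0.5 %, Bourdon pressure gauge +/-1 %.

def worst_case_accuracy(*errors_pct):
    """Least (worst-case) accuracy: simple sum of component errors."""
    return sum(abs(e) for e in errors_pct)

def root_square_accuracy(*errors_pct):
    """Root-sum-square accuracy of the component errors."""
    return math.sqrt(sum(e * e for e in errors_pct))

print(worst_case_accuracy(0.5, 1.0))             # worst-case, in per cent
print(round(root_square_accuracy(0.5, 1.0), 2))  # root-square, in per cent
```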

2. Precision It is a measure of reproducibility of the measurements, given a fixed value of a


quantity or the degree of exactness for which an instrument is designed or intended to perform. In
other words, precision is a measure of the degree of agreement within the group of measurements.
It is expressed in terms of conformity of the instrument, which is nothing but maximum devia-
tion of an instrument’s actual calibration curve as compared to its specified characteristic curve. In
general, the distinction between the words 'accuracy' and 'precision' is usually very vague. But as far

Table 15.1 Differences between accuracy and precision

Sl. No. | Accuracy | Precision
1. | It is closeness with the true value of the quantity being measured. | It is a measure of the reproducibility of the measurement.
2. | The accuracy of measurement means conformity to truth. | The term precise means clearly or sharply defined.
3. | Accuracy can be improved. | Precision cannot be improved.
4. | Accuracy depends upon simple techniques of analysis. | Precision depends upon many factors and requires many sophisticated techniques of analysis.
5. | Accuracy is a necessary but not a sufficient condition for precision. | Precision is a necessary but not a sufficient condition for accuracy.

as measurement is concerned, there is a difference between the two terms as they have sharp differ-
ences in meanings.

3. Repeatability When an instrument is subjected to a certain fixed, known input, and if instru-
ment readings are noted consecutively by approaching the measurement from the same direction under
the same operating conditions then the closeness of all these readings for the same input represents
repeatability of the instrument.
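As an illustration, repeatability can be quantified from a set of consecutive readings of the same fixed input, approached from the same direction; the readings below are assumed values:

```python
import statistics

# Repeatability sketch: spread of consecutive readings of the same fixed
# input under the same operating conditions. Readings are illustrative.
readings_mm = [10.002, 10.001, 10.003, 10.002, 10.001]

spread = max(readings_mm) - min(readings_mm)   # simple range measure
sigma = statistics.stdev(readings_mm)          # sample standard deviation

print(round(spread, 3))   # closeness of the repeated readings, in mm
```

The smaller the spread (or standard deviation), the better the repeatability of the instrument.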

4. Sensitivity The sensitivity of the instrument denotes the smallest change in the value of a
measured variable to which the instrument responds. In other words, sensitivity denotes the maximum
change in an input signal (measured variable) that will not initiate a response on the output (indication),
e.g., if the sensitivity of a thermometer is 1°C, it means the thermometer output (response) would change only
if the temperature around it changes by 1°C. Any changes in temperature less than 1°C are not indicated
by this thermometer. Therefore, the static sensitivity of an instrument is the ratio of the magnitude of
the output signal or the response to the magnitude of the input signal or quantity being measured. Its
units depend upon the type of input and output, e.g., count per volt, millimetre per microampere, etc.
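A minimal sketch of computing static sensitivity as the ratio of output change to input change follows; the sensor values are assumed for illustration only:

```python
# Static sensitivity sketch: ratio of the magnitude of the output change
# to the magnitude of the input change. Values are illustrative: a sensor
# whose output rises from 4.0 mV to 4.8 mV as the input rises from
# 100 deg C to 120 deg C.

def static_sensitivity(delta_output, delta_input):
    return delta_output / delta_input

s = static_sensitivity(4.8 - 4.0, 120 - 100)
print(round(s, 3))   # sensitivity in mV per deg C
```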

5. Reproducibility The reproducibility of an instrument is the degree of closeness with which


a given value of quantity or condition can be repeatedly measured while approaching the measurement
from both sides under the same operating conditions. It is expressed in terms of units for a given
period of time. Perfect reproducibility means that the instrument has no drift. No drift means that with
a given input, the measured values do not vary with time.

6. Drift Drift is an undesirable quality in industrial instruments because it is rarely apparent. The
gradual shift in the indication or record of the instrument over an extended period of time, during
which the true value of the variable does not change is referred as drift. Different kinds of drift are
explained below.

a. Zero Drift If the whole calibration gradually shifts by the same amount due to slippage, or due to
undue warming up of electronic tube circuits, zero drift sets in. Zero setting can prevent this (i.e., by shift-
ing the pointer position). The input–output characteristics with zero drift are shown in Fig. 15.3 (a).

b. Span Drift or Sensitivity Drift If there is proportional change in the indication all along
the upward scale, the drift is called span drift or sensitivity drift. Hence, higher calibrations get shifted
more than the lower calibrations. The characteristics with span drift are shown in Fig. 15.3 (b).

c. Zonal Drift In case the drift occurs only over a portion of span of an instrument, while the
remaining portion of the scale remains unaffected, it is called zonal drift.
There are many environmental factors which cause drift. They may be stray electric/magnetic fields,
thermal emfs, changes in temperature, mechanical vibrations, wear and tear, and high mechanical stress
developed in some parts of the instruments and systems.
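The difference between zero drift and span drift can be sketched on an assumed linear instrument; the sensitivity and drift values below are illustrative:

```python
# Sketch of zero drift vs. span drift on an assumed linear instrument
# (nominal output = sensitivity * input). All values are illustrative.

SENSITIVITY = 2.0   # output units per input unit (assumed)

def output(inp, zero_drift=0.0, span_drift_pct=0.0):
    return SENSITIVITY * inp * (1 + span_drift_pct / 100) + zero_drift

# Zero drift: the whole characteristic shifts by the same amount.
print(output(0, zero_drift=0.5), output(10, zero_drift=0.5))

# Span drift: higher calibrations shift more than lower ones.
print(round(output(1, span_drift_pct=5), 2), round(output(10, span_drift_pct=5), 2))
```

With zero drift every reading is offset by the same constant; with span drift the error grows in proportion to the reading, matching Fig. 15.3 (a) and (b).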

[Fig. 15.3 compares the nominal input–output characteristics of an instrument with the drifted ones:
in (a) zero drift, the whole characteristic shifts from the nominal by a constant amount; in (b) span
drift, the characteristic rotates away from the nominal so that higher readings shift more.]

Fig. 15.3 (a) Zero and (b) span drift

7. Dead Zone It is defined as the largest change of input quantity for which there is no output of
the instrument. For example, the input applied to the instrument may not be sufficient to overcome the
friction and it will, in that case, not move at all. It will only move when the input is such that it produces
a driving force which can overcome the friction forces.

8. Linearity Linearity is one of the most important characteristics of a measurement system


or instrument. Linearity simply means that the output is linearly proportional to the input. For most
of the systems, a linear behavior is desirable. But if the input–output relationship is not a straight
line, it should not be concluded that the instrument is inaccurate; rather, the instrument may be
highly accurate. However, the linear behavior is most desirable because of the following factors:

i. The conversion from a scale reading to the corresponding measured value of input quantity is
most convenient if one merely has to multiply by a fixed constant.
ii. When the instrument is part of a large data or control system, the linear behavior of the part
often simplifies the design and analysis of the whole system.

Thus, linearity can be defined as the measure of the maximum deviation of a calibration point from
a straight line. Figure 15.4 shows the actual calibration curve, i.e., a relationship between input–output
and a straight line drawn from the origin using the method of least squares.
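As a sketch of that definition (illustrative code, not from the text; the calibration readings are made up), the least-squares line through the origin has slope Σxy/Σx², and the linearity error is the largest deviation of any calibration point from that line:

```python
def linearity_error(inputs, outputs):
    # Least-squares straight line through the origin: slope m = sum(xy)/sum(x^2)
    m = sum(x * y for x, y in zip(inputs, outputs)) / sum(x * x for x in inputs)
    # Linearity is the maximum deviation of a calibration point from the line
    max_dev = max(abs(y - m * x) for x, y in zip(inputs, outputs))
    return m, max_dev

# A slightly bowed calibration curve (assumed readings)
x = [1, 2, 3, 4, 5]
y = [1.02, 2.05, 3.01, 3.95, 4.90]
slope, deviation = linearity_error(x, y)
```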

9. Resolution or Discrimination If the input is slowly increased from some arbitrary (non-
zero) input value, it will again be found that the output does not change at all until a certain incre-
ment is exceeded. This increment is called resolution or discrimination of the instrument. Thus, the
smallest increment in input (the quantity being measured), which can be detected with certainty by an
Introduction to Measurement Systems 395

Fig. 15.4 Actual calibration curve: the actual input–output calibration curve plotted together with the idealized straight line, with the maximum deviation between the two marked

instrument, is its resolution or discrimination. Thus, resolution defines the smallest measurable input
change while the threshold defines the smallest measurable input itself.
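A crude way to picture resolution (an illustrative model, not from the text) is an instrument whose output moves only in whole steps of its resolution, so input increments smaller than a step go undetected:

```python
def quantized_reading(x, resolution=0.5):
    # The display moves only in whole steps of the resolution; smaller
    # input increments produce no change in the indicated value
    return resolution * round(x / resolution)

inputs = (1.0, 1.2, 1.24, 1.26, 1.5)
readings = [quantized_reading(v) for v in inputs]
print(readings)  # the 1.0 -> 1.24 changes go undetected; 1.26 finally registers
```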

10. Threshold If the instrument input is increased very gradually from zero, there will be some
minimum value below which no output change can be detected. This minimum value defines the thresh-
old of the instrument. The first detectable output change is often described as any ‘noticeable
measurable change’. This phenomenon is due to input hysteresis.

11. Hysteresis Hysteresis is a phenomenon which depicts different output effects when loading
and unloading, whether it is a mechanical system or an electrical system, and for that matter, any system.
Hysteresis is the non-coincidence of loading and unloading curves. Consider an instrument which has
no friction due to sliding parts. When the input of this instrument is slowly varied from zero to full
scale and then back to zero, its output varies as shown in Fig. 15.5(a).
Hysteresis in a system arises due to the fact that all the energy put into the stressed parts when load-
ing is not recoverable upon unloading. This is because the second law of thermodynamics rules out any
perfectly reversible process in the world.
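One common way to quantify this (an illustrative sketch; the readings below are assumed) is to express the widest gap between the loading and unloading curves as a percentage of the full-scale output:

```python
def hysteresis_error(loading, unloading, full_scale):
    # Widest non-coincidence between the loading and unloading curves,
    # expressed as a percentage of the full-scale output
    widest = max(abs(u - l) for l, u in zip(loading, unloading))
    return 100.0 * widest / full_scale

# Outputs recorded at the same input points, going up the scale and back down
up   = [0.0, 2.0, 4.1, 6.1, 8.0, 10.0]
down = [0.3, 2.4, 4.5, 6.4, 8.2, 10.0]
error_percent = hysteresis_error(up, down, full_scale=10.0)
```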

12. Static Calibration In general, calibration is defined as a process in which the measurand
is compared with the known standard. The static calibration refers to a situation in which all inputs
(desired, interfering, modifying) except one are kept at constant values. Then the one input under study
is varied over some range of constant values. The input–output relations developed in this way comprise
a static calibration valid under stated constant conditions of all the other inputs. The calibration of all

Fig. 15.5 Hysteresis effect: (a) hysteresis when measurement is from zero onwards, showing the non-coincident loading and unloading curves; (b) hysteresis when measurement starts on the positive and negative side

instruments is important since it affords the opportunity to check the instrument against a known stan-
dard and subsequently to find errors and accuracy.

15.5.2 Dynamic Characteristics of Measurement System


The static and dynamic characteristics together give a meaningful description of the quality of measurement. The static
characteristics are concerned with the measurement of quantities that are constant or vary slowly with time;
whereas, the dynamic characteristics are concerned with rapidly varying quantities. For studying dynamic
behaviour or time-varying response of the instrument, its primary or sensing element is subjected to some
known and predetermined change in the quantity to be measured. The manner by which output of the
instrument responds (adjusts) to this change in input is represented by the graph of output vs. time that gives
dynamic or time-varying behaviour of the instrument. The dynamic characteristics are the following:

1. Speed of Response It is the rapidity or fastness with which an instrument responds to any
changes in the input (measured quantity). It can be observed that instruments rarely respond instanta-
neously to changes in the measured variable; there is some time lag between the change in the input and
the initiation of the change in the output of the instrument. Also, the speed at which the output changes is
usually smaller than the speed at which the input changes.
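This lag is often pictured with a first-order instrument model (an illustrative sketch, not from the text; the time constant is assumed), whose response to a unit step input rises exponentially with time constant τ:

```python
import math

def first_order_step(t, tau):
    # Output of a first-order instrument after a unit step input at t = 0:
    # y(t) = 1 - exp(-t / tau)
    return 1.0 - math.exp(-t / tau)

tau = 2.0  # time constant in seconds (assumed)
print(round(first_order_step(tau, tau), 3))      # 0.632 after one time constant
print(round(first_order_step(5 * tau, tau), 3))  # 0.993 after five
```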

2. Measuring Lag It is the retardation or delay in the response of a measurement system to


changes in the measured quantity. The measuring lag is of two types:

i. Retardation Type In this case, the response of the measurement system begins immediately
after a change in the measured quantity has occurred.

ii. Time-delay Type In this case, the response of the measurement system begins after a dead
time after the application of the input.

3. Fidelity Fidelity of an instrument is the degree of closeness with which a measurement system
responds (i.e., indicates or records) to changes in the measured variable. Thus, fidelity represents how
closely the instrument reading follows the actual value of the measured quantity.

4. Dynamic Error It is the difference between the true value of the quantity (under measurement)
changing with time and the value indicated by the measurement system if no static error is assumed. It
is also called measurement error.

15.6 TYPES OF ERRORS

Since errors are unwanted entities in any measurement process, it is imperative to interpret the results
of quantitative measurement in an intelligent manner. An understanding and thorough evaluation of
errors is essential. A study of errors is the first step in finding ways to reduce them. Errors may arise
from different sources and are usually classified as

1. Gross error
2. Systematic error
3. Random error

1. Gross Error Gross error mainly covers human mistakes in reading instruments, and record-
ing and calculating measurement results. The observer may grossly misread the scale. For example, a
person may, due to oversight, read the temperature as 32.5°C while the actual reading may be 22.5°C.
He or she may transpose the reading while recording. For example, the person may read 28.5°C and
record it as 25.5°C. As long as human beings are involved, some gross errors will definitely be com-
mitted. Complete elimination of gross errors is probably impossible. One should try to anticipate and
correct them. Gross errors may be of any amount and, therefore, their total elimination is mathemati-
cally impossible. However, they can be avoided by adopting two means—(a) great care should be taken
in reading and recording the data, and (b) two, three or more readings should be taken for the quantity
under measurement. It is always advisable to take a large number of readings as a close agreement
between readings assures that no gross error has been committed.
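The advice above can be sketched in code (illustrative only; the tolerance and readings are assumed): compare each reading with the median of the repeated set and flag any reading that disagrees badly:

```python
from statistics import median

def flag_gross_errors(readings, tolerance=2.0):
    # A reading far from the consensus of repeated readings is a candidate
    # gross error, e.g., a misread or transposed figure
    centre = median(readings)
    return [r for r in readings if abs(r - centre) > tolerance]

temps = [22.4, 22.5, 32.5, 22.6, 22.5]  # 32.5 deg C misread for 22.5 deg C
suspect = flag_gross_errors(temps)
print(suspect)  # [32.5]
```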

2. Systematic Error Systematic errors are divided into three categories:


i. Instrumental errors
ii. Environmental errors
iii. Observational errors

Instrumental errors are inherent in instruments because of their mechanical structure. They may be
due to construction or calibration of instruments. Errors may be caused because of friction, hysteresis
or even gear backlash. It is possible to eliminate static errors or at least reduce them to a great extent
by understanding the procedure of measurement. Calibration against standards may be used for the
purpose, correction factors should be applied after determining the instrumental errors and the instru-
ment may be recalibrated carefully.

Misuse of Instruments The errors caused in such measurements are due more to the fault of
the operator than of the instrument. A good instrument used in an unintelligent way may give
erroneous results. Examples of such misuse are failure to adjust the zero of the instrument, poor
initial adjustments, using leads of too high a resistance, etc. These errors can be eliminated by
handling the instrument in a proper manner and by following the manufacturer’s instructions.

Errors due to Loading Effects One of the most common errors committed by
beginners is the improper use of an instrument for measurement. In a measurement system, we deal
with both electrical and mechanical quantities and elements, and hence the loading effect may occur
on account of both electrical and mechanical elements. The loading effects are due to the impedances
of the various elements connected in a system.

3. Random Errors These errors have unknown or non-determinable causes which can be
treated mathematically using the laws of probability. These errors may be due to improper instru-
ment design, insufficient control of process parameters, and/or insufficient knowledge of the pro-
cess parameters.
After understanding some basic aspects of measurement systems and its components, we discuss
different types of transducers, intermediate devices and terminating devices in the next chapter.

Review Questions

1. Discuss the significance of measurement.


2. Explain the term measurement and differentiate between direct and indirect measurement.
3. Classify the measuring instruments on the basis of the following aspects:
a. Standards (scale) used for measurements
b. Mode of working
c. Source of power
d. Construction
e. Function
4. Write a short note on generalized measurement system.
5. Describe the working of various parts of the measuring instrument with examples of
a. Bourdon pressure gauge
b. Thermometer (Hg in glass)

6. Discuss the performance characteristics of measuring devices.


7. Differentiate between
a. Static and dynamic characteristics
b. Accuracy and precision
c. Repeatability and reproducibility
d. Random and systematic error
8. Define sensitivity, drift, dead zone, resolution or discrimination and threshold.
9. Define drift and explain its types.
10. Explain what you mean by hysteresis and its effect in the measurement process.
11. Explain the following dynamic characteristics: speed of response, measuring lag, fidelity.
12. Explain the different types of errors in measurement.
16 Intermediate Modifying
and Terminating Devices

Intermediate and modifying devices are used to amplify, attenuate, filter, modulate or
otherwise modify the input signal into a format which will be acceptable to the output device….
…. M J Khurjekar, Professor, E & TC, V.I.I.T., Pune.
ELECTRONIC
INSTRUMENTATION SYSTEM
An electronic instrumentation system consists of a number of components to perform a measurement and record its results. As explained in the earlier chapter, a generalized measurement system consists of three major components — an input device, a signal-conditioning or processing device, and an output device. The input device receives the measurand, or the quantity under measurement, and delivers a proportional or analogous electrical signal to the signal-conditioning device, where the signal is amplified, attenuated, filtered, modulated, or otherwise modified into a format acceptable to the output device.

16.1 TRANSDUCERS

The input quantity for most instrumentation systems is a ‘Non-electrical Quantity’. In order to use elec-
trical methods and techniques for measurement, manipulation or control, the non-electrical quantity is
generally converted into an electrical form by a device called a ‘transducer’. It can be defined as a device
which, when actuated, transforms energy from one form to another. Broadly speaking, a transducer is a
device that transforms one type of energy into another. For example, a battery (chemical energy
converted to electrical energy) and an ordinary glass thermometer (heat energy converted into
mechanical displacement of a liquid column) are both transducers. Devices which convert mechanical
force into an electrical signal form a very large and important group of transducers commonly used in
the industrial instrumentation area. Many other physical parameters such as heat, intensity of light,
flow rate, liquid level, humidity and pH value may also be converted into electrical form by means of transducers. These
transducers provide an output signal when stimulated by a mechanical or a non-mechanical input: a
photoconductive cell converts light intensity into a change of resistance, a thermocouple converts heat energy
into an electrical voltage, a force produces a change of resistance in a strain gauge, an acceleration produces

a voltage in a piezoelectric crystal, and so on. In all cases, however, the electrical output is measured by
standard methods, giving the magnitude of the input quantity in terms of an analogous output.

16.1.1 Criterion for Selection of Transducer


The transducers and the methods used may depend upon the instrumentation already available and
also on the technical skill and experience of the user. Unfortunately, most transducers are not sensitive
to just one quantity. If measurements are to be made under conditions where there is a likelihood of
two or more input quantities influencing the transducer, it is desirable to select a transducer which is
sensitive to the desired quantity and insensitive to the unwanted quantities. If this is not possible, ways
and means should be found to eliminate or compensate for the effects of the unwanted input quantity.
The following is the summary of factors influencing the choice of a transducer for measurement of a
physical quantity:

1. Operating Principle Transducers are many times selected on the basis of the operating
principles used by them. The operating principles used may be resistive, inductive, capacitive, optoelec-
tronic, piezoelectric, etc.

2. Sensitivity The transducer must be sensitive enough to produce a detectable output.

3. Operating Range The transducer should maintain the range requirements and have a good
resolution over its entire range. The rating of the transducer should be sufficient so that it does not
break down while working in its specific operating range.

4. Accuracy A high degree of accuracy is assured if the transducer does not require frequent cali-
bration and has a small value for repeatability. It may be emphasized that in most industrial applications,
repeatability is of considerably more importance than absolute accuracy.

5. Cross Sensitivity Cross sensitivity is a further factor to be taken into account when measur-
ing mechanical quantities. These are situations where the actual quantity being measured is in one plane
and the transducer is subjected to variations in another plane. More than one promising transducer
design has had to be abandoned because its sensitivity to variations of the measured quantity in a plane
perpendicular to the required plane was such as to give completely erroneous results when the trans-
ducer was used in practice.

6. Errors The transducer should maintain the expected input–output relationship described by its
transfer function so as to avoid errors.

7. Transient and Frequency Response The transducer should meet the desired time
domain specifications like peak overshoot, rise time, settling time and small dynamic error. It should
ideally have a flat frequency response curve. In practice, however, there will be cut-off frequencies, and
higher cut-off frequency should be high in order to have a wide bandwidth.

8. Loading Effects The transducer should have a high input impedance and a low output imped-
ance to avoid loading effects.
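The effect of these impedances can be sketched with a simple voltage-divider model (an illustrative sketch with assumed values): the next stage sees only a fraction Zin/(Zout + Zin) of the transducer’s open-circuit signal:

```python
def loaded_fraction(z_out, z_in):
    # Voltage-divider model of loading: fraction of the source voltage
    # that actually appears across the input impedance of the next stage
    return z_in / (z_out + z_in)

# A 100-ohm output impedance feeding a 1-megohm input loses almost nothing;
# feeding a 1-kilohm input, it loses about 9% of the signal
print(loaded_fraction(100, 1_000_000))
print(loaded_fraction(100, 1_000))
```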

9. Environmental Compatibility It should be assured that the transducer selected to work


under specified environmental conditions maintains its input–output relationship and does not break
down. For example, the transducer should remain operable under its temperature range. It should be
able to work in corrosive environments (if the application so requires), should be able to withstand
pressures and shocks and other interactions to which it is subjected.

10. Insensitivity to Unwanted Signals The transducer should be minimally sensitive to


unwanted signals and highly sensitive to desired signals.

11. Usage and Ruggedness The ruggedness, both mechanical and electrical, of a transducer,
weighed against its size and weight, must be considered while selecting a suitable transducer.

12. Electrical Aspects The electrical aspects that need consideration while selecting a trans-
ducer include the length and type of cable required. Attention also must be paid to signal-to-noise ratio
in case the transducer is to be used in conjunction with amplifiers. Frequency response limitations must
also be taken into account.

13. Stability and Reliability The transducer should exhibit a high degree of stability to be
operative during its operation and storage life. Reliability should be assured in case of failure of a trans-
ducer in order that the functioning of the instrumentation system continues uninterrupted.

14. Static Characteristics Apart from low static error, the transducers should have a low
non-linearity, low hysteresis, high resolution and a high degree of repeatability. The transducer
selected should be free from load alignment effects and temperature effects. It should not need
frequent calibration, should not have any component limitations, and should be preferably small in
size.

16.1.2 Classification of Transducer


The transducer can be classified as (i) primary and secondary, (ii) active and passive, (iii) on the basis of
transduction form used, (iv) analog and digital transducer, and (v) transducer and inverse transducer.

1. Primary and Secondary Whenever a parameter is to be measured, there may be a requirement
of a two-stage transducer. The first stage is called a primary transducer and the second stage is
called a secondary transducer. For example, consider the case of a Bourdon tube. The Bourdon tube
acting as a primary detector senses the pressure and converts the pressure into a displacement of its
free end. Now, to convert the displacement of the free end of a Bourdon tube to an analogous electri-
cal signal, the core of an LVDT is connected to it. The LVDT produces an output voltage which is
proportional to the displacement of the free end. Thus, there are two stages of transduction; firstly, the

pressure is converted into a displacement by the Bourdon tube and then the displacement is converted
into analogous voltage by LVDT. The Bourdon tube is called a primary transducer and the LVDT is
called a secondary transducer.

2. Active and Passive Transducers may be classified according to whether they are active
or passive. Active transducers are those which do not require an auxiliary power source to produce
their output; they are also known as self-generating transducers. The energy required for producing
the output signal is obtained from the physical quantity being measured, so active transducers tend
to be compact. Passive transducers derive the power required for transduction from an auxiliary
power supply; they are also known as externally powered transducers. They derive only part of the
power required for conversion from the physical quantity under measurement, and the auxiliary
power supply must be allowed for when considering size.

3. On the Basis of Transduction Form Used The transducer can also be classified on the
basis of the principle of transduction as resistive, inductive, capacitive, etc., depending upon how they
convert the input quantity into resistance, inductance or capacitance respectively. They can be classified
as piezoelectric, thermoelectric, magnetostrictive, electrokinetic and optical transducers.

4. Analog and Digital Transducers The transducers can be classified on the basis of the
output, which may be a continuous function of time, or the output may be in discrete steps.
Analog transducers convert the output quantity into an analog output, which is a continuous func-
tion of time. Thus a strain gauge, an LVDT, a thermocouple or a thermistor may be called ‘analog
transducers’ as they give an output which is a continuous function of time. And digital transducers
convert the input quantity into an electrical output, which is in the form of pulses.

5. Transducer and Inverse Transducer A transducer can be broadly defined as a device


which converts a non-electrical quantity into an electrical quantity. And an inverse transducer is defined
as a device, which converts an electrical quantity into a non-electrical quantity. These types of trans-
ducers can help circuits to gather information, make noise, and do many other things. These two types
of transducers are also termed input transducers (e.g., photoresistors, phototransistors, thermistors,
microphones, piezoelectric sensors), and output transducers (e.g., light emitting diodes, piezoelectric
buzzers, speakers). Some of these transducers are discussed below in brief.

(a) Photoresistors Photoresistors are variable resistors. When the light shining on them increases
in intensity, their resistance is lowered. Working photoresistors into your circuits will allow you to detect
changes in lighting. For example, you could build a circuit to beep if someone turned on your room
lights.

(b) Thermistor Thermistors are also variable resistors, however, instead of being sensitive to light,
they are sensitive to temperature. There are two types of thermistors, viz., positive temperature co-
efficient (PTC) thermistors and negative temperature coefficient (NTC) thermistors. The resistance of

a PTC device increases as the temperature increases, whereas the resistance of an NTC device
decreases as the temperature increases.
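For an NTC device, the resistance–temperature relationship is often approximated by the beta-parameter model R(T) = R0·exp[B(1/T − 1/T0)]; the sketch below uses assumed values (10 kΩ at 25 °C, B = 3950 K) purely for illustration:

```python
import math

def ntc_resistance(temp_c, r0=10_000.0, t0_c=25.0, beta=3950.0):
    # Beta-parameter model of an NTC thermistor; temperatures in kelvin
    t, t0 = temp_c + 273.15, t0_c + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

# Resistance falls as temperature rises
for temp in (0.0, 25.0, 50.0):
    print(temp, round(ntc_resistance(temp)))
```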

(c) Microphone/Speaker A microphone converts pressure changes (sound) to voltage changes.


Speakers are usually used to convert voltage changes to pressure changes (sound), but they can also
work the other way. Normally, pressure changes near a microphone do not affect the voltage very
much, and the signal must be amplified.

(d) Piezoelectric Devices Piezoelectric devices contain special crystals. These crystals will pro-
duce a voltage if pressure is applied to them in one direction. The crystals will also bend if voltage is
applied to them. A common use of piezoelectric devices is ‘buzzers’, which produce a buzzing noise
when a voltage is applied.

(e) Light-Emitting Diodes Light emitting diodes, or LEDs as they are usually called, generate
light when a current is passed through them.

(f) Capacitors Capacitors store electrical charge. This charge builds up on the capacitor’s
plates and is released when there is a short between the plates. Capacitance is measured in farads.
One farad represents an enormous quantity of charge; most capacitors are much smaller, in the micro-
farad and picofarad range. A capacitor can be charged almost instantly if its leads are connected
directly to a power supply; the charging time can be increased by adding a resistor between the power
supply and the capacitor.
The formula for the charge during charging is q = q_eq[1 − e^(−t/RC)], where q_eq is the equilibrium
charge and RC is the time constant, equal to the time required for the capacitor to accumulate 63.2%
of its equilibrium charge. In addition to releasing its charge through a short, a capacitor may also lose
its charge by ‘leaking’ after it is completely charged. This process can be slowed by connecting a resistor
across the two leads of the capacitor; the larger the resistor, the longer the discharge takes. The formula
for discharging is q = q_initial·e^(−t/RC).
Total capacitance combines in the opposite way to total resistance: for capacitors in series, the total
is the reciprocal of the sum of the reciprocals of the individual capacitances, while for capacitors in
parallel the individual values are simply added together.
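These relations can be sketched numerically (an illustrative sketch; the component values are assumed):

```python
import math

def charge_fraction(t, r, c):
    # Fraction of the equilibrium charge accumulated after time t while
    # charging through resistance r: q/q_eq = 1 - exp(-t/RC)
    return 1.0 - math.exp(-t / (r * c))

def series_capacitance(values):
    # Capacitors in series: reciprocal of the sum of reciprocals
    return 1.0 / sum(1.0 / c for c in values)

def parallel_capacitance(values):
    # Capacitors in parallel: values simply add
    return sum(values)

r, c = 10_000.0, 100e-6            # 10 kilohms and 100 microfarads: RC = 1 s
print(round(charge_fraction(r * c, r, c), 3))  # 0.632 after one time constant

caps = [10e-6, 10e-6]              # two 10-microfarad capacitors
print(series_capacitance(caps))    # about 5 microfarads, half of one capacitor
print(parallel_capacitance(caps))  # 20 microfarads
```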

6. Electro-mechanical Transducer We concern ourselves here with a specific class of


transducers: devices which translate an input of mechanical energy into equivalent electrical signals
for measuring and/or controlling the input phenomena. This type of electro-mechanical transducer is
usually located at the source of the physical force or energy, and responds to its magnitude. The readout
or control instrumentation can then be positioned at any convenient distance from the transducer, and
connected to it by electrical wiring. Common examples of such transducers are used to measure fluid
pressure, weight, acceleration, displacement, torque, etc. The word ‘cell’ is often used for convenience
to describe a compact transducer (i.e., load cell, pressure cell). Certain forms of transducers have their
own family names, which usually derive from the physical phenomena they measure. Examples include
accelerometers, extensometers and vibrometers.

Transducers can be, and are, manufactured on many different operating principles—resistive,
inductive, capacitive, piezoelectric, etc. Miniature accelerometers for the measurement of high-range
dynamic acceleration forces, for example, are usually constructed with piezoelectric sensing elements
because of the resulting small size and weight, and the self-generated electrical output. Similarly,
when some special aspect of the application requires it, capacitive or inductive sensors may be used.
The bonded metallic-resistance strain gauge, however, because of its unique set of operational char-
acteristics, has easily dominated the transducer field for the past twenty years or so.

7. Electro-magnetic Transducer (Eddy – Current Transducer) The eddy current


transducer uses the effect of eddy (circular) currents to sense the proximity of non-magnetic but con-
ductive materials. A typical eddy-current transducer (shown in Fig. 16.1) contains two coils—an active
coil (main coil) and a balance coil. The active coil senses the presence of a nearby conductive object,
and the balance coil is used to balance the output bridge circuit and for temperature compensation. It
can be used for non-contacting measurement. It gives high resolution and high-frequency response,
but the effective distance is limited to close range. Also, the relationship between the distance and the

Fig. 16.1 Typical eddy-current transducer: a probe containing a reference (balance) coil and an active coil, whose magnetic field lines couple to the conductive target

impedance of the coil is non-linear and temperature-dependent. Fortunately, a balance coil can com-
pensate for the temperature effect; as for the non-linearity, careful calibration can ease this drawback.
The transducer cannot be used for detecting the displacement of non-conductive materials or thin metalized films.
However, a piece of conductive material with sufficient thickness can be mounted on nonconductive
targets to overcome this drawback. A self-adhesive aluminum-foil tape is commercially available for this
purpose. However, this practice is not always possible. Calibration is generally required, since the shape
and conductivity of the target material can affect the sensor response.

16.2 USE OF TRANSDUCERS FOR DISPLACEMENT MEASUREMENT

Displacement is a fundamental variable whose measurement is involved in many other physical param-
eters such as velocity, acceleration, force, torque, etc. When measurement is direct, it gives displacement
directly but when indirect methods are used, information regarding the other associated variables like
force, velocity, acceleration, vibration, torque, etc., can also be obtained.
The displacement is sensed by the primary sensing element, and the output of the primary sensing
element is given to the data manipulation system; so if the output is a weak signal then it is amplified
using the data-manipulation element. The output of the data manipulation element is converted to an
appropriate form by a data-conversion element for indication after processing and calibration.

16.2.1 Types of Displacement Measurement


Depending on the means employed, displacement measurement may broadly be classified as mechanical,
pneumatic, electrical or optical. Mechanical measurements are quite useful in practice, but the range of
use of such instruments is often small to medium.

1. Pneumatic Measurements The pneumatic type generally uses a flapper-nozzle assembly.


The accuracy of this method depends mainly on constancy of the supply pressure.

2. Electrical Measurements Electrical methods generally convert the displacement to a con-


venient form of electrical quantity like voltage, current, resistance, etc.

3. Optical Measurements Optical methods use photo-detectors, which yield the output ulti-
mately in an electrical quantity like current, voltage, etc.
The transducers used for displacement measurement are (i) potentiometers, (ii) LVDT, (iii) capaci-
tance type, (iv) digital transducer, and (v) nozzle-flapper transducer.

16.2.2 Linear Variable Differential Transformer (LVDT)


A linear variable differential transformer (LVDT) is a device commonly used to measure linear displace-
ment. It is an electro-mechanical device designed to produce an ac voltage output proportional to the
relative displacement of the transformer and the armature. An LVDT consists of a stationary coil
assembly and a movable core (see Fig. 16.2). The coil assembly houses one primary and two second-
ary windings. The core is a steel rod of high magnetic permeability, smaller in diameter than the
internal bore of the coil assembly, so the rod can be mounted without making contact with
the coil assembly. Thus, the rod can move back and forth without friction or wear.
When an ac excitation voltage is applied to the primary winding, a voltage is induced in each sec-
ondary winding through the magnetic core. The position of the core determines how strongly the
excitation signal couples to each secondary winding. When the core is in the centre, the voltages of the
two secondary coils are equal and 180 degrees out of phase, resulting in no net output signal. As the core
travels to the left of the centre, the primary coil is more tightly coupled to the left secondary coil, creating an output

signal in phase with the excitation signal. As the core travels to the right of the centre, the primary coil
is more tightly coupled to the right secondary coil, creating an output signal 180 degrees out-of-phase
with the excitation voltage.
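The behaviour just described can be idealized as a simple model (an illustrative sketch; the sensitivity figure is assumed): the output amplitude grows linearly with core displacement from the null position, and the phase tells which side of centre the core is on:

```python
def lvdt_output(displacement_mm, sensitivity_v_per_mm=0.25):
    # Idealized LVDT: amplitude proportional to displacement from the null,
    # phase 0 degrees on one side of centre and 180 degrees on the other
    amplitude_v = abs(displacement_mm) * sensitivity_v_per_mm
    phase_deg = 0 if displacement_mm >= 0 else 180
    return amplitude_v, phase_deg

print(lvdt_output(0.0))   # (0.0, 0)   null position: no output signal
print(lvdt_output(2.0))   # (0.5, 0)   in phase with the excitation
print(lvdt_output(-2.0))  # (0.5, 180) out of phase with the excitation
```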

Fig. 16.2 Cross section of an LVDT: the movable core inside the coil assembly, with the primary winding flanked by the two secondary windings from which the output signal is taken

Signal and its Conditioning Many sensors used in process control and monitoring applica-
tions generate a current signal, usually 4 mA to 20 mA or 0 mA to 20 mA. Current signals are some-
times used because they are less sensitive to errors such as radiated noise and voltage drops due to lead
resistance. Signal-conditioning systems must convert this current signal to a voltage signal. To do this
easily, pass the current signal through a resistor, as shown in Fig. 16.3.
Then, with the help of a DAQ system, the voltage VO = IS·R generated across the resistor,
where IS is the current and R is the resistance, is measured. Select a resistor value that gives a usable
range of voltages, and use a high-precision resistor with a low temperature coefficient. For example, a
249-ohm, 0.1%, 5-ppm/°C resistor converts a 4 mA to 20 mA current signal into a voltage signal that
varies from 0.996 V to 4.98 V.
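That conversion is one line of arithmetic; the sketch below reproduces the worked figures from the text (the 249-ohm resistor is the text’s own example):

```python
def current_to_voltage(i_amps, r_ohms=249.0):
    # Voltage developed across the precision shunt resistor: V = I * R
    return i_amps * r_ohms

low  = current_to_voltage(0.004)  # 4 mA  -> 0.996 V
high = current_to_voltage(0.020)  # 20 mA -> 4.98 V
print(low, high)
```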

Fig. 16.3 Process current signals, usually 0 mA to 20 mA or 4 mA to 20 mA, are converted to voltage signals (VMEAS = IS·R) by passing the current IS from the current-output device through a precision resistor R

16.2.3 Advantages and Limitations of LVDT


Advantages An LVDT has a relatively low cost due to its popularity, is solid and robust, and is
capable of working in a wide variety of environments. It has no frictional resistance, since the iron core
does not contact the transformer coils, resulting in an essentially infinite service life. It also has a high
signal-to-noise ratio, low output impedance, negligible hysteresis and, theoretically, infinitesimal
resolution (in reality, the displacement resolution is limited by the resolution of the amplifiers and
voltmeters used to process the output signal). Its response time is short, limited only by the inertia of
the iron core and the rise time of the amplifiers, and no permanent damage is done to the LVDT if
measurements exceed the designed range.

Limitations The core must be in direct or indirect contact with the measured surface, which is not always possible or desirable (although a non-contact thickness gauge can be realized by including a pneumatic servo to maintain the air gap between a nozzle and the workpiece). Dynamic measurements are limited to no more than 1/10 of the LVDT resonant frequency, which in most cases results in a 2-kHz frequency cap.

Applications Although the LVDT is a displacement sensor, many other physical quantities can be
sensed by converting displacement to the desired quantity via thoughtful arrangements. Several exam-
ples can be given, viz., extensometers, temperature transducers, butterfly valve control, and servo-valve
displacement sensing. Measurement of the deflection of beams, strings, or rings, as well as load cells, force transducers, and pressure transducers, are discussed in detail in Chapters 17, 19, and 21 respectively.
Measurement of thickness variation of workpieces (Fig. 16.4) can be done using dimension gauges, as can thickness and profile measurements and product sorting by size.
Fluid-level measurement (Fig. 16.5) can be done using an LVDT coupled to a float, as can position sensing in hydraulic cylinders.

Fig. 16.4 Profile gauge (LVDT riding on the workpiece)
Fig. 16.5 Fluid-level gauge (LVDT coupled to a float on the fluid surface)

16.3 INTRODUCTION TO INTERMEDIATE MODIFYING DEVICES

A typical measurement system consists of individual sensors with necessary data acquisition and signal-
conditioning, multiplexing, data conversion, data processing, data handling and associated transmis-
sion, storage, and display systems. In order to optimize the characteristics of a system in terms of
performance, handling capacity and cost, the relevant sub-systems may often be combined. The analog

data is generally acquired and converted to digital form for the purposes of processing, transmission,
display and storage.
Processing of data may consist of a large variety of operations from simple comparison to compli-
cated mathematical manipulations. It can be for such purposes as collecting information (averages, sta-
tistics, etc.), converting the data into a useful form (e.g., calculation of efficiency of a prime mover from
speed, power input and torque developed), using data for controlling a process, performing repeated
calculations to separate out signals buried in noise, generating information for displays and a variety of
other goals. Data may be transmitted over long distances (from one location to another) or short dis-
tances (from a test centre to a nearby computer). The data may be displayed on a digital panel meter or
as a part of a cathode ray tube (CRT) presentation. The same may be stored in either raw or processed
form, temporarily (for immediate use) or permanently (for later reference).

16.3.1 Data Acquisition System


Data acquisition is generally related to the process of collecting the input data in digital form, as rapidly,
accurately, completely and economically as necessary. The basic instrumentation used may be a standard
digital panel meter (DPM) with digital outputs, a shaft digitizer, or a sophisticated high-speed, high-resolution device. To match the input requirement of the converter with the output available from the sensor, some form of scaling and offsetting is necessary; this is performed with an amplifier or
attenuator. For converting analog information from more than one source, either additional converters
or multiplexers may be required; to increase the speed with which information is to be accurately con-
verted, a sample-and-hold circuit may be desired or become a necessity. In the case of extra-wide range
analog signals, logarithmic conversion has to be resorted to. A schematic block diagram of generalized
data acquisition system is shown in Fig. 16.6.
The characteristics of data acquisition systems depend upon both the properties of the analog
data itself and on the processing to be carried out. Based on the environment, a broad classification

Fig. 16.6 Generalized data acquisition system (transducers 1 to n feed signal conditioners 1 to n; the conditioned signals pass through an analog multiplexer, under program control, to an ADC, whose output drives a digital display, magnetic tape, data transmission for a computer interface, or a recorder)

divides data acquisition systems into two categories, viz., those suited to favorable environments
(minimum radio frequency interference and electromagnetic induction) and those intended for hostile
environments. The former category may include, among others, laboratory instrument applications,
test systems for gathering long-term drift information on zeners, high-sensitivity calibration tests, and
research or routine investigations, such as ones using mass spectrometers and lock-in amplifiers. In
these, the system designers’ tasks are oriented more towards making sensitive measurements rather
than to the problems of protecting the integrity of the analog data. The second category specifically
includes measurements protecting the integrity of the analog data under the hostile conditions. Situ-
ations of this nature arise in industrial process control systems; aircraft control systems, turbovisory
instrumentation in electrical power stations, and a host of other measurements to be carried out under
industrial environments.
Measurements under hostile conditions often require devices capable of wide temperature-range
operation, excellent shielding and redundant paths for critical measurements, and considerable pro-
cessing of the digital data. In addition, digital conversion of the signal at early stages, thus making full
use of high-noise immunity of digital signals, as well as considerable design effort in order to reduce
common mode errors and avoidable interferences, can also enhance performance and increase reli-
ability. On the other hand, laboratory measurements are conducted over narrower temperature ranges,
with much less ambient electrical noise, employing high sensitivity and precision devices for higher
accuracies and resolution. The preservation of an appropriate signal-to-noise ratio may still have to be
achieved with due emphasis on design and measurement techniques. The important factors that decide
the configuration and the sub-system of the data acquisition system are the following:

1. Resolution and Accuracy The resolution desired for a measurement is often governed by
the overall accuracy required from the system and is typically three to five times better than the desired
accuracy value. The resolution obtainable from a measurement is not only dependent on the resolution
that the measuring device is capable of, but also on the relative time stability of the measurand itself.
When a time-varying stationary parameter is under observation, improvement in stability, and in turn
resolution, is possible by statistical averaging of the measured values. Accuracy being the closeness with
which a measured value agrees with a specified standard, absolute accuracy can always be brought into
a system which has sufficient stability and linearity, by providing a calibration facility. Once the system
has been calibrated, accuracy impairments will depend on the stability of the system variants, such as
gain stability and reference stability. Since the resolution with which a measurement can be made often
decreases for higher measurement rates, for the same cost, the need for a specific resolution desired has
to be examined with great care and full understanding of the requirement.

2. The Number of Channels to be Monitored The number of channels on which mea-


surements are to be carried out and the desired rate at which each channel should be measured decide
the overall bit rate of the converters necessary to be used. If n channels need to be monitored with a
rate less than k readings (measured value) per second in any of the channels, the bit rate of the convert-
ers should be, as a guideline, approximately 3 nkP bits/s, where P refers to the resolution in bits for the
readings desired to be taken.
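The guideline above reduces to a one-line calculation; the sketch below (plain Python, with illustrative channel counts and rates) applies the approximate 3nkP rule:

```python
# Guideline bit rate for the converters: approximately 3*n*k*P bits/s,
# where n = number of channels, k = readings per second per channel,
# and P = resolution in bits per reading (the rule stated in the text).

def converter_bit_rate(n_channels, readings_per_s, bits_per_reading):
    return 3 * n_channels * readings_per_s * bits_per_reading

# e.g., 16 channels, 100 readings/s each, 12-bit resolution:
print(converter_bit_rate(16, 100, 12))  # 57600 bits/s
```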

3. Sampling Rate per Channel When the sample rate desired from some of the channels is lower by a factor of two or more, it may be possible to employ sub-commutation in order to reduce the effective number of channels that have to be scanned at the highest rate.

4. Signal-conditioning Requirement of Each Channel

5. Cost

16.4 SIGNAL-CONDITIONING SYSTEMS

This article first gives a general overview of signal-conditioning and then discusses some of the converter technologies. Experienced users of signal-conditioning systems may skip the introductory part and refer directly to the critical-technologies section.
Signal-conditioning is one of the most important and most overlooked components of a data-
acquisition system. With it, we can bring real-world signals into our digitizer. Many sensors require
special signal-conditioning technology, and no instrument has the capability to provide all types
of signal-conditioning to all sensors. For example, thermocouples produce very low-voltage sig-
nals, which require amplification, filtering and linearization. Other sensors, such as strain gauges
and accelerometers, require power in addition to amplification and filtering, while other signals may
require isolation to protect the system from high voltages. No single instrument can provide the
flexibility required to make all of these measurements. However, front-end signal-conditioning can
combine the necessary technologies to bring these various types of signals into a single data acquisi-
tion system.
Not all signal-conditioning requirements/options are equal. Most choices are non-intelligent, parallel-in/parallel-out configurations that offer the bare minimum of functionality for a select few signals or sensor types. However, for computer-based measurement and automation, we want a system designed to take advantage of the latest PC-based data acquisition and instrumentation technologies. This system should have programmable input settings, the ability to be automatically detected by your computer, and tight integration with your software to handle scaling and channel management. The system under consideration should offer all of the conditioning technologies that are needed, proof of their accuracy, and the capability to take advantage of the advances in high-speed digitizers.

16.4.1 Defining Signal-Conditioning


As discussed in the earlier article, most signals require some form of preparation before they can be
digitized. As previously mentioned, thermocouple signals are very small voltage levels that must be
amplified before they can be digitized. Other sensors (transducers), such as RTDs, thermistors, strain
gauges and accelerometers, require electrical power to operate. Even pure voltage signals can require
special technologies for blocking large common-mode signals or for safely measuring high voltages.
Transducer characteristics define many of the signal-conditioning requirements of your measurement
system which forms the basis of further system design and installation. Table 16.1 summarizes the

examples of basic characteristics and conditioning requirements of some common transducers. All of these preparation technologies are forms of signal-conditioning.

Table 16.1 Electrical characteristics and basic signal-conditioning requirements of common transducers

Sensor | Electrical Characteristics | Signal-Conditioning Requirements
Thermocouple | Low-voltage output; low sensitivity; non-linear output | Reference temperature sensor (for cold-junction compensation); high amplification; linearization
RTD | Low resistance (100 ohms typical); low sensitivity; non-linear output | Current excitation; four-wire/three-wire configuration; linearization
Strain gauge | Low-resistance device; low sensitivity; non-linear output | Voltage or current excitation; high amplification; bridge completion; linearization; shunt calibration
Current-output device | Current loop output (4 mA to 20 mA typical) | Precision resistor
Thermistor | Resistive device; high resistance and sensitivity; very non-linear output | Current excitation, or voltage excitation with a reference resistor; linearization
Active accelerometer | High-level voltage or current output; linear output | Power source; moderate amplification
AC Linear Variable Differential Transformer (LVDT) | AC voltage output | AC excitation; demodulation; linearization

Because of the vast array of signal-conditioning technologies, the role and need for each technology
can quickly become confusing. Therefore, we’ve provided a list of common types of signal-conditioning,
their functionality, and examples of when you need them.

1. Amplification When the voltage levels being measured are very small, amplification is used to
maximize the effectiveness of the digitizer. By amplifying the input signal, the conditioned signal
uses more of the effective range of the analog-to-digital converter (ADC) and enhances the accuracy
and resolution of the measurement. Typical sensors that require amplification are thermocouples and
strain gauges.
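The benefit of amplification can be seen numerically: the ADC step size stays fixed, but referred back to the sensor it shrinks by the gain. A minimal sketch, assuming an illustrative 12-bit, 0–5 V digitizer (these values are assumptions, not from the text):

```python
# Effective input-referred resolution of an ADC improves with front-end gain:
# the ADC step size (LSB) is fixed at the digitizer, but is divided by the
# gain when referred back to the sensor. The 12-bit, 0-5 V figures here are
# illustrative assumptions.

def input_referred_lsb(full_scale_v, n_bits, gain):
    lsb = full_scale_v / (2 ** n_bits)  # ADC step size at the digitizer
    return lsb / gain                   # referred back to the sensor input

print(input_referred_lsb(5.0, 12, 1))    # about 1.22 mV without amplification
print(input_referred_lsb(5.0, 12, 100))  # about 12.2 uV with a gain of 100
```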

2. Attenuation Attenuation is the opposite of amplification. It is necessary when the voltages to


be digitized are beyond the input range of the digitizer. This form of signal-conditioning diminishes
the amplitude of the input signal so that the conditioned signal is within the range of the ADC. Attenu-
ation is necessary for measuring high voltages.

3. Isolation Voltage signals well outside the range of the digitizer can damage the measurement
system and harm the operator. For that reason, isolation is usually required in conjunction with attenu-
ation to protect the system and the user from dangerous voltages or voltage spikes. Isolation may also
be required when the sensor is on a different ground plane from the measurement system (such as a thermocouple mounted on an engine).

4. Multiplexing Typically, the digitizer is the most expensive part of a data-acquisition system.
By multiplexing, we can sequentially route a number of signals into a single digitizer, thus achieving a
cost-effective way to greatly expand the signal count of your system. Multiplexing is necessary for any
high-channel-count application.
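The sequential routing described above can be sketched in a few lines; `read_channel` here is a hypothetical stand-in for real digitizer access, not an API from the text:

```python
# A multiplexer routes many input channels, one at a time, into a single
# digitizer. This round-robin scan is a schematic sketch; read_channel is a
# hypothetical stand-in for real hardware access.

def scan(channels, read_channel):
    """Sequentially sample every channel once and return the readings."""
    return [read_channel(ch) for ch in channels]

# Simulated sensors standing in for real inputs:
fake_inputs = {0: 0.12, 1: 3.3, 2: 1.7}
readings = scan([0, 1, 2], lambda ch: fake_inputs[ch])
print(readings)  # [0.12, 3.3, 1.7]
```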

5. Filtering Filtering is required to remove unwanted frequency components from a signal, pri-
marily to prevent aliasing and reduce signal noise. Thermocouple measurements typically require a
lowpass filter to remove power line noise from the signals. Vibration measurements normally require an
antialiasing filter to remove signal components beyond the frequency range of the acquisition system.
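As a software analogue of the hardware filters described above, a minimal single-pole low-pass filter illustrates how high-frequency noise is attenuated (the smoothing factor alpha is an illustrative assumption):

```python
# A minimal software low-pass filter (single-pole IIR), illustrating the idea
# of attenuating high-frequency noise; hardware signal conditioners do this
# in analog circuitry before the ADC. The smoothing factor is illustrative.

def lowpass(samples, alpha=0.1):
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)  # move a fraction of the way toward the input
        out.append(y)
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
print(lowpass(noisy))  # smoothed sequence hovering near 1.0
```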

6. Excitation Many sensors, such as RTDs, strain gauges, and accelerometers, require some form
of power to make a measurement. Excitation is the signal-conditioning technology required to provide
this power. This excitation can be a voltage or current source, depending on the sensor type.

7. Linearization Some types of sensors produce voltage signals that are not linearly related to
the physical quantity they are measuring. Linearization, the process of interpreting the signal from the
sensor as a physical measurement, can be done either with signal-conditioning or through software.
Thermocouples are the classic example of a sensor that requires linearization.
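Software linearization of the kind mentioned above often evaluates an inverse polynomial on the sensor voltage; the sketch below uses purely illustrative placeholder coefficients, not real thermocouple constants:

```python
# Linearization in software: convert a non-linear sensor voltage to the
# physical quantity via an inverse polynomial. The coefficients below are
# purely illustrative placeholders, not real thermocouple constants.

def linearize(voltage, coeffs):
    """Evaluate c0 + c1*v + c2*v^2 + ... using Horner's method."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * voltage + c
    return result

# Hypothetical coefficients for demonstration only:
coeffs = [0.0, 25.0, -0.5]
print(linearize(2.0, coeffs))  # 0 + 25*2 - 0.5*4 = 48.0
```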

8. Cold-Junction Compensation Another technology required for thermocouple measure-


ments is cold-junction compensation (CJC). Any time a thermocouple is connected to a data acquisition
system, the temperature of the connection must be known in order to calculate the true temperature
the thermocouple is measuring. A built-in CJC sensor must be present at the location of the connec-
tions.

9. Simultaneous Sampling When it is critical to measure two or more signals at the same
instant in time, simultaneous sampling is required. Front-end signal-conditioning can provide a much
more cost-effective simultaneous sampling solution than purchasing a digitizer for each channel. Typi-
cal applications that might require simultaneous sampling include vibration measurements and phase-
difference measurements.

Most sensors require a combination of the previously described signal-conditioning technologies.


Again, the thermocouple is the classic example because it requires amplification, linearization, cold-
junction compensation, filtering, and sometimes isolation. Ideally, a good measurement platform should
give you the ability to select the type of signal-conditioning that is needed for your application. In some
systems, front-end signal-conditioning is an option, but in other systems, front-end signal-conditioning
is a necessity to make the required measurements. As a rule of thumb, the measurement system should
include signal-conditioning if we are planning to use any of the following sensors: thermocouples,
RTDs, thermistors, strain gauges, force/load/torque transducers, LVDTs/RVDTs/resolvers, accelerometers, mixed low-voltage/high-voltage sources, current sources, or resistance sources.

16.4.2 Technologies used for Proper Signal-Conditioning


When the signal-conditioning is necessary for a data acquisition system, we should choose a system
that takes advantage of the latest advances in computer-based measurement and automation. For a
signal-conditioning platform to fully exploit these advances, there are several critical technologies that it
should possess. These critical technologies ensure that you get a high-performance signal-conditioning
platform that integrates tightly with the rest of your system, all at a reasonable total cost. The primary
technologies we will examine in depth are integration, calibration, connectivity, switching, isolation,
expandability, bandwidth, software, and ease of use. By understanding each of these technologies, you
will be able make an informed decision on the purchase of a front-end signal-conditioning system.

1. Integration The ability of a signal-conditioning system to integrate easily with the rest of
your system is a must. Your system should be modular, thus giving you the ability to
choose the types of signal-conditioning necessary for your system. It is also critical to have a system
that accommodates mixed signal types. For example, the system should be able to connect currents,
high voltages, various sensors, analog outputs, digital I/O, and switching all into the same platform.

2. Calibration One of the most critical technologies that a signal-conditioning system should
possess is the ability to be easily and accurately calibrated. Most measurement devices are calibrated at
the factory, but the accuracy immediately starts to drift with time and temperature changes. To make the
most accurate measurements possible, it is necessary to periodically calibrate the entire data acquisition
system. If the system has precision onboard voltage references, the operator can adjust the measure-
ment system to compensate for temperature changes. In addition, you must have access to external
calibration services to keep your system performing up to the manufacturer’s specifications year after
year. It is very important to learn the calibration process for any signal-conditioning system under
consideration, because that is the only way to ensure that the investment contains the technology needed to make accurate and reliable measurements.

3. Connectivity Because connecting signals to the signal-conditioning system can be a major


issue, it is critical to select a platform that gives you the connectivity options you need. A good front-
end signal-conditioning system should give the operator a wide range of connectivity options, including
thermocouple plugs, screw terminals, and BNC connectors.

4. Switching In today’s demanding test environments, the ability to route signals easily through-
out a measurement system is a technology that can lead to huge improvements in test times. As an
example, consider a case where a unit under test (UUT ) must be subjected to four separate mea-
surements in the testing process. Without the proper technology, the UUT must be reconnected to
each different measurement device for each test. Nowadays with state-of-the art switching technol-
ogy, the operator can not only route the UUT leads automatically to each measurement device in
turn, but can also test several UUTs at the same time. Thus, we could achieve more efficient use of
your test equipment, faster test times, and less user intervention. The selection of a signal-condi-
tioning system that offers this technology can have a huge impact on the overall performance of
the system.

5. Isolation Another important technology to consider is isolation. When we are measuring sig-
nals that either are high-voltage signals or are subject to voltage spikes, it is critical that those signals are
isolated from the rest of your system. Inadequate isolation compromises the safety of the operator, as
well as the integrity of the entire data-acquisition system. When determining the isolation requirements
of your system, it is imperative to have reliable and accurate isolation specifications, including both a
safe working voltage rating and an installation rating.

6. Expandability Any front-end signal-conditioning system should be easily expandable. Adding


more channels or different types of signals to the UUT system must not require a massive overhaul of
your data-acquisition system. With the right technology, expanding the system should be as simple as
plugging in another module.

7. Bandwidth In addition to being expandable, a system should also have the bandwidth to handle
the data throughput from a high-channel-count system. The bandwidth should also be high enough
to accommodate future growth in channel count. System bandwidth is typically expressed in samples/
second (Hz). To determine the minimum necessary bandwidth of the system, we should multiply the
total number of expected channels by the maximum sampling rate needed on an individual channel.
For a high-channel-count system, the required bandwidth for a modest acquisition rate can quickly
reach several hundred kHz. Bandwidth is an often overlooked, but extremely important, technology to
consider when selecting a signal-conditioning system.
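The bandwidth rule above reduces to a single multiplication; a short sketch with illustrative numbers:

```python
# Minimum aggregate bandwidth of the system = expected channel count times
# the maximum per-channel sampling rate (the rule described in the text).

def min_bandwidth(n_channels, max_rate_per_channel_hz):
    return n_channels * max_rate_per_channel_hz

# e.g., 200 channels at a modest 1 kHz each already needs 200 kS/s:
print(min_bandwidth(200, 1000))  # 200000 samples/s
```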

8. Software A large portion of the total cost of a test and measurement system is application
development. To keep application development costs to a minimum, software tools must be used that
maximize system productivity. The signal-conditioning system should be designed to integrate tightly
with these software tools. Only with the capability to fully control the signal-conditioning system under
consideration, can the software application take full advantage of the latest technologies in computer-
based measurement and automation.

9. Configuration/Installation Finally, any signal-conditioning system should be easy to use.


No one can afford to lose time due to overly complex installation or configuration issues. An ideal

conditioning system will poll the hardware, report which equipment we have, and provide us a software
interface for setting up all signal-conditioning settings. Configuring channels through software, with the capability to set up channel names and to scale readings to engineering units, should also be supported.

16.4.3 Signal-Conditioning Using Computer-based Data Acquisition


Systems
Computer-based measurement systems are used in a wide variety of applications. In laboratories, in
field services and on manufacturing plant floors, these systems act as general-purpose measurement
tools well suited for measuring voltage signals. However, many real-world sensors and transducers
require signal-conditioning before a computer-based measurement system can effectively and accu-
rately acquire the signal. The front-end signal-conditioning system can include functions such as signal
amplification, attenuation, filtering, electrical isolation, simultaneous sampling, and multiplexing. In
addition, many transducers require excitation currents or voltages, bridge completion, linearization, or
high amplification for proper and accurate operation. Therefore, most computer-based measurement
systems include some form of signal-conditioning in addition to plug-in data acquisition (DAQ) devices, as shown in Fig. 16.7.

Fig. 16.7 Signal-conditioning is an important component of a PC-based DAQ system (physical phenomena → transducers → signal conditioning → data acquisition device → personal computer)

16.4.4 Methods of Signal-conditioning


The two methods of signal-conditioning discussed in this article, which can be applied with particular advantage, are ratiometric conversion and logarithmic compression.

1. Ratiometric Conversion Consider a transducer using four strain gauges in a Wheatstone


bridge network. The output voltage is a function of the change in resistance of each arm and the excita-
tion voltage of the bridge. When the strain gauges are under maximum but constant unbalance and if
the excitation voltage varies by X %, the output of the bridge also varies by X %. However, if the bridge
output is conditioned in such a way that the output of the signal amplifier is a voltage proportional to the
strain only and independent of the excitation voltage, the system accuracy improves since the fluctuation
in the excitation voltage does not affect the sensitivity of the system. The analog method of achieving this
is to incorporate an analog divider to which the amplifier output and the excitation voltage are fed so that

the output of the divider is a ratio of the amplifier output voltage to the excitation voltage. An alternative
method, as shown in Fig. 16.8, is to feed the bridge-excitation voltage as an external reference voltage for the analog-to-digital (A/D) converter, in which the conversion factor is inversely proportional to the reference voltage. The system sensitivity is then independent of the fluctuations in bridge-excitation voltage.

Fig. 16.8 Ratiometric conversion (a Wheatstone bridge R1–R4 feeds instrumentation amplifiers and a buffer, with the bridge-excitation voltage supplied as the reference to the ADC)
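The idea behind ratiometric conversion can be sketched with an idealized bridge model (output proportional to strain times excitation, an assumption for illustration): dividing the bridge output by the excitation cancels excitation drift.

```python
# Ratiometric measurement: report the bridge output as a fraction of the
# excitation voltage, so excitation drift cancels. The bridge model below is
# an idealized sketch (output proportional to strain times excitation).

def bridge_output(strain, excitation_v, sensitivity=2.0):
    return sensitivity * strain * excitation_v  # idealized bridge signal

def ratiometric(strain, excitation_v):
    return bridge_output(strain, excitation_v) / excitation_v

# The same strain gives the same ratiometric reading even if the
# excitation drifts by 10%:
print(ratiometric(1e-3, 10.0))  # 0.002
print(ratiometric(1e-3, 11.0))  # 0.002 (unchanged despite drift)
```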

2. Logarithmic Compression A logarithmic compression circuit enables the measurement


of a fractional change in the input as a percentage of the input magnitude rather than as a percentage of a range. Consider an input in the range of 100 μV to 100 mV, for which the output corresponds to zero volts for a 100 μV input and 3 V for a 100 mV input, if the logarithmic conversion gain is 1 V per decade. Now consider a 1% change in the input from 100 mV to 101 mV. The output of the log amplifier would change by
ΔV = [log10 (101/100)] × 1 V = 4.3 mV
Since the output change is related to the ratio of the inputs, it is evident that the change in the output is the same, viz., 4.3 mV, whether the input changes from 10.0 mV to 10.1 mV or from 100 μV to 101 μV. If the log amplifier output is converted into a digital output using a 12-bit BCD converter, the resolution of the converter would be 3 V/10^3 = 3 mV for a 3-V full-scale input, provided the output of the log amplifier is scaled up appropriately. With this resolution of the converter, it is possible to monitor and record changes as low as 1 μV for an input of 100 μV, or 10 μV for an input of 1 mV. In the absence of log conversion, if the 100 mV input were scaled up to yield the full scale of the 12-bit converter, the resulting resolution would have been only 100 μV [= 100 mV/10^3]. Thus, a 100-to-1 improvement in resolution can be effected by logarithmic compression, as indicated in Figs 16.9(a) and (b).
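The 4.3-mV figure above follows directly from the decade-log relation; a short numerical check:

```python
# The log-amplifier output change for a small input change, with a gain of
# 1 V per decade, is dV = log10(V2/V1) * 1 V (the relation used in the text).

import math

def log_amp_delta_mv(v1, v2, volts_per_decade=1.0):
    return math.log10(v2 / v1) * volts_per_decade * 1000.0  # in mV

# A 1% input change gives about 4.3 mV regardless of the absolute level:
print(round(log_amp_delta_mv(100e-3, 101e-3), 1))  # 4.3
print(round(log_amp_delta_mv(100e-6, 101e-6), 1))  # 4.3
```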

Fig. 16.9 Logarithmic compression: (a) a 100 μV to 100 mV input passes through a log amplifier (1 V/decade, 0–3 V output) and a gain stage (0–10 V) to a 12-bit BCD ADC; the input resolution varies from 0.7 μV to 700 μV; (b) the same input passes through a ×1000 amplifier directly to the ADC; the input resolution is a uniform 100 μV

This cannot be achieved without loss of performance elsewhere. For example, while the log amplifier can enhance the resolution at low inputs, at high inputs (99.9 mV) it is definitely poorer. At this input, a one least-significant-bit (LSB) change in the output of the ADC can occur only if the input is decreased to 99.2 mV, i.e., an equivalent resolution of only 700 μV. (This would have been uniformly 100 μV without log conversion.) The log conversion in effect distributes the resolution on a 'percentage of reading' basis, as against a 'percentage of full scale' basis with direct A/D conversion. Such conditioning can be advantageous in systems possessing an output relationship involving the logarithm of the measurand, or where a moderate-accuracy measurement (about 1%) is desired over a wide range (1 : 10^5).
Since the log function is inherently unipolar, other types of compression can be employed when handling bipolar inputs. A particular case of interest is the sinh^-1 function, which can be obtained using complementary logarithmic transconductors.

16.4.5 ADC and DAC


1. Analog-to-digital converter (ADC) ADCs are required to transform information from
analog to binary/digital form. ADCs receive input from transducers within intelligent instruments
in analog form, perform calculations on the analog signal, and then digitally encode the output in a
format that computerized systems can accept and process. Analog-to-digital converter chips are used
in a variety of applications, including data-acquisition, communications, instrumentation, and signal
processing. To cover a broad range of performance needs, ADCs are available in different resolutions,
bandwidths, accuracies, packaging, power requirements, and temperature ranges.
Analog quantities are continuous functions of time, whereas digital quantities are discrete and vary in equal steps. Each digital number is a fixed sum of equal steps, defined by that number. Therefore, converting an analog signal to a digital number involves quantization, and the discrete output levels
can be identified by a set of numbers such as a binary code. These two processes of quantization and
coding represent the basic operation of an ADC. An input–output form of ADC is shown in Fig. 16.10.
The X-axis shows the analog input shown and the Y-axis shows the discrete output levels. This
Intermediate Modifying and Terminating Devices 419

characteristic is ideal, with analog decision levels D at values of 0.5, 1.5, 2.5, etc. In-between values cannot be
coded, as decision levels are set values, which lie about the true levels. A quantizer with a binary output code
has 2^n discrete output levels with 2^n − 1 analog decision levels, where n is the number of bits in the code.
If the input to the quantizer is moved through its full range of values and subtracted from the discrete output
levels, an error signal results, as shown in Fig. 16.10.

Fig. 16.10 Discrete output levels (output codes 000–110 plotted against the analog level, in units of D, with decision levels at 0.5, 1.5, 2.5, …, 6.5)

The difference between the analog voltages of two successive/adjacent levels is termed the quantization
interval. The digital output is not always proportional to the analog input and thus there will be an error,
called quantization error. This error depends upon the quantization levels, or the resolution, of the quantizer.
When the input is centered over the interval, the quantization error is zero, and the maximum error is D/2.
To perform the operations of quantizing and coding the signal, an ADC requires a time termed the aperture time.
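The quantization relations above can be sketched numerically; the 3-bit width and 1-V step size D used below are illustrative values, not taken from the text:

```python
def quantize(v, n_bits=3, step=1.0):
    """Output code for analog input v (rounded, clipped to the code range)."""
    code = int(v / step + 0.5)              # nearest level; decisions at k + 0.5
    return max(0, min(code, 2**n_bits - 1))

def quantization_error(v, n_bits=3, step=1.0):
    """Reconstructed level minus the true input."""
    return quantize(v, n_bits, step) * step - v

print(quantize(2.3))             # -> 2
print(quantization_error(2.0))   # -> 0.0 (input centered on a level)
print(quantization_error(2.5))   # -> 0.5 (at a decision level: the maximum, D/2)
```

Note that an input sitting exactly on a decision level (here 2.5 V) produces the worst-case error of D/2.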
Figure 16.11(a) illustrates the analog signal, and Fig. 16.11(b) shows the sampling pulses as a clock signal, which
supplies the instants at which sampling occurs. The sampling process is identical to
multiplying the analog signal by the train of pulses of unit magnitude; the result
is a series of narrow pulses (a modulated signal), shown in Fig. 16.11(c). The analog signal
is sampled and held until the next sample pulse occurs, with the result shown in Fig. 16.11(d).
There are four common methods used in ADCs: the successive-approximation (potentiometric) method,
voltage-to-frequency conversion (integrating type), voltage-to-time conversion (ramp type), and the dual-slope
integration method. In terms of performance, analog-to-digital converter selection may vary according to
resolution, sample rate, input voltage range, operating temperature, and a number of other variables, viz.,
signal-to-noise ratio (SNR), signal-to-noise and distortion ratio (SINAD), and differential non-linearity
(DNL). The potentiometric (successive-approximation) ADC is probably the most widely used in general practice
due to its combination of high resolution and high conversion speed.
Successive-approximation register (SAR) and flash are two common architectures for analog-to-digital
converters. SAR architecture uses a single comparator and multiple conversion cycles. Flash, or
parallel, architecture uses multiple comparators and a single conversion cycle: a flash ADC uses a
set of 2^n − 1 comparators to measure an analog signal to a resolution of n bits. Consequently, flash ADCs
are faster than SAR ADCs, but require a greater number of comparators. Pipeline architecture overcomes
some of the limitations of flash architecture by dividing the conversion task into several consecutive
stages. Each stage consists of a sample-and-hold circuit, an m-bit ADC (e.g., a flash converter), and
an m-bit digital-to-analog converter (DAC). In this way, pipelined converters achieve higher resolutions
than flash converters containing a similar number of comparators. However, pipeline analog-to-digital
converter chips increase the total conversion time from one cycle to p cycles.
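The SAR search described above can be sketched in a few lines; the 8-V range and 3-bit width are illustrative values, and a real converter performs each comparison in hardware with its internal DAC:

```python
def sar_convert(v_in, v_ref=8.0, n_bits=3):
    """Binary-search the n-bit code for v_in against an internal DAC."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):    # test bits MSB -> LSB: n cycles
        trial = code | (1 << bit)            # tentatively set this bit
        dac_out = v_ref * trial / 2**n_bits  # DAC voltage for the trial code
        if v_in >= dac_out:                  # comparator decision
            code = trial                     # keep the bit
    return code

print(sar_convert(5.2))   # -> 5 (5.2 V in an 8 V range, 1 V per step)
```

One comparator decision per bit is exactly why an n-bit SAR conversion needs n cycles, while a flash converter resolves all bits in one.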
Another approach, subranging, combines flash, SAR, and pipeline architectures and breaks n-bit con-
versions into m-bit subconversions. Like pipeline architecture, subranging consists of several cascading
420 Metrology and Measurement

Fig. 16.11 The signal sampling-and-holding process: (a) analog signal, (b) sampling pulses, (c) sampled signal, (d) sampled-and-held signal

stages, each of which uses a low-resolution analog-to-digital converter to estimate the input, and an
accurate DAC to convert the output. Subranging also calculates the residue, the difference between
the estimated input and the actual output. A gain block is used to amplify and restore the residue to an
appropriate level for further estimation by the next stage.
Sigma-delta architecture takes a fundamentally different approach than other ADC architectures.
Sigma-delta converters consist of an integrator, a comparator, and a single-bit DAC. The DAC output
is subtracted from the input signal, the resulting signal is integrated, and the comparator converts the
integrator output voltage to a single-bit digital output (1 or 0). The resulting bit becomes the DAC’s
input, and the DAC’s output is subtracted from the ADC’s input signal. With sigma-delta architecture,
the digital data from the ADC is a stream of ones and zeros, and the value of the signal is proportional
to the density of digital ones from the comparator. This bit stream data is then digitally filtered and
decimated to result in a binary-format output.
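The first-order sigma-delta loop just described can be sketched numerically for a dc input (ideal components; all values are illustrative). The density of ones in the bit stream tracks the input level:

```python
def sigma_delta(v_in, n_samples=1000, v_ref=1.0):
    """Fraction of ones produced for a dc input v_in in [-v_ref, +v_ref]."""
    integrator, dac, ones = 0.0, 0.0, 0
    for _ in range(n_samples):
        integrator += v_in - dac            # integrate (input - DAC feedback)
        bit = 1 if integrator >= 0 else 0   # comparator -> 1-bit output
        ones += bit
        dac = v_ref if bit else -v_ref      # single-bit DAC fed back
    return ones / n_samples

# A 0.5 V dc input gives a ones density of (0.5 + 1) / 2 = 0.75:
print(round(sigma_delta(0.5), 2))   # -> 0.75
```

Digitally filtering and decimating this bit stream, as the text notes, recovers a conventional binary-format output.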

2. Digital-to-Analog Converter (DAC) DACs transform information from digital to


analog form. They convert signals that have two defined states, on and off, into signals that have a theo-
retically infinite number of states. For example, modems convert digital computer data that consists of
ones and zeros into audio frequency (AF) tones that can be transmitted over telephone lines. Digital-
to-analog converters are also used in digital signal processing to improve the intelligibility and fidelity
of analog signals. First, analog-to-digital converter chips (ADCs) are used to convert analog signals into
digital form. Next, special circuitry is used to improve these signals. Finally, digital-to-analog converters
are used to transform the digital impulses back into analog form.
There are several architectures for digital-to-analog converter chips. Some DACs use a resistive
ladder network (R-2R) in which each segment consists of two resistors—one with a value of R and
one with a value of 2R. Other DACs use a string of resistors, each of which has a value of R.
Current steering is an architecture that uses an internal current source to deliver the output cur-
rent. Sigma-delta architecture takes a fundamentally different approach. In their most basic form,
sigma-delta converters consist of an integrator, a comparator, and a single-bit DAC. The output of
the digital-to-analog converter is subtracted from the input signal. The resulting signal is integrated,
and the output voltage is converted to a single-bit digital output by the comparator. The resulting
bit becomes the input to the DAC, and the output is subtracted from the input signal. Performance
specifications for digital-to-analog converter chips include resolution, settling time, differential
non-linearity (DNL), integral non-linearity (INL), power dissipation, reference access, and special
features. Resolution measures the number of discrete levels used to represent a signal and is usually
defined in bits.
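The resolution definition above can be illustrated with the ideal DAC transfer function (the binary weighting that an R-2R ladder realizes); the 8-bit width and 5-V reference below are assumed for illustration only:

```python
def dac_output(code, n_bits=8, v_ref=5.0):
    """Ideal DAC: map an n-bit code (0 .. 2**n - 1) to an analog voltage."""
    if not 0 <= code < 2**n_bits:
        raise ValueError("code out of range")
    return v_ref * code / 2**n_bits

print(dac_output(1))     # one LSB: 5/256 ≈ 0.0195 V
print(dac_output(128))   # mid-scale -> 2.5
```

Each extra bit of resolution halves the step (LSB) size, which is why resolution is quoted in bits.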
Digital-to-analog converter chips are available in a variety of integrated circuit (IC) package types.
Basic types include ball grid array (BGA), quad flat package (QFP), single inline package (SIP), and
dual inline package (DIP). Many packaging variants are available. For example, BGA variants include
plastic-ball grid array (PBGA) and tape-ball grid array (TBGA). QFP variants include low-profile quad
flat package (LQFP) and thin quad flat package (TQFP). DIPs are available in either ceramic (CDIP)
or plastic (PDIP). Other IC package types for digital-to-analog converter chips include small outline
package (SOP), thin small outline package (TSOP), and shrink small outline package (SSOP).

16.4.6 Operational Amplifier (op-amp)


The operational amplifier (op-amp) was designed to perform mathematical operations. Although now
superseded by the digital computer, op-amps are a common feature of modern analog electronics.
The op-amp is constructed from several transistor stages, which commonly include a differential-input
stage, an intermediate-gain stage and a push-pull output stage. The differential amplifier consists of a
matched pair of bipolar transistors or FETs. The push-pull amplifier transmits a large current to the
load and hence has a small output impedance.
The op-amp is a linear amplifier with V_out ∝ V_in. The dc open-loop voltage gain of a typical op-amp
is 10^7 to 10^8. The gain is so large that most often feedback is used to obtain a specific transfer function
and control the stability. The following kinds of amplifiers are used to perform appropriate functions
to get output in required forms.

1. Differential Amplifier The input and feedback connections are both made to the inverting (−)
input. The non-inverting input (+) is grounded through a resistor. This is what forces the inverting input
to be a virtual ground: the amplifier output voltage depends on the voltage difference between the two
inputs, rather than the absolute voltage at either input. As the input transistors inside the op-amp do actu-
ally require a very slight input current, there is a very slight corresponding voltage drop across the resistors
connected to those inputs.
The circuit shown in Fig. 16.12 is used for finding the difference of two voltages, each multiplied by
some constant (determined by the resistors).

Fig. 16.12 Differential amplifier

Differential Z_in (between the two input pins) = R1 + R2

V_out = V2 · [(Rf + R1) Rg] / [(Rg + R2) R1] − V1 (Rf / R1)   (amplified difference)

Whenever R1 = R2 and Rf = Rg,

V_out = (Rf / R1)(V2 − V1)

When R1 = Rf and R2 = Rg (including the previous conditions, so that R1 = R2 = Rf = Rg):

V_out = V2 − V1
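These relations are easy to check numerically under the ideal op-amp assumption; the resistor values below are illustrative:

```python
def diff_amp(v1, v2, r1, r2, rf, rg):
    """Output of the differential amplifier of Fig. 16.12 (ideal op-amp)."""
    v_plus = v2 * rg / (r2 + rg)                  # divider at the + input
    return v_plus * (r1 + rf) / r1 - v1 * rf / r1

# With R1 = R2 and Rf = Rg the output is (Rf/R1)(V2 - V1) = 10 x (3 - 1):
print(round(diff_amp(1.0, 3.0, r1=10e3, r2=10e3, rf=100e3, rg=100e3), 9))  # -> 20.0
```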

2. Inverting Amplifier Although the standard op-amp configuration is as an inverting


amplifier, there are some applications where such inversion is not wanted. Since the op-amp itself is
actually a differential amplifier, there is no reason why it cannot be configured to operate in a non-
inverting mode.

Figure 16.13 shows an inverting amplifier, represented by the triangle. It inverts the polarity of a voltage and
amplifies it (multiplies by a negative constant), producing an output voltage, V_out.

V_out = −V_in (Rf / R_in)

Z_in = R_in (because V− is a virtual ground)

A third resistor of value Rf ∥ R_in = Rf R_in / (Rf + R_in), added between the non-inverting input and
ground, while not necessary, minimizes errors due to input-bias currents.

Fig. 16.13 Inverting amplifier

3. Non-Inverting Amplifier A non-inverting amplifier (shown in Fig. 16.14) amplifies a voltage
(multiplies by a constant greater than 1).

V_out = V_in (1 + R2 / R1)

Z_in = ∞ (realistically, the input impedance of the op-amp itself, 1 MΩ to 10 TΩ)

Fig. 16.14 Non-inverting amplifier

A third resistor, of value Rf ∥ R_in, added between the V_in source and the non-inverting input,
while not necessary, minimizes errors due to input-bias currents.
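The ideal closed-loop gains of the inverting and non-inverting stages follow directly from the formulas above; the resistor values are illustrative:

```python
def inverting_gain(rf, rin):
    """Inverting amplifier: Vout/Vin = -Rf/Rin."""
    return -rf / rin

def non_inverting_gain(r1, r2):
    """Non-inverting amplifier: Vout/Vin = 1 + R2/R1."""
    return 1 + r2 / r1

print(inverting_gain(100e3, 10e3))     # -> -10.0
print(non_inverting_gain(10e3, 90e3))  # -> 10.0
```

Note that the non-inverting gain can never be less than 1, while the inverting gain can take any negative value.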

4. Summing Amplifier One of the most common applications for an op-amp is to algebraically
add two (or more) signals or voltages to form the sum of those signals. Such a circuit is known as a sum-
ming amplifier, or just as a summer. The source of these signals might be anything at all. Common input
sources are another op-amp, some kind of sensor circuit, or an initial constant value. Since we don't have
the first two available at this time, we'll use the third source for this experiment. The point of using an
op-amp to add multiple input signals is to avoid interaction between them, so that any change in one
input voltage will not have any effect on the other input.
Refer to Fig. 16.15, in which a summing amplifier sums several (weighted) voltages.

Fig. 16.15 Summing amplifier

V_out = −Rf (V1/R1 + V2/R2 + … + Vn/Rn)

When R1 = R2 = … = Rn, and Rf is independent,

V_out = −(Rf / R1)(V1 + V2 + … + Vn)

When R1 = R2 = … = Rn = Rf,

V_out = −(V1 + V2 + … + Vn)

The output is inverted. The input impedance is Zn = Rn for each input (V− is a virtual ground).
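A numerical sketch of the weighted-sum relation above (ideal op-amp, illustrative resistor values):

```python
def summing_amp(voltages, resistors, rf):
    """Vout = -Rf * sum(Vi / Ri) for the ideal inverting summer."""
    return -rf * sum(v / r for v, r in zip(voltages, resistors))

# Equal input resistors with Rf = R give a plain (inverted) sum:
print(round(summing_amp([1.0, 2.0, 0.5], [10e3] * 3, rf=10e3), 6))  # -> -3.5
```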

5. Integrating Amplifier An integrating amplifier integrates the (inverted) signal over time
(refer Fig. 16.16):

V_out = ∫₀ᵗ −(V_in / RC) dt + V_initial

(where V_in and V_out are functions of time, and V_initial is the output voltage of the integrator at time t = 0)

Fig. 16.16 Integrating amplifier
6. Differentiating Amplifier A differentiating amplifier (shown in Fig. 16.17) differentiates
the (inverted) signal over time.

V_out = −RC (dV_in / dt)

(where V_in and V_out are functions of time)

Note that this can also be viewed as a type of electronic filter.

Fig. 16.17 Differentiating amplifier
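As a sketch, the integrator and differentiator relations can be approximated in discrete time (simple Euler and finite-difference forms; R, C, and the input values below are illustrative):

```python
def integrator_output(v_in_samples, r, c, dt, v_initial=0.0):
    """Vout(t) = Vinitial - (1/RC) * integral of Vin, as a running sum."""
    v_out, outputs = v_initial, []
    for v in v_in_samples:
        v_out -= v * dt / (r * c)       # Euler step of the integral term
        outputs.append(v_out)
    return outputs

def differentiator_output(v_prev, v_now, r, c, dt):
    """Vout = -RC * dVin/dt, using a finite difference."""
    return -r * c * (v_now - v_prev) / dt

# A constant 1 V input into an RC = 0.1 s integrator ramps at -10 V/s:
out = integrator_output([1.0] * 10, r=10e3, c=10e-6, dt=0.01)
print(round(out[-1], 6))   # after 0.1 s -> -1.0
```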

7. Comparator A comparator compares two voltages and outputs one of two states depending on
which is greater:

V_out = V_S+ if V1 > V2,   and   V_out = V_S− if V1 < V2

Fig. 16.18 Comparator

8. Voltage Follower A voltage follower (shown in Fig. 16.19) is used as a buffer amplifier, to
eliminate loading effects or to interface impedances (connecting a device with a high source impedance
to a device with a low input impedance).

V_out = V_in

Z_in = ∞ (realistically, the differential input impedance of the op-amp itself, 1 MΩ to 1 TΩ)

Fig. 16.19 Voltage follower

16.5 INTRODUCTION TO TERMINATING DEVICES

16.5.1 X–Y Plotter


A plotter is a vector-graphics printing device that connects to a computer. An X–Y plotter is a plotter
that operates in two axes of motion (X and Y ) in order to draw continuous vector graphics. Techni-
cally speaking, all plotters function in two axes, such that referring to a plotter as an X–Y plotter is a
bit redundant.
Plotters print their output by moving a pen across the surface of a piece of paper. This means that
plotters are restricted to line art, rather than raster graphics as with other printers. They can draw com-
plex line art, including text, but do so very slowly because of the mechanical movement of the pens.
(Plotters are incapable of creating a solid region of colour, but can hatch an area by drawing a number
of close, regular lines.) When computer memory was very expensive and processors were slow, this was
often the fastest way to produce high-resolution, colour, vector-based artwork or very large drawings
efficiently.
Plotters differ from inkjet and laser printers in that a plotter draws a continuous line, much like a
pen on paper, while inkjet and laser printers use a very fine matrix of dots to form images, such that
while a line may appear continuous to the naked eye, it in fact is a discrete set of points. Tradition-
ally, printers are primarily for printing text. This makes it fairly easy to control them—simply sending
the text to the printer is usually enough to generate a page of output. This is not the case of the line
art on a plotter, where a number of printer control languages were created to send more detailed
information like “draw a line from here to here”. The two common ASCII-based plotter control
languages are Hewlett Packard’s HPGL and Houston Instrument’s DMPL with commands such as
“PA 3000, 2000; PD”.
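As a sketch of how a program might emit such commands, the helper below builds an HP-GL-style string from a list of points; the function itself is hypothetical, while PU (pen up), PD (pen down), and PA (plot absolute) are mnemonics of the kind quoted above:

```python
def polyline_to_hpgl(points):
    """Build an HP-GL command string drawing a polyline through points."""
    (x0, y0), rest = points[0], points[1:]
    cmds = [f"PU;PA{x0},{y0};PD;"]              # move with the pen up, then lower it
    cmds += [f"PA{x},{y};" for x, y in rest]    # draw to each subsequent point
    cmds.append("PU;")                          # lift the pen when done
    return "".join(cmds)

print(polyline_to_hpgl([(3000, 2000), (4000, 2000), (4000, 3000)]))
# -> PU;PA3000,2000;PD;PA4000,2000;PA4000,3000;PU;
```

This illustrates the point made above: the plotter is told "draw a line from here to here", rather than being sent a raster of dots.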

Programmers in FORTRAN or BASIC generally used software packages, such as the Calcomp
library, or device-independent graphics packages such as Hewlett-Packard’s AGL libraries or BASIC
extensions or high-end packages such as DISSPLA. These would establish scaling factors from world
coordinates to device coordinates, and translate to the low-level device commands.
Early plotters (e.g., the Calcomp 565 of 1959) worked by placing the paper over a roller which
moved the paper back and forth for an X motion, while the pen moved back and forth on a single arm
for a Y motion. Another approach (e.g., Computervision’s Interact I) involved attaching ball-point pens
to drafting pantographs and driving the machines with motors controlled by the computer. This had
the disadvantage of being somewhat slow to move, as well as requiring floor space equal to the size of
the paper, but could double as a digitizer. A later change was the addition of an electrically controlled
clamp to hold the pens, which allowed them to be changed and thus create multi-coloured output.
Hewlett Packard and Tektronix created desk-sized flatbed plotters in the late 1970s. In the 1980s,
the small and lightweight HP 7470 used an innovative ‘grit wheel’ mechanism which moved only the
paper. Modern desktop scanners use a somewhat similar arrangement. These smaller ‘home-use’ plot-
ters became popular for desktop business graphics, but their low speed meant they were not useful
for general printing purposes, and another conventional printer would be required for those jobs. One
category introduced by Hewlett Packard’s MultiPlot for the HP 2647 was the ‘word chart’ which used
the plotter to draw large letters on a transparency. This was the forerunner of the modern Powerpoint
chart. With the widespread availability of high-resolution inkjet and laser printers, inexpensive memory
and computers fast enough to rasterize colour images, pen plotters have all but disappeared.

Other Uses Plotters are used primarily in technical drawing and CAD applications, where they
have the advantage of working on very large paper sizes while maintaining high resolution. Another
use has been found by replacing the pen with a cutter, and in this form plotters can be found in many
garment and sign shops. A niche application of plotters is in creating tactile images for visually handi-
capped people on special thermal cell paper.

16.5.2 Cathode-ray Oscilloscope (CRO)


An oscilloscope (sometimes abbreviated to CRO, for cathode-ray oscilloscope, or commonly just
scope or O-scope) is an electronic test instrument that allows signal voltages to be viewed, usually
as a two-dimensional graph of one or more electrical potential differences (vertical axis) plotted as a
function of time or of some other voltage (horizontal axis).
It is like a voltmeter with the valuable extra function of showing how the voltage varies with time.
A graticule with a 1-cm grid enables us to take measurements of voltage and time from the screen.

Constructional Details The earliest and simplest type of oscilloscope consisted of a cath-
ode ray tube, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are
now called ‘analog’ scopes to distinguish them from the ‘digital’ scopes that became common in the
1990s and 2000s. The cathode ray tube is an evacuated glass envelope, with its flat face covered in a
phosphorescent material (the phosphor). The screen is typically less than 20 cm in diameter, much
smaller than one in a usual television set.

Fig. 16.20 Cathode-ray oscilloscope
In the neck of the tube is an electron gun, which is a heated metal plate with a wire mesh (the grid) in
front of it. A small grid potential is used to block electrons from being accelerated when the electron beam
needs to be turned off, as during sweep retrace or when no trigger events occur. A potential difference
of at least several hundred volts is applied to make the heated plate (the cathode) negatively charged rela-
tive to the deflection plates. For higher bandwidth oscilloscopes, where the trace may move more rapidly
across the phosphor target, a positive post-deflection acceleration voltage of over 10,000 volts is often
used, increasing the energy (speed) of the electrons that strike the phosphor. The kinetic energy of the
electrons is converted by the phosphor into visible light at the point of impact. When switched on, a CRT
normally displays a single bright dot in the centre of the screen, but the dot can be moved about electro-
statically or magnetically. The CRT in an oscilloscope uses electrostatic deflection.
Between the electron gun and the screen, two opposed pairs of metal plates called the deflection plates
are arranged. The vertical amplifier generates a potential difference across one pair of plates, giving
rise to a vertical electric field through which the electron beam passes. When the plate potentials are
the same, the beam is not deflected. When the top plate is positive with respect to the bottom plate,
the beam is deflected upwards; when the field is reversed, the beam is deflected downwards. The
horizontal amplifier does a similar job with the other pair of deflection plates, causing the beam to
move left or right. This deflection system is called electrostatic deflection, and is different from the
electromagnetic deflection system used in television tubes. In comparison to magnetic deflection,
electrostatic deflection can more readily follow random changes in potential, but is limited to small
deflection angles.
The timebase is an electronic circuit that generates a ramp voltage, i.e., a voltage that changes
continuously and linearly with time. When it reaches a predefined value, the ramp is reset, thus
reestablishing its initial value. When a trigger event is recognized, the reset is released, allowing
the ramp to increase again. The timebase voltage usually drives the horizontal amplifier. Its effect is
to sweep the electron beam at constant speed from left to right across the screen, then quickly return
the beam to the left in time to begin the next sweep. The timebase can be adjusted to match the sweep
time to the period of the signal.
Meanwhile, the vertical amplifier is driven by an external voltage (the vertical input) that is taken
from the circuit or experiment that is being measured. The amplifier has a very high input impedance,
typically one megaohm, so that it draws only a tiny current from the signal source. The amplifier drives
the vertical deflection plates with a voltage that is proportional to the vertical input. Because the
electrons have already been accelerated through hundreds of volts, this amplifier also has to deliver
nearly a hundred volts, and with a very high bandwidth. The gain of the vertical amplifier can be adjusted
to suit the amplitude of the input voltage. A positive input voltage bends the electron beam upwards,
and a negative voltage bends it downwards, so that the vertical deflection of the dot shows the value of
the input. The response of this system is much faster than that of mechanical measuring devices such
as the multimeter, where the inertia of the pointer slows down its response to the input.
When all these components work together, the result is a bright trace on the screen that represents a
graph of voltage against time. Voltage is on the vertical axis, and time on the horizontal.
Observing high-speed signals, especially non-repetitive signals, with a conventional CRO is difficult,
often requiring the room to be darkened or a special viewing hood to be placed over the face of the
display tube. To aid in viewing such signals, special oscilloscopes have borrowed from night-vision
technology, employing a microchannel plate in the tube face to amplify faint light signals.
The power supply provides low voltages to power the cathode heater in the tube, and the ver-
tical and horizontal amplifiers. High voltages are needed to drive the electrostatic deflection plates.
These voltages must be very stable. Any variations will cause errors in the position and brightness of
the trace. Later, analog oscilloscopes added digital processing to the standard design. The same basic
architecture—cathode ray tube, vertical and horizontal amplifiers—was retained, but the electron beam
was controlled by digital circuitry that could display graphics and text mixed with the analog wave-
forms.
The graph, usually called the trace, is drawn by a beam of electrons striking the phosphor coating
of the screen making it emit light, usually green or blue. This is similar to the way a television picture
is produced. In its simplest mode, the oscilloscope repeatedly draws a horizontal line called the trace
across the middle of the screen from left to right. One of the controls, the timebase control, sets the
speed at which the line is drawn, and is calibrated in seconds per division. If the input voltage departs
from zero, the trace is deflected either upwards or downwards. Another control, the vertical control,
sets the scale of the vertical deflection, and is calibrated in volts per division. The resulting trace is a
graph of voltage against time (the present plotted at a varying position, the less recent past to the left,
the most recent past to the right). A dual trace oscilloscope can display two traces on the screen, allow-
ing you to easily compare the input and output of an amplifier, for example. It is well worth paying
the modest extra cost to have this facility. If the input signal is periodic then a nearly stable trace can
be obtained just by setting the timebase to match the frequency of the input signal. To provide a more
stable trace, modern oscilloscopes have a function called the trigger. The scope then waits for a speci-
fied event before drawing the next trace. The trigger event is usually the input waveform reaching some
user-specified threshold voltage in the specified direction (going positive or going negative).
The effect is to resynchronise the timebase to the input signal, preventing horizontal drift of the
trace. In this way, triggering allows the display of periodic signals such as sine waves and square waves.
Trigger circuits also allow the display of non-periodic signals such as single pulses or pulses that don’t
recur at a fixed rate. The chief benefit of a quality oscilloscope is the quality of the trigger circuit. If
the trigger is unstable, the display will always be fuzzy. The quality improves roughly as the frequency
response and voltage stability of the trigger increase.

Measurement of Voltage and Time Period The trace on an oscilloscope screen is a graph
of voltage against time. The shape of this graph is determined by the nature of the input signal. In
addition to the properties labeled on the graph, there is frequency, which is the number of cycles per
second. Figure 16.21 shows a sine wave, but these properties apply to any signal with a constant shape.

Fig. 16.21 Sine wave (showing amplitude, peak-peak voltage and time period)

Amplitude is the maximum voltage reached by the signal. It is measured in volts, V.
Peak voltage is another name for amplitude.
Peak-peak voltage is twice the peak voltage (amplitude). When reading an oscilloscope trace, it is
usual to measure peak-peak voltage.
Time period is the time taken for the signal to complete one cycle. It is measured in seconds (s),
but time periods tend to be short, so milliseconds (ms) and microseconds (µs) are often used. 1 ms =
0.001 s and 1 μs = 0.000001 s.
Frequency is the number of cycles per second. It is measured in hertz (Hz), but frequencies tend to
be high so kilohertz (kHz) and megahertz (MHz) are often used.
1 kHz = 1000 Hz and 1 MHz = 1000000 Hz.

Frequency = 1 / Time period   and   Time period = 1 / Frequency

Voltage Voltage is shown on the vertical y-axis and the scale


is determined by the Y amplifier (volts/cm) control. Usually, peak-
peak voltage is measured because it can be read correctly even if
the position of 0 V is not known. The amplitude is half the peak-
peak voltage.
Fig. 16.22 The trace of an AC signal

If you wish to read the amplitude voltage directly, you must check the position of 0 V (normally halfway
up the screen): move the ac/GND/dc switch to GND (0 V) and use Y-shift (up/down) to adjust the position
of the trace if necessary. Switch back to dc afterwards so you can see the signal again.
Voltage = distance in cm × volts/cm
Example: peak-peak voltage = 4.2 cm × 2 V/cm = 8.4 V
Amplitude (peak voltage) = ½ × peak-peak voltage = 4.2 V

Time Period Time is shown on the horizontal x-axis and the scale is determined by the TIME-
BASE (TIME/CM) control. The time period (often just called period) is the time for one cycle of the
signal. The frequency is the number of cycles per second, frequency = 1/time period. Ensure that the
variable timebase control is set to 1 or CAL (calibrated) before attempting to take a time reading.
Time = distance in cm × time/cm

Example: Time period = 4.0 cm × 5 ms/cm = 20 ms, and frequency = 1/time period = 1/20 ms = 50 Hz
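The two worked examples above amount to simple scale multiplications by the control settings, which can be sketched as:

```python
def peak_peak_voltage(distance_cm, volts_per_cm):
    """Peak-peak voltage = on-screen distance x the volts/cm setting."""
    return distance_cm * volts_per_cm

def frequency_hz(distance_cm, seconds_per_cm):
    """Frequency = 1 / (on-screen distance x the time/cm setting)."""
    return 1.0 / (distance_cm * seconds_per_cm)

print(peak_peak_voltage(4.2, 2.0))   # -> 8.4 (so the amplitude is 4.2 V)
print(frequency_hz(4.0, 5e-3))       # 20 ms period -> 50.0 Hz
```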

16.5.3 LED Displays


LED digital displays involve single crystal phosphor materials, which distinguishes them from the poly-
crystal electroluminescent displays. Light emitting diodes (LED) are pn junction devices that give off light
radiation when biased in the forward direction. Most light-emitting diodes function in the near infrared
and visible ranges, though there are now UV LEDs. Light-emitting diodes are a reliable means of indica-
tion compared to light sources such as incandescent and neon lamps. LEDs are solid-state devices requir-
ing little power and generating little heat. Because their heat generation is low and because they do not rely
on a deteriorating material to generate light, LEDs have long operating lifetimes. One of the alternatives,
incandescent bulbs, consume much more power, generate a great deal of heat, and rely on a filament that
deteriorates in use. Neon bulbs, on the other hand, rely on excited plasma, which, along with its electrodes,
can deteriorate over time. LED digital displays are highly versatile and well suited to a variety of
measurement applications. Display types can be segmented or dot matrix. A seven-segment display can
display numbers only; a sixteen-segment display can display numbers and letters. Similarly, a 4 × 7
dot-matrix display can display numbers only, while a 5 × 7 dot-matrix display can display numbers and
letters. LED digital displays can therefore be numeric, displaying numbers only (a seven-segment or
4 × 7 dot-matrix display), or alphanumeric, displaying both numbers and letters (a sixteen-segment or
5 × 7 dot-matrix display).
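As an illustration of how a seven-segment display encodes the decimal digits, the (standard) digit-to-segment map below drives segments a–g; the helper names and bit ordering are hypothetical choices:

```python
SEGMENTS = {                 # bit order: g f e d c b a (segment a = bit 0)
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111, 4: 0b1100110,
    5: 0b1101101, 6: 0b1111101, 7: 0b0000111, 8: 0b1111111, 9: 0b1101111,
}

def lit_segments(digit):
    """Names of the segments lit for one decimal digit."""
    pattern = SEGMENTS[digit]
    return [name for i, name in enumerate("abcdefg") if pattern & (1 << i)]

print(lit_segments(7))        # -> ['a', 'b', 'c']
print(len(lit_segments(8)))   # -> 7 (all segments lit)
```

In a common-anode or common-cathode package, each of these bits maps to one segment pin, with the shared pin tied to the supply or to ground.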
LED displays can have a common anode or common cathode configuration. In a common anode
configuration, all the segments in the LED share one anode pin. In a common cathode configuration,
all the segments in the LED share one cathode pin. The common cathode is the ‘standard’ circuit where
the cathode is connected to the ‘common’ point on the circuit, usually ground, and usually through a
resistor, which is often bypassed with a capacitor, placing it at ‘ac’ ground potential. Important display
package specifications to consider when searching for LED digital displays include the number of rows
and the number of characters per row. Standard colour choices for LEDs include standard red, yellow,
high-efficiency red, orange, green, and blue. Other display specifications to consider include colour
wavelength, character height, and viewing distance. Wavelength of the display will be determined by
Intermediate Modifying and Terminating Devices 431

the colour of the LED. Viewing distance is determined primarily by the minimum size requirements
for objects that the user must see. The viewing angles on the x and y-axis are also important to con-
sider. The viewing angle of the display is the angle, in degrees, between a line normal to the display
surface and the user’s visual axis. Minimum and typical luminous intensity describes the luminous flux
per unit solid angle, and its unit of measurement is the candela (cd). Case dimensions include width,
depth, and height. The case or package of the display will have different dimensions from the actual
viewing area of the display.

Review Questions

1. Explain the term transducer as a device with the help of any one example.
2. Explain the factors influencing the choice of Transducer for measurement of a physical quantity.
3. Explain the terms:
a. Cross sensitivity
b. Transient and frequency response
c. Primary and secondary transducer
d. Active and passive transducer
e. Sampling rate per channel
f. Signal-conditioning
g. Multiplexing
h. Amplification and attenuation
i. Switching
j. Terminating devices
4. Discuss the classification of a transducer and explain any one electro-mechanical transducer in
detail.
5. Discuss the types of displacement measurement.
6. Explain, with the help of a neat sketch, how a Linear Variable Differential Transformer (LVDT) is
used for displacement measurement. State its advantages and limitations.
7. What do you mean by intermediate-modifying devices?
8. Discuss the concept of generalized data acquisition system.
9. What are the important factors that decide the configuration and the sub-system of the data
acquisition system?

10. What do you mean by signal-conditioning?


11. “Transducer characteristics define many of the signal-conditioning requirements of your measure-
ment system which forms the basis of further system design and installation”. Illustrate the state-
ment with appropriate examples.
12. List the common types of signal-conditioning with their functionality by considering a suitable
example.
13. Explain what you mean by cold-junction compensation with an example.
14. List the technologies used for proper signal-conditioning and explain one in detail.
15. Explain methods of signal-conditioning in detail.
16. Write short notes on
a. Analog-to-digital converter
b. Digital-to-analog converter
c. Operational amplifier
d. Inverting amplifier
e. Voltage follower
17. List the types of amplifiers used to perform appropriate functions to get output in required forms
and explain summing amplifier and integrating amplifier in detail.
18. Describe the working of (a) X–Y Plotter (b) Cathode-ray oscilloscope (CRO) (c) LED displays
17 Force and Torque Measurement

‘Force and torque instruments measure the real strength of the entity under test…’
INTRODUCTION TO FORCE AND TORQUE MEASUREMENT

Force and torque instruments are used to measure force, weight or torque. Some can measure force and torque by changing the sensor/transducer. Force or weight measurements include tension or compression loading, and the units are pounds, newtons, etc. Torque-measuring instruments display torque units (in-oz, ft-lb, etc.).
Important parameters to consider when specifying force and torque instruments are the force-measurement range and accuracy, and the torque-measurement range and accuracy. Sensor or transducer interfaces for force and torque instruments include strain-gauge and piezoelectric devices. For strain-gauge devices, strain gauges (strain-sensitive variable resistors) are bonded to parts of the structure that deform when making the measurement. These strain gauges are typically used as elements in a Wheatstone-bridge circuit, which is used to make the measurement. For piezoelectric devices, a piezoelectric material is compressed and generates a charge that is measured by a charge amplifier. The analog bandwidth is another important specification to consider. The bandwidth is the frequency range over which the device meets its accuracy specifications. Accuracy is degraded at lower and lower frequencies unless the device is capable of dc response, and at higher frequencies near resonance and beyond, where its output response rolls off. Frequencies in the database are usually the 3-dB roll-off frequencies.
Common configurations for force and torque instruments include handheld, portable, modular and battery-powered instruments. Measurement features of force and torque instruments include tare, limits or set points, peak hold, controller functionality, temperature compensation, biaxial measurement, and triaxial measurement. Units with tare can zero out a reading to measure differences for weighing. Limits and set points include hi–lo. Peak hold shows or holds a peak measurement value. Controller functions include set limits, regulator, P/PI/PID, etc. Instruments with temperature compensation have software or adjustments for compensating for variations in temperature that may cause measurement errors. Force and torque instruments with biaxial measurement have an accelerometer capable of measurement along two, usually orthogonal, axes. Force and torque instruments with triaxial measurement have an accelerometer capable of measurement along three, usually orthogonal, axes.
Force and torque instruments usually have displays for users to interface with the unit. Common types of displays include analog or dial displays, digital readouts, and video displays. Programmability is achieved through manual operation, front panels, or computer interfaces. Instruments with software support have software specifically for running on a host computer, which can be a PC or Mac.
Advanced searching capabilities for force and torque instruments include outputs, digital resolution and sampling frequency. General features found on force and torque instruments include filters, event triggering, built-in self-calibration, self-test diagnostics, and extreme-environment construction. Force and torque instruments often come with data storage in non-volatile memory, hard drives or removable storage. Very often these instruments are also network-compatible.

17.1 SI UNITS OF FORCE AND TORQUE

The International System of Units (SI) is widely used for trade, science, and engineering. The SI units of
force and torque are the Newton (N) and the Newton metre (N·m) respectively. The base units rel-
evant to force and torque are

i. the metre, unit of length, symbol—m
ii. the kilogram, unit of mass, symbol—kg
iii. the second, unit of time, symbol—s

Force is defined as the rate of change of momentum. For an unchanging mass, this is equivalent to
mass × acceleration.
Thus, 1 N = 1 kg·m·s⁻²
The torque generated about an axis is defined as the product of the component of the force perpen-
dicular to the axis and the perpendicular distance between the line of action of the force and the axis.
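The two definitions above translate directly into arithmetic. A minimal Python sketch (the function names and numerical values are ours, purely for illustration):

```python
def force_newtons(mass_kg: float, acceleration_ms2: float) -> float:
    """F = m * a for constant mass, so 1 N = 1 kg.m.s^-2."""
    return mass_kg * acceleration_ms2

def torque_newton_metres(force_n: float, perpendicular_distance_m: float) -> float:
    """Torque = perpendicular force component times perpendicular distance."""
    return force_n * perpendicular_distance_m

weight = force_newtons(10.0, 9.80665)        # 10 kg under standard gravity, in N
torque = torque_newton_metres(weight, 0.5)   # the same force acting at 0.5 m, in N.m
print(weight, torque)
```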

17.1.1 Other Units of Force and Torque


Historically, there have been a variety of units of force and torque, and conversion factors to some
of these are given in Table 17.1 and Table 17.2. Exact conversions are shown in bold, and others are
quoted to seven significant figures.

SI Prefixes The use of abbreviated forms for large and small numbers is encouraged by the SI
system. SI prefixes represent multiples of 10³ or 10⁻³ as in Table 17.3. There is an exception to the
system caused by adoption of the kilogram as the base unit for mass rather than the gram. The effect
of this is that prefixes above ‘kilo’ are not used for mass. The tonne is 10³ kg.

Table 17.1 Some force-conversion factors from non-SI units

Unit Symbol Equivalent SI value

dyne dyn 10.0 µN


grain-force grf 635.460 2 µN
gram-force gf 9.806 65 mN
poundal pdl 138.255 0 mN
ounce-force (avdp) ozf 278.013 9 mN
pound-force lbf 4.448 222 N
kilogram-force kgf 9.806 65 N
kilopond kp 9.806 65 N
sthène sthène 1.0 kN
kip (= 1 000 lbf) kip 4.448 222 kN

US ton-force (= 2 000 lbf) (short) tonf (US) 8.896 443 kN

tonne-force (= 1 000 kgf) (metric) tf 9.806 65 kN

UK ton-force (= 2 240 lbf) (long) tonf (UK) 9.964 016 kN

Table 17.2 Some torque-conversion factors from non-SI units

Unit Symbol Equivalent SI value


gram-force centimetre gf.cm 98.066 5 µN·m
ounce-force inch ozf·in 7.061 552 mN·m
kilogram-force centimetre kgf·cm 98.066 5 mN·m
pound-force inch lbf·in 112.984 8 mN·m
pound-force foot lbf·ft 1.355 818 N·m
kilogram-force metre kgf·m 9.806 65 N·m
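A few of the factors from Tables 17.1 and 17.2 can be captured in a small lookup table. A Python sketch (only selected units are reproduced; the dictionary keys are our own notation):

```python
# SI value of one unit, taken from the conversion tables above.
FORCE_TO_N = {
    "dyn": 10.0e-6,       # dyne
    "gf": 9.80665e-3,     # gram-force (exact)
    "lbf": 4.448222,      # pound-force
    "kgf": 9.80665,       # kilogram-force (exact)
    "kip": 4.448222e3,    # kip (1 000 lbf)
}
TORQUE_TO_NM = {
    "ozf.in": 7.061552e-3,   # ounce-force inch
    "lbf.in": 112.9848e-3,   # pound-force inch
    "lbf.ft": 1.355818,      # pound-force foot
    "kgf.m": 9.80665,        # kilogram-force metre (exact)
}

def to_newtons(value: float, unit: str) -> float:
    return value * FORCE_TO_N[unit]

def to_newton_metres(value: float, unit: str) -> float:
    return value * TORQUE_TO_NM[unit]

print(to_newtons(100.0, "lbf"))          # ~444.82 N
print(to_newton_metres(50.0, "lbf.ft"))  # ~67.79 N.m
```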

Table 17.3 Summary of SI prefixes

Multiplying Factor SI Prefix Scientific Notation

1 000 000 000 000 000 000 000 000 yotta (Y) 10²⁴
1 000 000 000 000 000 000 000 zetta (Z) 10²¹
1 000 000 000 000 000 000 exa (E) 10¹⁸
1 000 000 000 000 000 peta (P) 10¹⁵
1 000 000 000 000 tera (T) 10¹²
1 000 000 000 giga (G) 10⁹
1 000 000 mega (M) 10⁶
1 000 kilo (k) 10³
0.001 milli (m) 10⁻³
0.000 001 micro (µ) 10⁻⁶
0.000 000 001 nano (n) 10⁻⁹
0.000 000 000 001 pico (p) 10⁻¹²
0.000 000 000 000 001 femto (f) 10⁻¹⁵
0.000 000 000 000 000 001 atto (a) 10⁻¹⁸
0.000 000 000 000 000 000 001 zepto (z) 10⁻²¹
0.000 000 000 000 000 000 000 001 yocto (y) 10⁻²⁴
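The prefixes in Table 17.3 lend themselves to a small formatting helper. A Python sketch (only the common engineering range is included, and a plain 'u' stands in for µ; the helper is our own, not from the text):

```python
# Format a reading with an SI prefix (multiples of 10^3), as in Table 17.3.
PREFIXES = [(1e9, "G"), (1e6, "M"), (1e3, "k"), (1.0, ""),
            (1e-3, "m"), (1e-6, "u"), (1e-9, "n")]

def with_prefix(value: float, unit: str) -> str:
    for factor, prefix in PREFIXES:
        if abs(value) >= factor:
            return f"{value / factor:g} {prefix}{unit}"
    return f"{value:g} {unit}"   # zero or below the smallest prefix

print(with_prefix(4448.222, "N"))   # "4.44822 kN"
print(with_prefix(0.0098, "N.m"))   # "9.8 mN.m"
```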

17.2 FORCE-MEASUREMENT SYSTEM

Force-measurement systems can involve a number of different physical principles but their perfor-
mance can be described by a number of common characteristics and terms, and the behaviour of
a system or transducer can be expressed graphically as a response curve—by plotting the indicated
output value (e.g., voltage) from the system against the force applied to it. The terms used are some-
times applied independently to the force transducer, the force-measurement system as a whole, or some
other part of the system and it is important to establish, for any given application, the way in which the
terms are being used.
An idealized response curve is shown in Fig. 17.1 where the force applied increases from zero to
the rated capacity of the force-measurement system and then back again to zero. The deviation of the
response curve from a straight line is magnified in the figure for the purpose of clarity.
Characterizing the performance of a force-measuring system is commonly based on calculating a
best-fit least-squares line through the response curve and stating the measurement errors with respect to it.
Vertical deviation from this line is referred to as non-linearity and generally, the largest value is given
in the specifications of a system.

[Figure: response curve of output against applied force, showing the best-fit straight line through zero, the increasing and decreasing applied-force curves, non-linearity, hysteresis, and the rated output at the rated force]

Fig. 17.1 Typical output characteristics of a force-measurement system

The difference of readings between the increasing and decreasing forces at any given force is defined
as hysteresis. The largest value of hysteresis is usually at the mid-range of the system.
Sometimes non-linearity and hysteresis are combined into a single figure—usually by drawing two lines
parallel to the best-fit line such that they enclose the increasing and decreasing force curves as shown. The
maximum difference (in terms of output) is then halved and referred to as the ±combined error.
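The calculations just described can be sketched numerically. The following Python fragment (the calibration readings are invented for the example) fits a least-squares line through zero and extracts non-linearity, hysteresis, and the ±combined error:

```python
# Sketch of the error definitions above, using invented calibration data.
force = [0.0, 25.0, 50.0, 75.0, 100.0]        # applied force, N
up    = [0.000, 0.501, 1.004, 1.505, 2.000]   # output (mV/V), increasing force
down  = [0.000, 0.505, 1.008, 1.507, 2.000]   # output (mV/V), decreasing force

# Best-fit least-squares line through zero: output = k * force
k = sum(f * u for f, u in zip(force, up)) / sum(f * f for f in force)

# Non-linearity: largest vertical deviation of the increasing curve from the line
non_linearity = max(abs(u - k * f) for f, u in zip(force, up))

# Hysteresis: largest difference between decreasing and increasing readings
hysteresis = max(abs(d - u) for u, d in zip(up, down))

# +/- combined error: half the band (parallel to the best-fit line) enclosing both curves
dev = [u - k * f for f, u in zip(force, up)] + [d - k * f for f, d in zip(force, down)]
combined_error = (max(dev) - min(dev)) / 2.0

print(f"non-linearity  {non_linearity:.4f} mV/V")
print(f"hysteresis     {hysteresis:.4f} mV/V")
print(f"combined error +/-{combined_error:.4f} mV/V")
```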
Any difference between the indicated value of force and the true value is known as an error of
measurement (although note that strictly a ‘true’ value can never be perfectly known or indeed defined
and the concept of uncertainty takes this into account). Such errors are usually expressed as either a
percentage of the force applied at that particular point on the characteristic or as a percentage of the
maximum force—see the difference between ‘% reading’ and ‘% full scale reading’. The rated capacity
is the maximum force that a force transducer is designed to measure.
Full-scale output, also known as span or rated output, is the output at the rated capacity minus the
output at zero applied force. Sensitivity is defined as the full-scale output divided by the rated capacity
of a given transducer/load cell.
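Full-scale output, sensitivity, and the two ways of expressing an error can likewise be put into a few lines of Python (all numbers are invented for illustration):

```python
# Sketch: span, sensitivity, and '% reading' vs '% full scale' error.
rated_capacity = 100.0       # N, maximum force the transducer is designed to measure
output_at_zero = 0.010       # mV/V with no force applied
output_at_capacity = 2.010   # mV/V at rated capacity

full_scale_output = output_at_capacity - output_at_zero  # span (rated output)
sensitivity = full_scale_output / rated_capacity         # (mV/V) per N

applied = 20.0       # N, true force at one calibration point
indicated = 20.1     # N, value indicated by the system
error = indicated - applied

print(f"error as % of reading:    {100 * error / applied:.2f} %")        # 0.50 %
print(f"error as % of full scale: {100 * error / rated_capacity:.2f} %") # 0.10 %
```

The same absolute error looks five times larger when quoted as a percentage of reading than as a percentage of full scale, which is why the basis of an accuracy figure always needs to be stated.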
The ability of a force-measurement system to measure force consistently is covered by the
concepts of repeatability and reproducibility. Repeatability is defined broadly as the measure of
agreement between the results of successive measurements of the differences of output of a force-
measurement system for repeated applications of a given force in the same direction and within the
range of calibration forces applied. The tests should be made by the same observer, with the same
measuring equipment, on the same occasion (i.e., successive measurements should be made in a
relatively short space of time), without mechanical or electrical disturbance, and calibration conditions
such as temperature, alignment of loading, and the timing of readings held constant as far as possible.
Although many manufacturers quote a value for repeatability as a basic characteristic of a trans-
ducer, it can be seen from the definition that it should not be considered as such. The value obtained for
a given force transducer, in a given force standard machine, will depend not only on the inherent char-
acteristics of the device such as its creep and sensitivity to bending moments, but also on temperature
gradients, resolution and repeatability of the electrical measuring equipment, and the degree to which
the conditions of the tests are held constant, all of which are characteristics of the test procedure. The
value of repeatability obtained is important as it limits the accuracy to which the other characteristics
of the force transducer can be measured.
In contrast to repeatability, reproducibility is defined as the closeness of the agreement between
the results of measurements of the same force carried out under changed conditions of measure-
ment. A valid statement of reproducibility requires specification of the particular conditions changed
and typically refers to measurements made weeks, months, or years apart. It would also measure, for
example, changes caused by dismantling and re-assembling equipment. The reproducibility of force-
measurement systems is clearly important if they are to be used to compare the magnitudes of forces
at different times, perhaps months or years apart. It will be determined by several factors, including the
stability of the force transducer’s many components, the protection of the strain gauges or other parts
against humidity, and the conditions under which the system is stored, transported, and used.
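As an illustration of the distinction, repeatability can be estimated from a short series of back-to-back readings of the same force. A Python sketch (the readings are invented):

```python
import statistics

# Six back-to-back readings of the same applied force: same observer,
# same equipment, same occasion (values invented).
readings = [2.0004, 2.0001, 1.9998, 2.0003, 2.0000, 1.9999]   # mV/V

repeatability_range = max(readings) - min(readings)   # simple spread
repeatability_std = statistics.stdev(readings)        # sample standard deviation

print(f"range:   {repeatability_range:.4f} mV/V")
print(f"std dev: {repeatability_std:.5f} mV/V")
```

A reproducibility study would use the same arithmetic but on readings taken under deliberately changed conditions, e.g. weeks apart or after re-assembly.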
A force-measurement system will take some time to adjust fully to a change in force applied, and the creep of a force transducer is usually defined as the change of output with time following a step increase in force from one value to another. Most manufacturers specify the creep as the maximum change of output over a specified time after increasing the force from zero to the rated force. Figure 17.2 shows an example of a creep curve where the transducer exhibits a change in output from F1 to F2 over a period of time from t1 to t2 after a step change between 0 and t1. In figures this might be, say, 0.03 % of rated output over 30 minutes.

[Figure: creep curve—output rises from F1 at time t1 to F2 at time t2 after a step change in force, then falls back towards zero during creep recovery]

Fig. 17.2 Creep curve of a typical force transducer
Creep recovery is the change of output following a step decrease in the force applied to the force
transducer, usually from the rated force to zero. For both creep and creep recovery, the results will
depend on how long the force applied has been at zero or the rated value respectively before the change
of force is made.
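The creep figure quoted in a datasheet can be reproduced from two logged readings. A Python sketch in the style of the '0.03 % of rated output over 30 minutes' example above (the logged outputs are invented):

```python
# Express creep as a percentage of rated output.
rated_output = 2.000    # mV/V at rated force (invented rating)
f1 = 2.0001             # output just after the step to rated force, at t1
f2 = 2.0007             # output 30 minutes later, at t2

creep_percent = 100.0 * (f2 - f1) / rated_output
print(f"creep: {creep_percent:.3f} % of rated output over 30 min")   # 0.030 %
```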
The frequency response of a force transducer is affected by the nature of the mechanical structure,
both within the transducer and of its mounting. A force transducer on a rigid foundation will have a
natural frequency of oscillation and large dynamic errors occur when the frequency of the vibration
approaches the natural frequency of oscillations of the system.

The effect of temperature changes is felt on both the zero and rated output of the force-measurement
system. The temperature coefficient of the output at zero force and the temperature coefficient of the
sensitivity are measures of this effect for a given system. A force-measurement system may need to be
kept at constant temperature, or set up well in advance, to settle into the ambient conditions if high-
accuracy measurements are required. In some cases, the temperature gradients within the measurement
installation create a problem even when the average temperature is stable.
Other influence quantities such as humidity, pressure, electrical power changes, or radio-frequency
interference may have analogous effects to those of temperature and may be considered in a similar
manner.
In general, a force transducer has two interfaces through which a force is applied. These may be
the upper and lower loading surfaces of a compression force transducer or the upper and lower screw
threads of a tension device. In some load cells, one or both interfaces are part of the elastic element to
which the strain gauges are bonded; in other transducers the interfaces may be remote from the elastic
element.
At each interface, there will be a force distribution, which will depend on the end loading conditions.
A change in these loading conditions, therefore, may cause a change in the force distribution resulting
in a change of the sensitivity of the transducer, even though the resultant force at the interface remains
unchanged. The International Standard BS EN ISO 376 concerned with the calibration of proving
devices for the verification of materials testing machines recognizes the importance of end loading
conditions by requiring compression proving devices to pass a bearing pad (or similar) test. In this test,
a device is loaded through a flat steel pad and then through each of two steel pads that are conically
convex and concave respectively by 1 part in 1 000 of the radius. Depending on the design of the trans-
ducer, the change of sensitivity caused by a change of end loading conditions can be quite large; some
precision compression load cells with low creep, hysteresis, and temperature coefficients can show dif-
ferences of sensitivity in the bearing pad test of 0.3 %, others less than 0.05 %.
True axial alignment of the applied force along the transducer’s principal axis, and the loading con-
ditions across that surface are major factors in the design of a reliable and accurate installation of a
force-measurement system. Force transducers used to measure a single force component are designed
to be insensitive to the orthogonal force components and corresponding moments, provided these are
within specified limits, but although the error due to small misalignments may be calibrated statisti-
cally, the alignment of force relative to the transducer axis may vary through the load cycle of a typical
application giving potentially large and unquantifiable errors of measurement. Users of force-mea-
surement systems should adhere to manufacturers’ recommendations for alignment when installing
force transducers.

17.3 FORCE AND LOAD SENSORS

Force and load sensors cover electrical sensing devices that are used to measure tension, compression,
and shear forces. Tension cells are used for measurement of a straight-line force ‘pulling apart’ along a single axis; typically annotated as positive force. Compression cells are used for measurement of a straight-line force ‘pushing together’ along a single axis; typically annotated as negative force. Shear is induced by tension or compression along offset axes. They are manufactured in many different packages and mounting configurations.
Important parameters for force and load sensors include the force and load-measurement range and
the accuracy. The measurement range is the range of the required linear output. Most force sensors
actually measure the displacement of a structural element to determine force. The force is associated
with a deflection as a result of calibration. There are many form factors or packages to choose from—
S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever, canister, load pin,
rod end, and tank weighing. Shear-cell type can be shear beam, bending beam, or single-point bending
beam. Force and load sensors can have one of many output types. These include analog voltage, analog
current, analog frequency, switch or alarm, serial, and parallel.
Force and load sensors can be many different types of devices including sensor element or chip,
sensor or transducer, instrument or meter, gauge or indicator, and recorder or totalizer. A sensor
element or chip denotes a ‘raw’ device such as a strain gauge, or one with no integral signal condi-
tioning or packaging. A sensor or transducer is a more complex device with packaging and/or signal
conditioning that is powered and provides an output such as a dc voltage, a 4–20 mA current loop,
etc. An instrument or meter is a self-contained unit that provides an output such as a display locally
at or near the device. Typically, it also includes signal processing and/or conditioning. A gauge or
indicator is a device that has a (usually analog) display and no electronic output such as a tension
gauge. A recorder or totalizer is an instrument that records, totalizes, or tracks force measurement
over time. It includes simple data-logging capability or advanced features such as mathematical func-
tions, graphing, etc.
The most common force and load sensor technologies are piezoelectric and strain gauge. For
piezoelectric devices, a piezoelectric material is compressed and generates a charge that is conditioned
by a charge amplifier. For strain gauge devices, strain gauges (strain-sensitive variable resistors) are
bonded to parts of the structure that deform when making the measurement. These strain gauges are
typically used as elements in a Wheatstone bridge circuit, which is used to make the measurement.
Strain gauges typically require an excitation voltage, and provide output sensitivity proportional to
that excitation.
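Because a strain-gauge cell's output is proportional to excitation, its rating is quoted in mV/V. A short Python sketch (all ratings are invented) converts a bridge reading back to force:

```python
# Convert a load-cell bridge reading to force. A '2 mV/V' cell delivers
# 2 mV per volt of excitation at rated capacity.
sensitivity_mv_per_v = 2.0    # rated output, mV/V
excitation_v = 10.0           # excitation voltage
rated_capacity_n = 500.0      # N

full_scale_mv = sensitivity_mv_per_v * excitation_v   # 20 mV at rated force

def force_from_reading(signal_mv: float) -> float:
    # Assumes a linear response between zero and rated capacity.
    return rated_capacity_n * signal_mv / full_scale_mv

print(force_from_reading(8.0))   # 200.0 N
```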
Features common to force and load sensors include biaxial measurement, triaxial measurement,
and temperature compensation. Biaxial load cells can provide load measurements along two, typically
orthogonal, axes. Triaxial load cells can provide load measurements along three, typically orthogonal,
axes. Temperature-compensated load cells provide special circuitry to reduce/eliminate sensing errors
due to temperature variations. Other parameters to consider include operating temperature, maximum
shock, and maximum vibration.
Load cells are force sensors that frequently incorporate mechanical packaging for fit into testing
and monitoring systems. They can be used for tension, compression, and/or shear measurement,
and can be configured to measure force or load along multiple axes. Load cells are widely used in
mechanical testing, ongoing system monitoring, and devices such as industrial weigh modules and
scales.

Important parameters for load cells include the force and load-measurement range and the accuracy; the measurement range is the range of required linear output. Load cells can be configured with multiple axes: biaxial load cells provide load measurements along two, typically orthogonal, axes, and triaxial load cells along three. Load cells can measure tension, compression, or shear and, as with force and load sensors generally, most load cells actually measure the displacement of a structural element to determine force, the force being associated with a deflection as a result of calibration. The form factors (S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever, canister, load pin, rod end, and tank weighing), shear-cell types (shear beam, bending beam, or single-point bending beam), and sensing technologies (piezoelectric and strain gauge) described above apply equally to load cells.
Outputs for load cells can be analog voltage, analog current, analog frequency, switch or alarm, serial, and parallel. Temperature-compensated load cells provide special circuitry to reduce or eliminate sensing errors due to temperature variations. Other parameters to consider include operating temperature, maximum shock, and maximum vibration.

17.3.1 Load Cells


Load cells are utilized in nearly every electronic weighing system. In understanding load cells, you’ll be
better able to comprehend the systems in which they are used. The following is a brief summary of the
inner workings of a load cell. While this explanation does not answer every question, it can provide the
basic framework for understanding load cells.

System Components In contemporary control applications, weighing systems are used in both
static and dynamic applications. Some systems are technologically advanced, interfacing with comput-
ers for database integration and using microprocessor-based techniques to proportion material inputs
and feed rates. To send the weight information to computers, signal conditioners are utilized to permit
direct communication from the load cell via conversion of the load cell’s analog signal to a digital signal.
An entire system can be constructed, one piece at a time, from basic modules.

Parts of a System Load cells, cable, junction box (summing the load-cell signals into one
output), instrumentation (indicators, signal conditioners, etc.), and peripheral equipment (printers,
scoreboards, etc.)

[Figure: load cell under an applied force, with compression and tension strain gauges; a strain-gauge patch mounted on a component]

Fig. 17.3 Load cell
Fig. 17.4 Strain gauge mounted on component

Fundamentals A load cell is classified as a force transducer. This device converts force or weight
into an electrical signal. The strain gauge is the heart of a load cell. A strain gauge is a device that changes
resistance when it is stressed. The gauges are developed from an ultra-thin heat-treated metallic foil and
are chemically bonded to a thin dielectric layer. ‘Gauge patches’ are then mounted to the strain element
with specially formulated adhesives. The precise positioning of the gauge, the mounting procedure, and
the materials used all have a measurable effect on overall performance of the load cell.
Each gauge patch consists of one or more fine wires cemented to the surface of a beam, ring, or
column (the strain element) within a load cell. As the surface to which the gauge is attached becomes
strained, the wires stretch or compress, changing their resistance in proportion to the applied load. One
or more strain gauges are used in the making of a load cell. Multiple strain gauges are connected to
create the four legs of a Wheatstone-bridge configuration.
When an input voltage is applied to the bridge, the output becomes a voltage proportional to the force on the cell. This output can be amplified and processed by conventional electrical instrumentation.
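For a full bridge with four active gauges (two in tension, two in compression), the output is approximately the excitation voltage times the gauge factor times the strain. A Python sketch with invented values (the small-strain approximation, not an exact bridge solution):

```python
# Approximate output of a full Wheatstone bridge with four active strain
# gauges: Vout ~= Vex * GF * strain for small strains.
gauge_factor = 2.0      # GF, typical for metal-foil gauges
excitation_v = 10.0     # Vex, bridge input (excitation) voltage
strain = 1000e-6        # 1000 microstrain at rated load

v_out = excitation_v * gauge_factor * strain   # volts
print(f"{v_out * 1000:.1f} mV")                # 20.0 mV
```

Note the result, 20 mV from a 10 V excitation, matches the 2 mV/V rating typical of commercial load cells.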
[Figure: Wheatstone-bridge circuit with + and − input and output terminals, half-modulus and half-calibration resistors, and an input shunt]

Fig. 17.5 Wheatstone bridge

Types of Force/Load Cells
The load or force cell takes many forms to accommodate the variety of uses throughout research and industrial applications. The majority of today’s designs use strain gauges as the sensing element, whether foil or semiconductor.

[Figure: typical load-cell applications—engine dynamometry; batch weighing of bins A, B and C through solenoid valves; checking connector insertion force; spring testing; pedal-force and brake testing using platen, booster and strain-gauge load cells]

Fig. 17.6 Load-cell applications

(i) Foil Gauges offer the largest choice of different types and in consequence tend to be the
most used in load cell designs. Strain-gauge patterns offer measurement of tension, compression
and shear forces.

(ii) Semiconductor Strain Gauges come in a smaller range of patterns but offer the advantages of
being extremely small and having large gauge factors, resulting in much larger outputs for the same given stress.
Due to these properties, they tend to be used for miniature load-cell designs.

(iii) Proving Rings are used for load measurement using a calibrated metal ring, the movement of
which is measured with a precision displacement transducer.
A vast number of load-cell types have been developed
over the years, the first designs simply using a strain
gauge to measure the direct stress which is introduced
into a metal element when it is subjected to a tensile
or compressive force. A bending-beam-type design uses
strain gauges to monitor the stress in the sensing ele-
ment when subjected to a bending force. More recently,
the measurement of shear stress has been adopted as a
more efficient method of load determination as it is less
dependent on the way and direction in which the force
is applied to the load cell.

(iv) ‘S’ or ‘Z’ Beam Load Cell This is a simple load-cell design where the structure is shaped as an ‘S’ or ‘Z’ and strain gauges are bonded to the central sensing area in the form of a full Wheatstone bridge.

Fig. 17.7 Load/force cells
444 Metrology and Measurement

(v) Bending-Beam Load Cell The strain gauges are bonded on the flat upper and lower sections of the load cell at points of maximum strain. This load-cell type is used for low capacities and performs with good linearity. Its disadvantage is that it must be loaded correctly to obtain consistent results.

(vi) Shear-Beam Load Cell The strain gauges are bonded to a reduced part of the cross section of the beam in order to maximize the shear effect. They are bonded at 45° angles on either side of the beam to measure the shear strains.
Fig. 17.8 Load cell

Fig. 17.9 S-type load cell

Fig. 17.10 Bending-beam load cell



Fig. 17.11 Shear beam load cell (round)

Fig. 17.12 Shear beam load cell
Fig. 17.13 Sealed weighing load sensor

Used for medium to large capacities, the shear-beam load cell has good linearity and is not so susceptible to extraneous loading, in particular to side loads.

(vii) Miniature Load Cells Miniature load cells, because of their compact size, usually use semiconductor strain gauges as the sensing element. They are available in many different configurations for both tension and compression force measurement. They offer good performance with high outputs and high overload capability for protection.

Fig. 17.14 Miniature load cells

17.4 DYNAMIC FORCE MEASUREMENT

17.4.1 Quartz Force Sensors


Quartz force sensors are recommended for dynamic force applications. They are not used as ‘load cells’
for static applications. Measurements of dynamic oscillating forces, impact, or high-speed compression/
tension under varying conditions may require sensors with special capabilities. Fast response, rugged-
ness, stiffness comparable to solid steel, extended ranges and the ability to also measure quasi-static
forces are standard features associated with PCB quartz force sensors.

The following information presents some of the design and operating characteristics of PCB force
sensors to help you better understand how they function, which in turn, will ‘help you make better
dynamic measurements’.
When a force is applied to this sensor, the quartz crystals generate an electrostatic charge propor-
tional to the input force. This output is collected on the electrodes sandwiched between the crystals and
is then either routed directly to an external charge amplifier or converted to a low-impedance voltage
signal within the sensor. Both these modes of operation will be examined in the following sections.

17.4.2 Preloading Force Rings


Force-ring style force sensors are generally installed between two parts of a test structure with an
elastic beryllium–copper bolt or stud. This stud holds the structure together and applies preload to
the force ring. In this type of installation, part of the force between the two structures is shunted
through the mounting stud. This may be up to 5% for the beryllium–copper stud supplied with the
instrument and up to 50% for steel studs. If a stud other than beryllium–copper is used, it is crucial
that ring sensors be calibrated in a preloaded state to assure accurate readings and linearity
throughout the entire working range of the sensor.

Fig. 17.15 Cross section of a typical quartz force sensor (impact cap, housing, built-in amplifier,
quartz element, preload stud and mounting stud)

The in-house calibration procedure requires the installation of a force ring with a Be–Cu stud in
series with an NIST-traceable proving ring. A preload of 20% of the full-scale operating range of the
force ring, but not less than 10 lb, is applied prior to recording measurement data. Allow the static
component of the signal to discharge prior to calibration.
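The preload rule just described (20% of full-scale, but not less than 10 lb) is easy to express as a small helper; the function name is ours, not part of any standard:

```python
# Hypothetical sizing helper: computes the calibration preload recommended
# in the text (20% of the force ring's full-scale range, but never less
# than 10 lb).
def calibration_preload_lb(full_scale_lb: float) -> float:
    """Return the preload (lb) to apply before recording calibration data."""
    return max(0.20 * full_scale_lb, 10.0)

print(calibration_preload_lb(1000))  # 200.0 lb for a 1,000-lb force ring
print(calibration_preload_lb(40))    # 10.0 lb floor applies for a 40-lb ring
```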

17.4.3 Repetitive Pulse Applications


In many force-monitoring applications, it is desired to monitor a series of zero-to-peak repetitive
pulses that may occur within a short time interval of one another. This output signal is often
referred to as a 'pulse train'. As has been previously discussed, the ac-coupled output signal from
piezoelectric sensors will decay towards an equilibrium state, making it look like the positive force
is decreasing, and it is difficult to accurately monitor a continuous zero-to-peak output signal such
as that associated with stamping or pill-press applications.

Fig. 17.16 Force rings

With the use of special signal-conditioning equipment, it becomes possible to position a positive
output signal going above a ground-based zero. Operating in drift-free ac
Force and Torque Measurement 447

mode, the instrument provides the constant-current voltage excitation to force sensors and has a
zero-based clamping circuit that electronically resets each pulse to zero. Special circuitry prevents
the output from drifting negatively, providing a continuous positive-polarity signal.
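The decay-and-reset behaviour described above can be imitated in a few lines. This is a simplified software sketch of the clamping idea, not the actual analog circuitry:

```python
# Simplified software analogue (not PCB's actual circuit) of zero-based
# clamping: an AC-coupled piezo signal drifts toward equilibrium between
# pulses, so each new pulse is re-referenced ("clamped") to zero.
def clamp_pulse_train(samples, pulse_starts):
    """Re-zero the signal at the start of each pulse.

    samples      -- sequence of signal values
    pulse_starts -- set of sample indices where a new pulse begins
    """
    out, offset = [], 0.0
    for i, v in enumerate(samples):
        if i in pulse_starts:       # reset the baseline at each new pulse
            offset = v
        out.append(v - offset)
    return out

# A drifting two-pulse train: the second pulse starts from a -0.5 baseline.
drifting = [0.0, 5.0, 3.0, -0.5, 4.5, 2.5, -1.0]
print(clamp_pulse_train(drifting, {0, 3}))
# both pulses now start from a true zero baseline
```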

17.4.4 Case Study of Load Cell used in Process Engineering


The latest developments in technology are being used to extend the use of load cells in process engi-
neering applications.
The use of strain-gauge-based load cells in a wide range of process applications has grown steadily
in recent years. Indeed, non-contact load cells are rapidly becoming the preferred choice of sensor
in many areas, including weight monitoring, batch control, mixing, dosing and blending of liquids
or solids.

Although the fundamental strain-gauge principle has remained largely unaltered since this type of
product was first used for aircraft weight and balance measurement in the 1930s, the technology
employed in the development and volume manufacture of load cells is now extremely advanced. As
a result, the latest generations of load cells are easy to install and calibrate, are robust, stable and
reliable, even in particularly aggressive or hazardous environments, and are typically accurate to
0.1 per cent or better.

Fig. 17.17 Load-cell design in process engineering
The quality, accuracy and stability of the load cell is, however, only part of the equation. Of perhaps
equal significance is the design and installation of the loading assembly, which transfers the load from
the vessel being monitored to the sensor itself. This is especially important with larger silos, hoppers
and mixing vessels, where greater volume and weight have to be considered.
To date, process, plant and maintenance engineers have either had to produce their own loading
assemblies, or source them direct from the load-cell supplier, and accept that the loading assembly will
probably not have been developed specifically for process weighing.
There are a number of factors that need to be considered when specifying load cells and loading
assemblies. In particular, the load cell must be capable of supporting the total load that can be applied,
even under adverse conditions caused, for example, by shock, high winds or vibration. Additionally, the
loading assembly must provide both lift-off protection—especially in machines where hoppers or buckets
are removed for cleaning—and side-load protection, to compensate for the expansion and contraction
of the vessel being monitored. It must be easy to install, calibrate and replace; and it must be as compact
and lightweight as possible, while being sufficiently robust to withstand often lengthy periods of harsh
operating conditions where vessels are in exposed sites or being used for aggressive materials. Although
the latest load cells require a degree of care, they are far more robust and capable of dealing with the
rigours of most process applications than has previously been the case. For example, overload capacities
are now up to three times the rated load. The design of low-profile cell and loading assemblies help to
overcome operating stresses, with factors such as off-axis loading of up to four degrees from the vertical
being acceptable without affecting overall performance or accuracy.
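As a rough illustration of the limits quoted above (overload up to three times rated load, off-axis loading up to four degrees from vertical), a hypothetical selection check might look like this:

```python
# Hypothetical selection check using the figures quoted in the text: modern
# load cells tolerate overloads up to 3x rated load and off-axis loading up
# to 4 degrees from vertical without loss of accuracy. The function and its
# defaults are illustrative, not from any manufacturer's datasheet.
def load_cell_ok(applied_load, rated_load, off_axis_deg,
                 overload_factor=3.0, max_off_axis_deg=4.0):
    """Return True if the load case stays inside the quoted limits."""
    return (applied_load <= overload_factor * rated_load
            and off_axis_deg <= max_off_axis_deg)

print(load_cell_ok(2500, 1000, 2.0))   # True: 2.5x overload, 2 deg off-axis
print(load_cell_ok(3500, 1000, 2.0))   # False: exceeds the 3x overload limit
```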
448 Metrology and Measurement

Developing load cells and loading assemblies that address all of these factors is challenging, both
technically and commercially. Nevertheless, we are now seeing the introduction of a new generation
of load cells, capable of achieving new and higher levels of accuracy, stability and reliability in many
process applications.

17.5 TORQUE MEASUREMENT

Torque sensors and torque instruments are used to measure torque in a variety of applications. Torque
sensors are categorized into two main types, reaction and rotary. Reaction torque sensors measure static and
dynamic torque with a stationary or non-rotating transducer. Rotary torque sensors use rotary transducers
to measure torque.
Important specifications to consider when searching for torque sensors include maximum torque,
accuracy, and temperature compensation. Torque is defined as the moment of a force, a measure of its
tendency to produce torsion and rotation about an axis. Temperature compensation prevents measure-
ment error due to temperature increases or decreases.
The technology of torque sensors can be magneto-elastic, piezoelectric, and strain gauge. A magneto-
elastic torque sensor detects changes in permeability by measuring changes in its own magnetic field.
A piezoelectric material is compressed and generates a charge, which is measured by a charge amplifier.
To measure torque, strain-gauge elements usually are mounted in pairs on the shaft, one gauge measur-
ing the increase in length (in the direction in which the surface is under tension), the other measuring
the decrease in length in the other direction.
Torque sensors can be many different types of devices including sensor element or chip, sensor or
transducer, instrument or meter, gauge or indicator, and recorder and totalizers. A sensor element or
chip denotes a ‘raw’ device such as a strain gauge, or one with no integral signal conditioning or packag-
ing. A sensor or transducer is a more complex device with packaging and/or signal conditioning that
is powered and provides an output such as a dc voltage, a 4 – 20mA current loop, etc. An instrument
or meter is a self-contained unit that provides an output such as a display locally at or near the device.
Typically, it also includes signal processing and/or conditioning. A gauge or indicator is a device that
has a (usually analog) display and no electronic output such as a tension gauge. A recorder or totalizer is
an instrument that records, totalizes, or tracks force measurement over time. It includes simple datalog-
ging capability or advanced features such as mathematical functions, graphing, etc.
Common outputs for torque sensors include analog voltage, analog current, analog or modulated
frequency, switch or alarm, serial, and parallel. Other parameters to consider include operating tempera-
ture, maximum shock, and maximum vibration.

17.5.1 Basics of Torque Measurement


Torques can be divided into two major categories, either static or dynamic. The methods used to measure
torque can be further divided into two more categories, either reaction or inline. Understanding the type
of torque to be measured, as well as the different types of torque sensors that are available, will have a
profound impact on the accuracy of the resulting data, as well as the cost of the measurement.
Static and Dynamic Torque In a discussion of static vs dynamic torque, it is often easiest to
start with an understanding of the difference between a static and dynamic force. To put it simply, a
dynamic force involves acceleration, whereas a static force does not. The relationship between dynamic
force and acceleration is described by Newton’s second law
F = ma (force equals mass times acceleration). The force required to stop your car with its substantial
mass would be a dynamic force, as the car must be decelerated. The force exerted by the brake caliper in
order to stop that car would be a static force because there is no acceleration of the brake pads involved.
Torque is just a rotational force: a force acting at a distance from an axis of rotation. From the
previous discussion, it is considered static if it involves no angular acceleration. The torque exerted
by a clock spring would be a static torque, since there is no rotation and hence no angular acceleration.
The torque transmitted through a car’s drive axle as it cruises down the highway (at a constant
speed) would be an example of a rotating static torque, because even though there is rotation, at a
constant speed there is no acceleration. The torque produced by the car’s engine will be both static
and dynamic, depending on where it is measured. If the torque is measured in the crankshaft, there
will be large dynamic torque fluctuations as each cylinder fires and its piston rotates the crankshaft.
If the torque is measured in the drive shaft, it will be nearly static because the rotational inertia of
the flywheel and transmission will dampen the dynamic torque produced by the engine.
The torque required to crank up the windows in a car (remember those?) would be an example of a static
torque, even though there is a rotational acceleration involved, because both the acceleration and rotational
inertia of the crank are very small and the resulting dynamic torque (torque = rotational inertia × rotational
acceleration) will be negligible when compared to the frictional forces involved in the window movement.
This last example illustrates the fact that for most measurement applications, both static and dynamic torques
will be involved to some degree. If dynamic torque is a major component of the overall torque or is the
torque of interest, special considerations must be made when determining how best to measure it.
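The window-crank argument can be checked with the quoted relation, torque = rotational inertia × rotational acceleration; all numbers below are assumed, order-of-magnitude values:

```python
# Worked example of the relation quoted above,
#   torque = rotational inertia * rotational acceleration,
# using assumed, order-of-magnitude numbers for a window crank.
I_CRANK = 2e-4        # assumed rotational inertia of the crank, kg*m^2
ALPHA = 5.0           # assumed angular acceleration, rad/s^2
T_FRICTION = 1.5      # assumed frictional torque of the window, N*m

t_dynamic = I_CRANK * ALPHA      # dynamic torque component
print(t_dynamic)                 # 0.001 N*m
print(t_dynamic / T_FRICTION)    # ~0.0007: negligible compared to friction
```

Even with generous assumptions, the dynamic component is three orders of magnitude below the frictional torque, which is why the measurement can be treated as static.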

Reaction vs Inline Inline torque measurements are made by inserting a torque sensor between
torque-carrying components, much like inserting an extension between a socket and a socket
wrench (Fig. 17.18). The torque required to turn the socket will be carried directly by the socket
extension. This method allows the torque sensor to be placed as close as possible to the torque of
interest and avoids possible errors in the measurement such as parasitic torques (bearings, etc.),
extraneous loads, and components that have large rotational inertias that would dampen any
dynamic torques.

Fig. 17.18 Inline torque measurement

From the previous example, the dynamic torque produced by an engine would be measured by
placing an inline torque sensor between the crankshaft and the flywheel, avoiding the rotational
inertia of the flywheel and any losses from the transmission. To measure the nearly static,
steady-state torque that drives the wheels, an inline torque sensor could be placed between the
rim and the hub of the vehicle, or in the drive
shaft. Because of the rotational inertia of a typical torque driveline, and other related components,
inline measurements are often the only way to properly measure dynamic torque.
A reaction torque sensor takes advantage of Newton's third law: for every action there is an equal
and opposite reaction. To measure the torque produced by a motor, we could measure it inline as
described above, or we could measure how much torque is required to prevent the motor from
turning, commonly called the reaction torque (Fig. 17.19).

Fig. 17.19 Reaction torque measurement (motor on a non-rotating motor adapter, rotating shaft
and torque sensor)

Measuring the reaction torque avoids the obvious problem of making the electrical connection to
the sensor in a rotating application (discussed below), but does come with its own set of
drawbacks. A reaction torque sensor is often required to carry significant extraneous loads, such as
the weight of a motor, or at least some of the driveline. These loads can lead to crosstalk errors (a
sensor's response to loads other than those that are intended to be measured) and sometimes
reduced sensitivity, as the sensor has to be oversized to carry the extraneous loads. Both of these
methods, inline and reaction, will yield identical results for static torque measurements. Making
inline measurements in a rotating application will nearly always present the user with the
challenge of connecting the sensor from the rotating world to the stationary world. There are a
number of options available to accomplish this, each with its own advantages and disadvantages.

Slip Ring The most commonly used method to make this connection between rotating sensors and
stationary electronics is the slip ring. It consists of a set of conductive rings that rotate with the
sensor, and a series of brushes that contact the rings and transmit the sensors' signals (Fig. 17.20).

Slip rings are an economical solution that perform well in a wide variety of applications. They are a
relatively straightforward, time-proven solution with only minor drawbacks in most applications.
The brushes, and to a lesser extent the rings, are wear items with limited lives that don't lend
themselves to long-term tests, or to applications that are not easy to service on a regular basis. At
low to moderate speeds, the electrical connection between the rings and brushes is relatively
noise-free; however, at higher speeds, noise will severely degrade their performance. The
maximum rotational speed (rpm) for a slip ring is determined by the surface speed at the
brush/ring interface. As a result, the maximum operating speed will be lower for larger, typically
higher torque-capacity sensors by virtue of the fact that the slip rings will have to be larger in
diameter, and will, therefore, have a higher surface speed at a given rpm. Typical maximum speeds
will be in the 5,000-rpm range for a medium-capacity torque sensor.

Fig. 17.20 Slip rings and brushes
Finally, the brush–ring interface is a source of drag torque that can be a problem, especially for very
low-capacity measurements or applications where the driving torque will have trouble overcoming
the brush drag.
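The surface-speed argument above can be turned into a quick estimate. The 10 m/s brush/ring limit used here is an assumed figure chosen only to show the trend:

```python
import math

# Illustrative estimate of slip-ring speed limits: the maximum rpm is set by
# the surface speed at the brush/ring interface, so it falls as ring diameter
# grows. The 10 m/s surface-speed limit is an assumed figure, not a spec.
MAX_SURFACE_SPEED_M_S = 10.0   # assumed brush/ring surface-speed limit

def max_rpm(ring_diameter_m: float) -> float:
    """Highest shaft speed before the brush/ring surface speed is exceeded."""
    circumference = math.pi * ring_diameter_m          # metres per revolution
    return 60.0 * MAX_SURFACE_SPEED_M_S / circumference

print(round(max_rpm(0.04)))   # ~4775 rpm for a small 40 mm ring
print(round(max_rpm(0.08)))   # doubling the diameter halves the limit: ~2387
```

This reproduces the trend stated in the text: larger, higher-capacity sensors need larger rings and therefore tolerate lower shaft speeds.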

Rotary Transformer In an effort to overcome some of the shortcomings of the slip ring, the rotary
transformer system was devised. It uses a rotary transformer coupling to transmit power to the
rotating sensor. An external instrument provides an ac excitation voltage to the strain-gauge bridge
via the excitation transformer. The sensor's strain-gauge bridge then drives a second rotary
transformer coil in order to get the torque signal off the rotating sensor (Fig. 17.21).

Fig. 17.21 Rotary transformer (rotating coils for signal and power)

By eliminating the brushes and rings of the slip ring, the issue of wear is gone, making the rotary
transformer system suitable for long-term testing applications. The parasitic drag torque caused
by the brushes in a slip-ring assembly is also eliminated. However, the need for bearings and the
fragility of the transformer cores still limits the maximum rpm to levels only slightly better than the
slip ring.

The system is also susceptible to noise and errors induced by the alignment of the transformer
primary-to-secondary coils. Because of the special requirements imposed by the rotary
transformers, specialized signal conditioning is also required in order to produce a signal
acceptable for most data-acquisition systems, further adding to the system's cost, which is already
higher than a typical slip-ring assembly.

Infrared (IR) Like the rotary transformer, the infrared (IR) torque sensor utilizes a contactless
method of getting the torque signal from a rotating sensor back to the stationary world. Similarly,
using a rotary transformer coupling, power is transmitted to the rotating sensor. However, instead
of being used to directly excite the strain-gauge bridge, it is used to power a circuit on the rotating
sensor. The circuit provides excitation voltage to the sensor's strain-gauge bridge, and digitizes the
sensor's output signal. This digital output signal is then transmitted, via infrared light, to stationary
receiver diodes, where another circuit checks the digital signal for errors and converts it back to an
analog voltage (Fig. 17.22).

Fig. 17.22 Infrared (IR) torque sensor

Since the sensor's output signal is digital, it is much less susceptible to noise from such sources as
electric motors and magnetic fields. Unlike the rotary transformer system, an infrared transducer
can be configured either with or without bearings for a true maintenance-free, no-wear, no-drag
sensor. While more expensive than a simple slip ring, it offers several benefits. When configured
without bearings, as a true non-contact measurement system, the wear items are eliminated,
making it ideally suited for long-term testing rigs. Most importantly, with the elimination of the
bearings, operating speeds go up dramatically, to 25,000 rpm
and higher, even for high capacity units. For high-speed applications, this is often the best solution for a
rotating torque transmission method.

FM Transmitter Another approach to making the connection between a rotating sensor and the
stationary world utilizes an FM transmitter (Fig. 17.23). These transmitters are used to remotely
connect any sensor, whether force or torque, to its remote data-acquisition system by converting
the sensor's signal to a digital form and transmitting it to an FM receiver, where it is converted back
to an analog voltage. For torque-measurement applications they are typically used for speciality,
one-of-a-kind sensors, such as when strain gauges are applied directly to a component in a
driveline. This could be a drive shaft or half-shaft from a vehicle, for example. The transmitter offers
the benefits of being easy to install on the component, as it is typically just clamped to the gauged
shaft, and it is re-usable for multiple custom sensors. It does have the drawback of needing a
source of power on the rotating sensor, typically a 9 V battery, which makes it impractical for
long-term testing.

Fig. 17.23 FM transmitter

Understanding the nature of the torque to be measured, as well as what factors can alter that torque
in the effort to measure it, will have a profound impact on the reliability of the data collected. In
applications that require the measurement of dynamic torque, special care must be taken to measure
the torque in the proper location, and not to affect the torque by dampening it with the measurement
system. Knowing the options available to make the connection to the rotating torque sensor can greatly
affect the price of the sensor package.
Slip rings are an economical solution, but have their limitations. More technically advanced solu-
tions are available for more demanding applications, but will generally be more expensive. By thinking
through the requirements and conditions of a particular application, the proper torque measurement
system can be chosen the first time.

17.5.2 Torque-testing Dynamometers


Deriving its name from the dyne, the fundamental metric unit of force, the dynamometer is an appa-
ratus designed to measure the power, force, or energy of any machine that has a spinning shaft. It is a
device used for measuring the torque, force, or power available from a rotating shaft. The shaft speed
is measured with a tachometer, while the turning force or torque of the shaft is measured with a scale
or by another method. Power may be read from the instrumentation or calculated from shaft speed and
torque.
The two types are the transmission dynamometer and the absorption dynamometer. The transmission dyna-
mometer transmits the force while measuring the elastic twist of the output shaft. An absorption
dynamometer absorbs the power and dissipates it as heat by restraining the output shaft mechanically
with a friction brake, hydraulically with a water brake, or electrically with an electromagnetic force.
Since the restraining element tends to rotate with the output shaft, the force of the shaft can be
determined by measuring the force required to arrest the rotation of the restraining element. Torque
is then calculated by multiplying the force times the length of the lever arm, or the distance through
which the force acts.
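The lever-arm arithmetic described above, together with the usual power relation P = Tω, can be sketched as follows (the numbers are illustrative):

```python
import math

# Sketch of the dynamometer arithmetic described above: torque is the
# measured restraining force times the lever-arm length, and power follows
# from torque and shaft speed via P = T * omega. Numbers are illustrative.
def torque_nm(force_n: float, arm_m: float) -> float:
    """Torque from the scale reading and the lever-arm length."""
    return force_n * arm_m

def power_w(torque: float, rpm: float) -> float:
    """Shaft power from torque and rotational speed."""
    omega = 2.0 * math.pi * rpm / 60.0   # shaft speed in rad/s
    return torque * omega

t = torque_nm(200.0, 0.5)       # 200 N reading on a 0.5 m arm
print(t)                        # 100.0 N*m
print(round(power_w(t, 3000)))  # ~31416 W at 3,000 rpm
```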
One type of electric dynamometer consists of a direct-current (dc) machine with the stator cradle-
mounted in antifriction bearings. The rotor is connected to the shaft of the machine under test. The
field current is introduced through flexible leads. The stator is constrained from rotating by a radial
arm of known length to which is attached a scale for measuring the force required to prevent rotation.
The torque of the connected machine is found from the product of the lever arm length and the scale
reading, after correcting the scale reading by the amount of the zero torque reading.
Common applications for dynamometers include general purpose, automotive, aircraft or aerospace,
chain or belt drives, gearboxes, fluid-power systems, gas or diesel engines, industrial, marine, transmis-
sions, and turbines. All dynamometers will typically have speed and power feedback for performance
testing and monitoring. Typical features include encoders or other speed / position sensors, torque arms,
and reaction sensors. Common dynamometer interfaces include integral control console, separate con-
sole, computer, or modem or remote control. Features common to dynamometers include PID control,
flow control or throttling, data acquisition or logging, alarms, motor power analysis, and engine exhaust
analysis.

17.6 MOTOR AND ENGINE-TESTING DYNAMOMETERS

Motor and engine-testing dynamometers apply braking or drag resistance to motor rotation and mea-
sure torque at various speeds and power input levels. These devices measure the output torque of
motors, engines, gearboxes, transmissions, and other rotary machines. They can include features such as
fuel and exhaust monitoring for internal combustion engines, input power analysis for electric motors,
and temperature and vibration sensing. Air dynamometers use an impeller to assess the power produced
by a jet engine or gas turbine. AC dynamometers are essentially ac motors mounted and configured to
provide drag against the motor being tested and output the resultant torque and power. DC dynamom-
eters are essentially dc motors mounted and configured to provide drag against the motor being tested
and output the resultant torque and power. Eddy-current dynamometers provide restraining torque
that increases with shaft speed. In a hydraulic or water-brake dynamometer, braking drag is applied to
the dynamometer rotor vanes via water circulating between the rotor and the stator housing. Hysteresis
dynamometers use non-contact magnetic braking to apply resistance to motor rotation. A magnetic
powder dynamometer has a friction-braking system using a magnetic-powder medium between the
rotor and the stator. With a prony or friction brake dynamometer, the braking mechanism uses friction
pads or brake shoes to engage the rotating disk or drum coupled to the motor. A combination of two
or more technologies is a tandem or combination dynamometer.
Important performance specifications to consider when searching for dynamometers include maxi-
mum power absorption, torque capacity, maximum rotary speed, and maximum linear speed on chassis
style. Maximum power absorption is the maximum rotational power the dynamometer can be subjected
to and still operate within specifications. This is typically limited by absorption or braking technology
and configuration. The torque capacity is the maximum continuous torque transmission for which
the shaft is designed. Maximum rotary speed is the maximum-rated rotational speed under load. For
chassis-style dynamometers, the maximum linear speed of the vehicle being tested is typically given in
vehicular speed units such as miles per hour.
Mounting types for dynamometers include chassis, stand or pedestal, adjustable or trunnion mount,
flange or shaft mount, and portable. In a chassis-type unit, rollers on the dynamometer support the
wheels of one or more axles. One of the rollers transmits the power from the vehicle to the dyna-
mometer for measurement of horsepower and speed. Vehicles typically drive onto the rollers and/or
the rollers lift up from a pit or recess. Environmental regulations often require a dynamometer during
exhaust emission testing. A stand or pedestal mount is a stationary mount or stand for positioning; and
may be permanent or moveable between tests. With an adjustable or trunnion mount, the dynamom-
eter can be adjusted for horizontal, vertical, or intermediate testing. This is typically achieved through
trunnion mounting so the dynamometer can pivot to the desired angle. A flange or shaft mount dyna-
mometer has a flange that couples with flange on motor or engine for direct, inline mounting. Portable
dynamometer units can be relocated and include wheeled units.
Scales and weigh modules measure static or dynamic loads for a wide range of industrial applications.
They are used to weigh small packages, the contents of hoppers, and extremely heavy loads that are
hauled by trucks or trains. Performance specifications include measurement type, rated load, and accu-
racy. There are three basic measurement types for scales and weigh modules: compression, shear, and
tension. Compression squeezes contents along the same axis. Shear is compression along the offset axes.
Tension weigh modules are used to convert a suspended tank or hopper into a scale. To provide reliable
measurements, mounting hardware is used to ensure that only the vertical load is measured. Rated load
is the maximum load that scales can handle without sustaining permanent damage. Accuracy is the limit
tolerance or average deviation between the actual output and the theoretical output.
Scales and weigh modules provide analog outputs and differ in terms of display type and user inter-
face. Many devices can output a voltage signal or current signal in proportion to the strain on the
sensor. Common voltage ranges include 0–5 VDC and 1–5 VDC. The most common analog current
loop is 4 –20 mA. Devices with a switch or relay that operates at set point are also available. Scales and
weigh modules display values with analog meters, digital readouts, or video display terminals. Analog
meters include a needle or light emitting diode (LED). Digital readouts are numerical or application-
specific. Video display terminals (VDT) include cathode ray tubes (CRT) and flat panel displays (FPD).
Some scales and weigh modules include an analog front panel with potentiometers, dials, and switches.
Others have a digital front panel. Larger, more complex systems can often be controlled remotely with
a computer interface and include application software.
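As an example of reading the 4–20 mA loop mentioned above, a conversion from loop current to weight might look like this (the zero-at-4-mA scaling is the usual convention; the full-scale value is assumed):

```python
# Hypothetical conversion for the 4-20 mA current loop mentioned above:
# by the usual convention, 4 mA corresponds to zero load and 20 mA to the
# scale's full-scale capacity. The 500 kg full scale is an assumed value.
def weight_from_loop(current_ma: float, full_scale_kg: float) -> float:
    """Map a 4-20 mA loop reading to a weight in kg."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA range")
    return (current_ma - 4.0) / 16.0 * full_scale_kg

print(weight_from_loop(12.0, 500.0))  # 250.0 kg at mid-scale
print(weight_from_loop(20.0, 500.0))  # 500.0 kg at full scale
```

The live-zero at 4 mA also lets the instrument distinguish a genuine zero reading from a broken loop (0 mA).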
Scales and weigh modules differ in terms of applications and features. Benchtop devices are relatively
small and measure a limited range of loads. Conveyor scales weigh items as they pass along an assembly
line. Truck, rail, and axle scales are placed under a vehicle’s tire. Floor scales align the measuring platform
with the main floor and are suitable for shipping heavy freight and animals. Dynamometers measure
the amount of power applied. Counting systems, crane scales, hopper or tank scales, weigh checks, and
general-purpose industrial scales are also available. In terms of features, some scales and weigh modules
have a built-in audible or visual alarm. Others are waterproof, washdown-capable, or ruggedized for
harsh environments.
Piezoelectric devices generate electrical signals in response to vibrations and produce mechanical
energy in response to electrical signals. There are several basic types of piezoelectric devices. Piezo-
electric actuators produce a small displacement with a high force capability when voltage is applied.
They are used mainly in ultra-precise positioning and in the generation and handling of high forces or
pressures. Piezoelectric motors use a piezoelectric ceramic element to produce ultrasonic vibrations in a
stator structure. The elliptical movements of the stator are converted into the movement of a slider that
is pressed into frictional contact with the stator. Depending on the stator’s design, the resulting move-
ment can be either rotational or linear. Piezoelectric transducers convert electrical pulses to mechanical
vibrations and then convert the returned mechanical energy into electrical energy. Piezoelectric sensors
measure the electrical potential caused by applying mechanical force to a piezoelectric material. They
are used in a variety of pressure-sensing applications. Piezoelectric drivers and piezoelectric amplifiers
are power sources used to provide the high-voltage levels needed to drive other piezoelectric devices.
Selecting piezoelectric devices requires an analysis of physical and performance specifications. Typi-
cally, manufacturers specify length, diameter or height, thickness and mass as physical specifications.
Performance specifications differ by device type. For example, specifications for piezoelectric actuators
include maximum displacement, blocked force, maximum operating voltage, stiffness, resonance fre-
quency, and capacitance. For piezoelectric motors, important considerations include motor type, oper-
ating frequency, displacement, no-load speed, and capacitance. For piezoelectric sensors, performance
specifications include pressure range, accuracy, and operating temperature.
Piezoelectric devices use several types of electrical connectors. Bayonet Neill–Concelman (BNC)
connectors were designed for military applications, but are used widely in video and RF applications
to 2 GHz. They have a slotted outer conductor and a plastic dielectric that causes increasing losses at
higher frequencies. Both 50 Ω and 75 Ω BNC connectors are commonly available. American wire gauge
(AWG) connectors include connection points that accept two wires. A US standard for non-ferrous
wire conductor sizes, AWG uses the term ‘gauge’ to refer to a wire’s diameter. The higher the gauge
number, the smaller the diameter and the thinner the wire. For example, AWG 26 connectors accom-
modate wires that are 15.9 mils in diameter, while AWG 30 connectors accept wires that are 10.0 mils in
diameter. Some piezoelectric devices use LEMO® connectors, push-pull devices that lock in place for
demanding applications. LEMO is a trademark of LEMO SA. Typically, these connectors are marked
with the LEMO name and the first five characters of the part number, which represent the model, size,
and series.
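The AWG sizes quoted above follow the standard formula d(mils) = 5 × 92^((36−n)/39) for gauge number n, which reproduces the 15.9-mil and 10.0-mil figures in the text:

```python
# Standard AWG diameter formula: d(mils) = 5 * 92 ** ((36 - n) / 39) for
# gauge number n. It reproduces the diameters quoted in the text for
# AWG 26 (15.9 mils) and AWG 30 (10.0 mils).
def awg_diameter_mils(gauge: int) -> float:
    """Nominal conductor diameter, in mils, for an AWG gauge number."""
    return 5.0 * 92.0 ** ((36 - gauge) / 39.0)

print(round(awg_diameter_mils(26), 1))  # 15.9
print(round(awg_diameter_mils(30), 1))  # 10.0
```

As the text notes, a higher gauge number means a thinner wire; the formula makes the relationship explicit.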
Six-axis force and torque sensors measure the full six components of force and torque: vertical, lat-
eral, and longitudinal forces as well as camber, steer, and torque movements. Six-axis force and torque
sensors provide electrical outputs as analog current loops, analog voltage levels, frequencies, pulses,
switches and relays. Typically, analog current loops are 0–20 mA or 4 –20 mA. Most analog voltage out-
puts are 0–10 V or ± 5 V. Frequency and pulse signals include amplitude modulation (AM), frequency
modulation (FM), and pulse width modulation (PWM). With switch or relay outputs, contacts are open
or closed depending on the state of the variable being monitored. Typically, six-axis force and torque
sensors are based on strain gauges, piezoelectric devices, or optical instruments. They are also used to
monitor robotic hand movements and the performance of car and truck tires.
General specifications for six-axis force and torque sensors include sensor height, sensor weight,
and sensing technology. Typically, sensor height is measured in inches and sensor weight is measured
in pounds. There are three basic types of sensing technologies: strain gauge, piezoelectric, and optic.
With strain-gauge devices, strain-sensitive variable resistors are bonded to part of the structure, which
deforms when measurements are taken. Typically, strain gauges are used as measurement elements in
Wheatstone bridge circuits. With piezoelectric devices, compressing a piezoelectric material generates
a charge that is measured by a charge amplifier. Optical devices use photodiodes or other fiber optic
technologies to detect optical power and convert it to electrical power.
Selecting six-axis force and torque sensors requires an analysis of force and torque requirements.
There are three measurement ranges for force. X-axis force is a longitudinal measurement range, Y-
axis force is a vertical measurement range, and Z-axis force is a lateral measurement range. There are
also three measurement ranges for torque. X-axis torque is measured around the longitudinal axis,
Y-axis torque is measured around the vertical axis, and Z-axis torque is measured around the lateral axis.
Additional considerations include force-measurement accuracy, torque-measurement accuracy, oper-
ating temperature, shock rating, and vibration rating. Typically, force and torque accuracy measure-
ments are expressed as a percentage. Shock and vibration ratings are usually maximum amounts.
Using 4-arm, 350-ohm bonded foil or 500-ohm bonded semiconductor bridges, these tough stainless-steel load cells yield high accuracy and linearity in any number of industrial and research applications, with exceptional structural resistance to off-axis loading, side-loading, and other extraneous forces (see load-cell side and bending forces), and with safe overload protection for up to 50% over capacity.

17.7 STRAIN GAUGES

The resistance strain gauge is an electrical sensing device that varies its resistance as a linear function
of the strain experienced by the structural surface to which it is bonded. ‘Strain’ is the deformation of
a solid material as the result of applied forces (internal or external), and is normally expressed in units
of microinches per inch (or ‘microstrain’ ).

Fig. 17.24 Typical etched foil-gauge patterns

A typical strain gauge consists of a conductive grid pattern of etched metallic foil, mounted on a
thin base of epoxy or fiberglass. It can then be bonded to a surface in such a way that any subsequent
deformation of the surface produces a like deformation of the gauges.
When the gauge is deformed, its electrical resistance changes. This fact is explained partly by simple
geometry. That is, when a conductor is stretched lengthwise, its cross-sectional area decreases, with a
consequent increase in resistance. It is also partly explained by changes in the actual resistivity of the
gauge material when subjected to strain.
For a given amount of unit strain (ΔL/L), the gauge will undergo a corresponding change in resis-
tance (ΔR/R). The ratio of the unit change in resistance to the unit change in length is known as the
gauge factor (Fg) of the gauge:
Fg = (ΔR/R) / (ΔL/L)
Conventional foil gauges have standardized nominal resistance values of 120 and 350 ohms, and typi-
cally exhibit gauge factors between 1.5 and 3.5. In typical transducer applications, they are subjected to
full-scale design strain levels ranging from 500 to 2000 microstrain.
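The gauge-factor relation can be checked numerically. The 350-ohm gauge, Fg = 2.0, and 1000-microstrain values below are illustrative choices within the ranges just quoted, not values from the text:

```python
def delta_resistance(r_nominal: float, gauge_factor: float, microstrain: float) -> float:
    """Resistance change for a strain gauge, from dR/R = Fg * (dL/L)."""
    unit_strain = microstrain * 1e-6            # convert microstrain to dL/L
    return r_nominal * gauge_factor * unit_strain

# A 350-ohm foil gauge with Fg = 2.0 at 1000 microstrain
dr = delta_resistance(350.0, 2.0, 1000.0)
print(round(dr, 3))  # 0.7
```

A change of only 0.7 ohm in 350 ohms is why strain gauges are read out in Wheatstone bridge circuits rather than with a simple ohmmeter.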

17.7.1 Strain Gauge Transducers


In transducers, strain gauge configurations are employed to measure weight, pressure, torque, and simi-
lar phenomena, by sensing the deformation of calibrated beams, diaphragms, or other flexures to which
mechanical force is applied. Strain gauge transducers can be rugged, compact, linear, highly accurate, and
readily compensated for wide temperature ranges. They can be operated with many types of available ac
and dc instruments, and are widely used in industrial and research measurement and control systems.
Through proper flexure design and gauge placement, a linear relationship can be achieved between the
applied force and the sensed strain. The Wheatstone Bridge circuit shown in Fig. 17.25 is almost univer-
sally used in load cells and other strain gauge transducers, because it facilitates cancellation of unwanted
temperature effects. [In any reliable load cell, thermal expansion and temperature resistance effects must
be made to cancel. In particular, temperature effects on the modulus of elasticity of the flexure materials
must be compensated, using carefully trimmed temperature-sensitive resistors (Rm in Fig. 17.25).]

If the gauges within a load cell are connected in a balanced Wheatstone Bridge circuit, and are excited by a source of ac or dc voltage, the transducer will produce an electrical output which is a direct linear function of the excitation voltage and the magnitude of the applied mechanical input:

Eout(mV) = Ein(V) • K • F/100

where
K = Calibration factor (mV/V, full scale)
F = Input variable (% of full scale)

Fig. 17.25 Wheatstone bridge circuit (bridge arms R1–R4 with span-adjust resistors Rs and modulus-correction resistors Rm; excitation Ein, output Eout)

Transducer sensitivity is expressed in terms of millivolts per volt (mV/V). The exact value of K for each instrument is determined by measurement at the time of manufacture and is furnished as part of that instrument's calibration data. For conventional transducers, this value usually falls between 0.5 and 3.0.
Excitation voltage can be either ac or dc, and is usually limited by heating considerations to a maxi-
mum of 10 volts for 120-ohm bridges and 20 volts for 350-ohm bridges (although good practice dic-
tates somewhat lower values).
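The bridge-output relation Eout(mV) = Ein(V) • K • F/100 can be sketched numerically. The 10 V excitation and K = 2 mV/V below are illustrative values within the ranges the text quotes:

```python
def bridge_output_mv(excitation_v: float, k_mv_per_v: float, percent_full_scale: float) -> float:
    """Load-cell output in millivolts: Eout(mV) = Ein(V) * K * F / 100."""
    return excitation_v * k_mv_per_v * percent_full_scale / 100.0

# 10 V excitation, K = 2 mV/V, load at 50% of full scale
print(bridge_output_mv(10.0, 2.0, 50.0))   # 10.0 mV
print(bridge_output_mv(10.0, 2.0, 100.0))  # 20.0 mV at full scale
```

Note how small the full-scale signal is (tens of millivolts), which is why the pre-amplification discussed elsewhere in this book matters.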

Fig. 17.26 Strain gauge configurations (loading axis with torque T, side or 'shear' force S, and bending moment M; off-axis loading creates a bending moment M, and bending of the loading beam creates both shear S and bending moment M)

Review Questions

1. Explain the characterization of the performance of a force-measuring system.


2. Discuss the typical output characteristics of a force-measurement system.
3. “A force-measurement system will take some time to adjust fully to a change in force applied, and
creep of a force transducer.” Justify the statement.
4. Write short notes on
a. Force and load sensors
b. Load-cell applications
c. Static and dynamic load measurement
d. Types of torque measurement
e. Infrared (IR) torque sensor
5. Explain the working of a load cell with a neat sketch.
6. List the common types of load cells and explain any two of them with sketches.
7. Explain the working of quartz force sensors.
8. Explain why preloading of force rings is done.
9. Explain the use of a slip ring for measuring torque.
10. Discuss the working of
a. Torque-testing dynamometers
b. Motor and engine-testing dynamometers
18 Vibration Measurements

‘Vibrations are measured to minimize, eliminate or control the vibration and thus the
resultant noise …’
VIBRATION AND DEGREES OF FREEDOM

There are two general classes of vibrations: free and forced. Free vibration takes place when a system oscillates under the action of forces inherent in the system itself, and when external impressed forces are absent. The system under free vibration will vibrate at one or more of its natural frequencies, which are properties of the dynamic system established by its mass and stiffness distribution.

Vibration that takes place under the excitation of external forces is called forced vibration. When the excitation is oscillatory, the system is forced to vibrate at the excitation frequency. If the frequency of excitation coincides with one of the natural frequencies of the system, a condition of resonance is encountered, and dangerously large oscillations may result. The failure of major structures such as bridges, buildings, or airplane wings is an awesome possibility under resonance. Thus, in the study of vibrations, the calculation of the natural frequencies is of major importance.

Vibrating systems are all subject to damping to some degree because friction and other resistances dissipate energy. If the damping is small, it has very little influence on the natural frequencies of the system, and hence the calculations for the natural frequencies are generally made on the basis of no damping. On the other hand, damping is of great importance in limiting the amplitude of oscillation at resonance.

The number of independent coordinates required to describe the motion of a system is called the degrees of freedom of the system. Thus, a free particle undergoing general motion in space will have three degrees of freedom, and a rigid body will have six degrees of freedom, i.e., three components of position and three angles defining its orientation. Furthermore, a continuous elastic body will require an infinite number of coordinates (three for each point on the body) to describe its motion; hence, its degrees of freedom must be infinite. However, in many cases, parts of such bodies may be assumed to be rigid, and the system may be considered to be dynamically equivalent to one having finite degrees of freedom. In fact, a surprisingly large number of vibration problems can be treated with sufficient accuracy by reducing the system to one having a few degrees of freedom.

18.1 VIBRATION-MEASUREMENT SYSTEM

Measurements should be made to produce the data needed to draw meaningful conclusions from the
system under test. These data can be used to minimize or eliminate the vibration and thus the resultant
noise. There are also examples where the noise is not the controlling parameter, but rather the quality
of the product produced by the system. For example, in process control equipment, excessive vibration
can damage the product, limit processing speeds, or even cause catastrophic machine failure. The basic
measurement system used for diagnostic analyses of vibrations consists of the three system components shown in Fig. 18.1.

Fig. 18.1 Basic vibration-measurement system (vibration pickups → pre-amplifiers → processing and display equipment)

18.2 MODELING VIBRATION SYSTEM

The basic vibration model of a simple oscillatory system consists of a mass, a massless spring, and a
damper as shown in Fig. 18.2. The spring supporting the mass is assumed to be of negligible mass. Its
force–deflection relationship is considered to be linear, following Hooke’s law,
F = kx (1)
where the stiffness k is measured in newtons per metre.
The viscous damping, generally represented by a dashpot, is described by a force proportional to
the velocity, or
F = cẋ (2)
The damping coefficient c is measured in newtons per metre per second.

18.3 CONCEPT OF EQUATION OF MOTION: NATURAL FREQUENCY

Figure 18.3 shows a simple undamped spring–mass system, which is assumed to move only along the vertical direction. It has one degree of freedom (DOF), because its motion is described by a single coordinate x. When placed into motion, oscillation will take place at the natural frequency fn, which is a property of the system. We now examine some of the basic concepts associated with the free vibration of systems with one degree of freedom.

Fig. 18.2 Simple spring–mass system

Fig. 18.3 Spring–mass system and free-body diagram (unstretched position, static deflection Δ, displacement x measured from the static equilibrium position, spring force k(Δ + x), weight w)

Newton's second law is the first basis for examining the motion of the system. As shown in Fig. 18.3,
the deformation of the spring in the static equilibrium position is Δ, and the spring force kΔ is equal to
the gravitational force w acting on mass m:

kΔ = w = mg (3)

By measuring the displacement x from the static equilibrium position, the forces acting on m are
k(Δ + x) and w. With x chosen to be positive in the downward direction, all quantities (force, velocity,
and acceleration) are also positive in the downward direction.
We now apply Newton's second law of motion to the mass m:

mẍ = ΣF = w − k(Δ + x) (4)

and because kΔ = w, we obtain

mẍ = −kx (5)

It is evident that the choice of the static equilibrium position as reference for x has eliminated w, the
force due to gravity, and the static spring force kΔ from the equation of motion. The resultant force
on m is simply the spring force due to the displacement x.
We define the circular frequency ωn by the equation

ωn² = k/m (6)

Equation (5) can be written as

ẍ + ωn²x = 0 (7)

and we conclude that the motion is harmonic. Equation (7), a homogeneous second-order linear
differential equation, has the following general solution:

x = A sin ωnt + B cos ωnt (8)

where A and B are the two necessary constants. These constants are evaluated from the initial conditions
x(0) and ẋ(0), and Eq. (8) can be shown to reduce to

x = (ẋ(0)/ωn) sin ωnt + x(0) cos ωnt (9)

The natural period of the oscillation is established from ωnτ = 2π, or

τ = 2π √(m/k) (10)

and the natural frequency is

fn = 1/τ = (1/2π) √(k/m) (11)

These quantities can be expressed in terms of the static deflection Δ by observing Equation (3),
kΔ = mg. Thus, Equation (11) can be expressed in terms of the static deflection Δ as

fn = (1/2π) √(g/Δ) (12)

Note that τ, fn and ωn depend only on the mass and stiffness of the system, which are properties
of the system.
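Equations (10) to (12) translate directly into a short numerical check. The mass and stiffness values below are illustrative assumptions, not from the text:

```python
import math

def natural_frequency_hz(mass_kg: float, stiffness_n_per_m: float) -> float:
    """fn = (1/2*pi) * sqrt(k/m), as in Eq. (11)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

def natural_frequency_from_deflection_hz(static_deflection_m: float, g: float = 9.81) -> float:
    """fn = (1/2*pi) * sqrt(g/delta), as in Eq. (12)."""
    return math.sqrt(g / static_deflection_m) / (2 * math.pi)

m, k = 2.0, 8000.0                 # 2 kg mass on an 8 kN/m spring (illustrative)
delta = m * 9.81 / k               # static deflection from Eq. (3), k*delta = m*g
# Both routes give the same answer, since they differ only by k*delta = m*g:
print(round(natural_frequency_hz(m, k), 3),
      round(natural_frequency_from_deflection_hz(delta), 3))  # 10.066 10.066
```

This agreement is the practical point of Eq. (12): measuring the static sag of a spring–mass system is enough to estimate its natural frequency.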

18.4 VIBRATION-MEASUREMENT SYSTEM ELEMENTS

18.4.1 Transducers for Vibration Analyses


In general, the transducers employed in vibration analyses convert mechanical energy into electrical energy,
that is, they produce an electrical signal, which is a function of mechanical vibration. In the following section,
both velocity pickups and accelerometers mounted or attached to the vibrating surface will be studied.

18.4.2 Velocity Pickups


The electrical output signal of a velocity pickup is proportional to the velocity of the vibrating mecha-
nism. Since the velocity of a vibrating mechanism is cyclic in nature, the sensitivity of the pickup is
expressed in peak millivolts/cm/s and thus is a measure of the voltage produced at the point of maxi-
mum velocity. The devices have very low natural frequencies and are designed to measure vibration
frequencies that are greater than the natural frequency of the pickup.
Velocity pickups can be mounted in a number of ways; for example, they can be stud-mounted
or held magnetically to the vibrating surface. However, the mounting technique can vastly affect the
pickup’s performance. For example, the stud-mounting technique shown in Fig. 18.4(a), in which the
pickup is mounted flush with the surface and silicone grease applied to the contact surfaces, is a good
reliable method. The magnetically mounted pick-up, as shown in Fig. 18.4(b), on the other hand, in

general has a smaller usable frequency range than the stud-mounted pickup. In addition, it is important
to note that the magnetic mount, which has both mass and springlike properties, is located between
the velocity pickup and the vibrating surface and, thus, will affect the measurements. This mounting
technique is viable, but caution must be employed when it is used.
The velocity pickup is a useful transducer because it is sensitive and yet rugged enough to with-
stand extreme industrial environments. In addition, velocity is perhaps the most frequently employed
measure of vibration severity. However, the device is relatively large and bulky, is adversely affected by
magnetic fields generated by large ac machines or ac current-carrying cables, and has somewhat limited
amplitude and frequency characteristics.

Fig. 18.4 Two transducer-mounting techniques [(a) Stud-mounted pickup, flush with the surface, with silicone grease applied; (b) Magnetically held velocity pickup]

18.4.3 Accelerometers
The accelerometer generates an output signal that is proportional to the acceleration of the vibrating
mechanism. This device is, perhaps, preferred over the velocity pickup, for a number of reasons. For
example, accelerometers have good sensitivity characteristics and a wide useful frequency range. They
are small in size and light in weight and, thus, are capable of measuring the vibration at a specific point
without, in general, loading the vibrating structure. In addition, the devices can be used easily with elec-
tronic integrating networks to obtain a voltage proportional to velocity or displacement. However, the
accelerometer mounting, the interconnection cable, and the instrumentation connections are critical
factors in measurements employing an accelerometer. The general comments made earlier concerning
the mounting of a velocity pickup also apply to accelerometers.
Some additional suggestions for eliminating measurement errors when employing accelerometers for
vibration measurements are shown in Fig. 18.5(a). Note that the accelerometer mounting employs an isola-
tion stud and an isolation washer. This is done so that the measurement system can be grounded at only one
point, preferably at the analyzer. An additional ground at the accelerometer will provide a closed (ground)
loop, which may induce a noise signal that affects the accelerometer output. The sealing compound applied
at the cable entry into the accelerometer protects the system from errors caused by moisture.

The cable itself should be glued or strapped to the vibrating mechanism immediately upon leaving the
accelerometer, and the other end of the cable, which is connected to the preamplifier, should leave the
mechanism under test at a point of minimum vibration. This procedure will eliminate or at least minimize
cable noise caused by dynamic bending, compression, or tension in the cable. Accelerometers for the mea-
surement of acceleration, shock or vibration come in many types using different principles of operation.

Fig. 18.5(a) Mounting technique for eliminating selected measurement errors (accelerometer on an isolation stud with isolation washer; sealing compound at the cable entry)

1. Piezoelectric Principle The active element of an accelerometer is a piezoelectric material.


Figure 18.6 illustrates the piezoelectric effect with the help of a compression disk. A compression disk
looks like a capacitor with the piezoceramic material sandwiched between two electrodes. A force-
applied perpendicular to the disk causes a charge production and a voltage at the electrodes.

Fig. 18.5(b) Accelerometer and its accessories



The sensing element of a piezoelectric accelerometer consists of two major parts:


• Piezoceramic material
• Seismic mass
One side of the piezoelectric material is connected to a rigid post at the sensor base. The so-called seis-
mic mass is attached to the other side. When the accelerometer is subjected to vibration, a force is gen-
erated which acts on the piezoelectric element (refer Fig. 18.6). According to Newton’s law, this force
is equal to the product of the acceleration and the seismic mass. By the piezoelectric effect, a charge
output proportional to the applied force is generated. Since the seismic mass is constant, the charge,
output signal is proportional to the acceleration of the mass.

q = d33 • F

u = (d33/e33) • (d/A) • F

where A = electrode area, d = disk thickness, F = applied force, q = charge, u = voltage, and d33, e33 are piezoelectric constants.

Fig. 18.6 Piezoelectric effect, basic calculations
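The compression-disk relations of Fig. 18.6 can be evaluated numerically. The PZT-like constants below (d33 ≈ 400 pC/N, e33 ≈ 1.5 × 10⁻⁸ F/m) and the disk geometry are rough assumed values, not from the text:

```python
def piezo_disk_outputs(force_n: float, d33: float, e33: float,
                       thickness_m: float, area_m2: float):
    """Charge q = d33 * F and open-circuit voltage u = (d33/e33) * (d/A) * F
    for a piezoelectric compression disk."""
    q = d33 * force_n                                    # charge in coulombs
    u = (d33 / e33) * (thickness_m / area_m2) * force_n  # voltage in volts
    return q, u

# Assumed PZT-like constants: d33 = 400 pC/N, e33 = 1.5e-8 F/m
q, u = piezo_disk_outputs(force_n=10.0, d33=400e-12, e33=1.5e-8,
                          thickness_m=1e-3, area_m2=1e-4)
print(q, round(u, 2))  # q ≈ 4e-09 C (4 nC), u rounds to 2.67 V
```

A 10 N force thus yields only nanocoulombs of charge, which motivates the high-impedance charge amplifiers discussed in the pre-amplifier section.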

F = m • a

Charge sensitivity: Bqa = q/a
Voltage sensitivity: Bua = u/a

Fig. 18.7 Principle of a piezoelectric accelerometer (seismic mass m on piezoceramics; acceleration a of the base produces the force F acting on the piezoelectric element)

Over a wide frequency range, both sensor base and seismic mass have the same acceleration magni-
tude. Hence, the sensor measures the acceleration of the test object.
The piezoelectric element is connected to the sensor socket via a pair of electrodes. Some acceler-
ometers feature an integrated electronic circuit, which converts the high-impedance charge output into
a low-impedance voltage signal. Within the useful operating frequency range, the sensitivity is independent of frequency, apart from the limitations mentioned later.

A piezoelectric accelerometer can be regarded as a mechanical low-pass with resonance peak. The
seismic mass and the piezoceramics (plus other ‘flexible’ components) form a spring–mass system. It
shows the typical resonance behavior and defines the upper frequency limit of an accelerometer. In
order to achieve a wider operating frequency range, the resonance frequency should be increased. This is
usually done by reducing the seismic mass. However, the lower the seismic mass, the lower the sensitivity.
Therefore, an accelerometer with high resonance frequency, for example, a shock accelerometer, will be
less sensitive whereas a seismic accelerometer with high sensitivity has a low resonance frequency.
Figure 18.8 shows a typical frequency response curve of an accelerometer when it is excited by a
constant acceleration.

Fig. 18.8 Frequency response curve (relative sensitivity versus frequency; fL = lower frequency limit, f0 = calibration frequency, fr = resonance frequency)

Several useful frequency ranges can be derived from this curve


• At approximately 1/5 the resonance frequency, the response of the sensor is 1.05. This means
that the measured error compared to lower frequencies is 5 %.
• At approximately 1/3 the resonance frequency, the error is 10 %. For this reason, the ‘linear’
frequency range should be considered limited to 1/3 times the resonance frequency.
• The 3-dB limit with approximately 30 % error is obtained at approximately one-half times the
resonance frequency.

The lower frequency limit mainly depends on the chosen preamplifier. Often it can be adjusted. With
voltage amplifiers, the low frequency limit is a function of the RC time constant formed by the acceler-
ometer, cable, and amplifier input capacitance together with the amplifier input resistance.
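The usable-range fractions above, together with the RC low-frequency limit fc = 1/(2πRC) implied by the voltage-amplifier case, can be sketched as follows. The resonance frequency, input resistance, and capacitance values are illustrative assumptions:

```python
import math

def usable_limits(resonance_hz: float) -> dict:
    """Upper usable limits read off the response curve: about 5% error at
    fr/5, about 10% at fr/3, and the 3-dB point near fr/2."""
    return {"5%": resonance_hz / 5,
            "10%": resonance_hz / 3,
            "3dB": resonance_hz / 2}

def lower_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    """Low-frequency -3 dB limit set by the RC time constant of the
    accelerometer/cable capacitance and amplifier input resistance."""
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

print(usable_limits(30000.0))                # e.g. a 30 kHz resonance accelerometer
print(round(lower_cutoff_hz(1e8, 1e-9), 2))  # 100 Mohm input, 1 nF total: 1.59 Hz
```

For the assumed 30 kHz resonance, the 'linear' range thus ends near 10 kHz (fr/3), while the assumed input network limits the low end to a few hertz.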

2. Capacitive Principle Capacitive accelerometers sense a change in electrical capacitance,


with respect to acceleration, to vary the output of an energized circuit. The sensing element consists of
two parallel-plate capacitors acting in a differential mode. These capacitors operate in a bridge circuit,

along with two fixed capacitors, and alter the peak voltage generated by an oscillator when the structure
undergoes acceleration. Detection circuits capture the peak voltage, which is then fed to a summing
amplifier that processes the final output signal.
When subject to a fixed or constant acceleration, the capacitance value is also a constant, resulting in a measurement signal proportional to uniform acceleration, also referred to as dc or static acceleration.
PCB’s capacitive accelerometers are structured with a diaphragm, which acts as a mass that under-
goes flexure in the presence of acceleration. Two fixed plates sandwich the diaphragm, creating two
capacitors, each with an individual fixed plate and each sharing the diaphragm as a movable plate. The
flexure causes a capacitance shift by altering the distance between two parallel plates, the diaphragm
itself being one of the plates. The two capacitance values are utilized in a bridge circuit, the electrical
output of which varies with input acceleration.
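The differential parallel-plate arrangement can be sketched with the plate-capacitor relation C = εA/d. All numeric values below are illustrative assumptions, not PCB specifications:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def differential_caps(area_m2: float, gap_m: float, deflection_m: float):
    """Two capacitors sharing the diaphragm as the movable plate:
    a deflection x narrows one gap and widens the other."""
    c1 = EPS0 * area_m2 / (gap_m - deflection_m)
    c2 = EPS0 * area_m2 / (gap_m + deflection_m)
    return c1, c2

# 10 mm^2 plates, 10 um nominal gap, 0.5 um diaphragm deflection (assumed)
c1, c2 = differential_caps(1e-5, 10e-6, 0.5e-6)
# The bridge responds to the normalized difference, which equals x/d exactly
# for this ideal geometry:
print(round((c1 - c2) / (c1 + c2), 3))  # 0.05, i.e. x/d
```

The normalized difference (C1 − C2)/(C1 + C2) reduces algebraically to x/d, which is why the differential bridge output is linear in deflection even though each capacitance alone is not.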

3. Stroboscope Phase analysis is a powerful tool in troubleshooting rotating machinery. The vibration strobe is a uniquely designed non-contact frequency-measurement device that provides precise, instantaneous synchronization to a number of data collectors and FFT analyzers triggered by an accelerometer. Built
for rugged, portable applications, the vibration stroboscope is the perfect lightweight phase-analysis
tool. The vibration stroboscope allows for the measurement of phase without stopping the machinery
to install reflective tape.
Phase analysis is quick and accurate using the filter bandwidth selector and the relative phase adjust-
ment. The vibration stroboscope is totally self-contained, weighing about 1.1 kg. It may be powered by
battery, which makes it easy to hand-hold or ideal for mounting on a tripod. The powerful xenon lamp
ensures the vibration strobe is bright enough for use in fully lighted areas.
The frequency at which light pulses are produced can be altered and read from the instrument. This
instrument uses the principle of persistence of vision to measure the frequency of a rotating body.
To do so, the required vibrating body is viewed with the stroboscope. A point on the vibrating body
appears stationary if and only if the frequency of the vibrating body and the frequency of the pulsating
light are the same. It can measure frequencies up to 15 Hz.

18.4.4 Pre-amplifiers
The second element in the vibration measurement system is the pre-amplifier. This device, which may
consist of one or more stages, serves two very useful purposes––it amplifies the vibration pickup signal,
which is in general very weak, and it acts as an impedance transformer or isolation device between the
vibration pickup and the processing and display equipment.
Recall that the manufacturer provides both charge and voltage sensitivities for accelerometers.
Likewise, the pre-amplifier may be designed as a voltage amplifier in which the output voltage is pro-
portional to the input voltage, or a charge amplifier in which the output voltage is proportional to the
input charge. The difference between these two types of pre-amplifiers is important for a number

of reasons. For example, changes in cable length (i.e., cable capacitance) between the accelerometer
and preamplifier are negligible when a charge amplifier is employed. When a voltage amplifier is used
however, the system is very sensitive to changes in cable capacitance. In addition, because the input
resistance of a voltage amplifier cannot in general be neglected, the very low frequency response of
the system may be affected. Voltage amplifiers, on the other hand, are often less expensive and more
reliable because they contain fewer components and thus are easier to construct.

18.4.5 Processing and Display Equipment


The instruments used for the processing and display of vibration data are, with minor modifications, the
same as those described earlier for noise analyses. The processing equipment is typically some type of
spectrum analyzer. The analyzer may range from a very simple device, which yields, for example, the rms
value of the vibration displacement, to one that yields an essentially instantaneous analysis of the entire
vibration frequency spectrum. As discussed earlier, these analyzers, which are perhaps the most valuable
tool in a vibration study, are typically either a constant-bandwidth or constant-percentage-bandwidth
type of device. They normally come equipped with some form of graphical display, such as a cathode
ray tube, which provides detailed frequency data.

Fig. 18.9 Vibration measurement of aircraft components

18.4.6 Shakers and Vibration-and-Shock-Testing Equipment


Shakers, and vibration-and-shock-testing equipment are force generators or transducers that provide
a vibration, shock or modal excitation source for testing and analysis. Shakers are used to determine
product or component performance under vibration or shock loads, detect flaws through modal analy-
sis, verify product designs, measure structural fatigue of a system or material or simulate the shock or
vibration conditions found in aerospace, transportation or other areas.

Fig. 18.10 Vibration-generation equipment (vibration exciters)

Shakers can operate under a number of different principles. Mechanical shakers use a motor with
an eccentric on the shaft to generate vibration. Electrodynamic models use an electromagnet to
create force and vibration. Hydraulic systems are useful when large force amplitudes are required,
such as in testing large aerospace or marine structures or when the magnetic fields of electrodynamic generators cannot be tolerated. Pneumatic systems, known as ‘air hammer tables’, use pressurized air to drive a table. Piezoelectric shakers work by applying an electrical charge and voltage to a sensitive piezoelectric crystal or ceramic element to generate deformation and motion.
Common features of shakers are an integral slip table and active suspension. An integral slip table allows
horizontal or both horizontal and vertical testing of samples. The slip table is a large flat plate that rests
on an oil film placed on a granite slab or other stable base. An active suspension system compensates
for environmental or floating platform variations.
The most important specifications for shakers are peak sinusoidal force, frequency range, displace-
ment, peak acceleration and peak velocity. Some of these specifications may be ratings without a load,
as the manufacturers cannot always predict how the shakers will be used.
The three main test modes shakers can have are random vibration, sine-wave vibration and shock
or pulse mode. In a random-vibration test mode, the force and velocity of the table and test sample
will vary randomly over time. A sine-wave test mode varies the force and velocity of the table and
test sample sinusoidally over time. In a shock-test mode, the test sample is exposed to high-amplitude
pulses of force.

Review Questions
1. Discuss the general classes of vibrations.
2. Justify the statement ‘In the study of vibrations, the calculation of the natural frequencies is of
major importance’.
3. Describe a basic vibration measurement system.
4. Explain basic vibration model with an example.
5. Explain the construction, working and applications of velocity pickups.
6. Explain the construction, working and applications of accelerometers.
7. Explain the different principles of operation of accelerometers with the help of a neat sketch.
8. Explain how the stroboscope can play the role of the instrument to be used for vibration measure-
ment.
9. Discuss the vibration processing and display equipment.
10. Explain in brief shakers, and vibration-and-shock-testing equipment.
11. Write short notes on
a. Piezoelectric principle of operation of accelerometers
b. Stroboscope
c. Shakers, and vibration-and-shock-testing equipment
19 Pressure Measurement

‘With the steam age came the demand for pressure-measuring instruments; pressure can
be expressed relative to various zero references …’
PRESSURE-MEASURING INSTRUMENTS

With the steam age came the demand for pressure-measuring instruments. Pressure gauges are used for a variety of industrial and application-specific pressure-monitoring applications. Their uses include visual monitoring of air and gas pressure for compressors, vacuum equipment, process lines and specialty tank applications such as medical gas cylinders and fire extinguishers. In addition to visual indication, some pressure gauges are configured to provide electrical output of indicated pressure and monitoring of other variables such as temperature. Bourdon tubes or bellows, in which mechanical displacements were transferred to an indicating pointer, were the first pressure instruments, and are still in use today.

Pressure metrology is the technology of transducing pressure into an electrical quantity. Normally, a diaphragm construction is used with strain gauges, either bonded to, or diffused into it, acting as resistive elements. Under the pressure-induced strain, the resistive values change. In capacitive technology, the pressure diaphragm is one plate of a capacitor that changes its value under pressure-induced displacement.

It is important to select a pressure range that accommodates all anticipated pressure swings, and which prevents excessive needle movement. It is recommended to confine normal operating pressure to 25% to 75% of the scale. With fluctuating pressure (e.g., pulsation by a pump or compressor), the maximum operating pressure should be lower (50% of the full range). Choices for pressure-gauge measurement ranges include positive pressure, vacuum measurement, compound measurement, differential pressure, absolute pressure, and sealed pressure. A positive pressure gauge measures a pressure range from zero pressure to a higher, positive pressure. Vacuum measurement switches measure vacuum pressure (negative pressure). A compound pressure gauge measures a pressure range from negative pressure (vacuum) to positive pressure. Differential pressure gauges give the relative pressure between two points. If both operating pressures are the same, the measuring element cannot move and no pressure will be indicated. A differential pressure is indicated when one pressure is higher or lower. Low differential pressures can be measured directly in cases of high static pressures. Absolute gauges are used where pressures are to be measured independently of the natural fluctuations in atmospheric pressure. The pressure of the media to be measured is compared against a reference pressure of absolute zero (absolute vacuum) in a sealed reference chamber. Sealed-gauge pressure measurement is similar in concept to an absolute pressure gauge, except that the pressure of the media to be measured is compared to standard atmospheric pressure (at sea level).

Display types available for pressure gauges include digital readouts, analog meters and needles, and graphical and video displays. The pressure range to be measured is an important specification to consider when searching for pressure gauges. Accuracy of the pressure gauge is measured as a per cent of full scale and, in cases where the accuracy differs between middle span and the first and last quarters of the scale, the largest percentage error is reported. The temperature level to which a gauge will be exposed must be considered. Gauges with welded joints will withstand 750°F; those with silver-brazed joints, 450°F; and those with soft-soldered joints, 150°F, for short times without rupture. Other parts of the gauge may be destroyed, however, and calibration lost.

On an analog pressure gauge, the scale can be single or dual. A single scale displays one unit only; a dual scale displays two units on the same face. Scale units can be PSI, kPa, bar, inches Hg, cmHg, feet H2O, inches H2O, oz/in2, and kg/cm2. Common features of pressure gauges include adjustable pointers, maximum and minimum pointers, adjustable and stationary set hands, throttling devices, and electric contacts. An adjustable pointer has adjustment to zero the pointer via a screw or knob. A maximum and minimum pointer indicates the maximum or minimum pressure attained. An adjustable or stationary set hand is a separate pointer (hand) to indicate a specifically set pressure. A throttling device is used to reduce pressure impact and pointer movement caused by pressure pulsation and/or vibration. The throttling effect is obtained by installing a restricting orifice between the gauge socket connection and the Bourdon tube. Some types of throttling devices are throttling screws, pulsation dampeners, elastomeric bladders, pressure snubbers and needle valves. Electric contacts are used to turn on signal lights, sound alarms, and operate a pump, valve, etc.

Some of the important types of pressure gauges are listed here, viz., digital gauge, differential pressure gauge, water-pressure gauge, air-pressure gauge, absolute pressure gauge, reading vacuum gauge, automotive vacuum gauge, bladder pressure tank, thermistor vacuum gauge, high-pressure air tank, autovacuum gauge, digital pressure regulator, differential air-pressure gauge, Murphy gauge, tyre-pressure gauge, blood-pressure gauge, dial-thickness gauge, and air-pressure gauge face.

19.1 ZERO REFERENCE FOR PRESSURE MEASUREMENT

Pressure measurements may be expressed relative to various zero references. Absolute pressure of a fluid is referenced against a perfect vacuum. Gauge pressure is referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. Atmospheric pressure is typically about 100 kPa, but is variable with altitude and weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For gauge pressures several times larger than atmospheric pressure, this variation is small as a percentage of reading and may be ignored. Differential pressure is the difference in pressure between two points.
Examples of absolute pressure measurements include barometric pressure, altimeters, and the Manifold
Absolute Pressure (MAP) sensor used in the engine control systems of modern fuel-injected automobiles.
Examples of gauge pressure measurements include the tyre-pressure gauge and sphygmomanometer. Dif-
ferential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is
to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through
mechanical means, obviating the need for an operator or control system to watch two separate gauges and
determine the difference in readings.
Gauge pressure of vacuum is usually indicated and expressed without a negative sign, so it is equal
to the atmospheric pressure minus the absolute pressure.
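The relations above reduce to simple arithmetic. The sketch below converts between the zero references; the function names and numeric values are illustrative, not from this text.

```python
# Zero references for pressure measurement:
#   gauge  = absolute - atmospheric
#   vacuum = atmospheric - absolute   (quoted without a negative sign)
P_ATM = 101.325  # kPa, standard atmospheric pressure (assumed here; it
                 # varies with altitude and weather, as noted above)

def gauge_from_absolute(p_abs, p_atm=P_ATM):
    """Gauge pressure referenced against ambient air pressure (kPa)."""
    return p_abs - p_atm

def vacuum_from_absolute(p_abs, p_atm=P_ATM):
    """Vacuum gauge reading: atmospheric minus absolute (kPa)."""
    return p_atm - p_abs

def differential(p1, p2):
    """Differential pressure between two points (kPa)."""
    return p1 - p2

# An absolute pressure of 321.325 kPa reads 220 kPa on a gauge:
print(round(gauge_from_absolute(321.325), 3))  # 220.0
# A chamber at 80 kPa absolute shows 21.325 kPa of vacuum:
print(round(vacuum_from_absolute(80.0), 3))    # 21.325
```

Note that a differential gauge performs exactly the subtraction in `differential()` mechanically, which is why no operator needs to watch two separate gauges.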

19.1.1 Units of Pressure


The SI unit of pressure is the pascal (abbreviation Pa). Atmospheric pressure is usually stated using its decimal multiple, the kilopascal (kPa), where 1 kPa is close to 1.0% of the earth’s atmospheric pressure at sea level. In meteorological reports, hPa or mbar are the commonly used units. In vacuum systems, the equivalent units torr and millimetre of mercury (mmHg) are also used, with 1 torr equaling 133.3223684 Pa above an ideal vacuum.

Other vacuum units occasionally encountered in the literature include micrometres of mercury and the barometric scale; vacuum may also be stated as a percentage of atmospheric pressure in bars or atmospheres. Low vacuum is measured in the United States also in inches of mercury (inHg) below atmospheric pressure. ‘Below atmospheric’ means that the absolute pressure is equal to the atmospheric pressure (29.92 inHg) minus the vacuum pressure in inches of mercury. (This is effectively a gauge pressure.) Thus, a vacuum of 26 inHg is equivalent to an absolute pressure of 29.92 inHg − 26 inHg = 3.92 inHg.
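These unit relationships can be checked numerically; note that the low-vacuum arithmetic works out to 29.92 − 26 = 3.92 inHg absolute. The sketch below uses the torr definition quoted above; everything else is an assumed standard value, not data from this text.

```python
PA_PER_TORR = 133.3223684   # 1 torr, as defined above
ATM_INHG = 29.92            # standard atmosphere in inches of mercury

def torr_to_pa(torr):
    """Convert torr to pascal."""
    return torr * PA_PER_TORR

def vacuum_inhg_to_absolute_inhg(vacuum_reading):
    """US low-vacuum convention: absolute = atmospheric - vacuum reading."""
    return ATM_INHG - vacuum_reading

# A vacuum of 26 inHg as an absolute pressure:
print(round(vacuum_inhg_to_absolute_inhg(26.0), 2))  # 3.92
# 1 kPa is close to 1.0% of sea-level atmospheric pressure (760 torr):
print(round(1000.0 / (760.0 * PA_PER_TORR) * 100, 2))  # 0.99
```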

19.2 DEVELOPMENT OF PRESSURE MEASUREMENT

Table 19.1 Interesting facts on pressure measurement

Year Developments

1594 Galileo Galilei, born in Pisa (Italy), obtained a patent for a machine to pump water from a river for
the irrigation of land. The heart of the pump was a syringe. Galileo found that 10 metres
was the limit to which the water would rise in the suction pump, but had no explanation for this
phenomenon. Scientists then devoted themselves to finding the cause.
1644 Evangelista Torricelli, the Italian physicist filled a 1-metre long tube, hermetically closed at one end,
with mercury and set it vertically with the open end in a basin of mercury. The column of mercury
invariably fell to about 760 mm, leaving an empty space above its level. Torricelli attributed the cause
of the phenomenon to a force on the surface of the earth, without knowing where it came from.
He also concluded that the space on the top of the tube was empty, that nothing was in there, and
called it a ‘vacuum’.

1648 Blaise Pascal, French philosopher, physicist and mathematician, heard about the experiments of Torricelli
and was searching for the reasons of Galileo’s and Torricelli’s findings. He came to the conviction that
the force, which keeps the column at 760 mm, is the weight of the air above. Thus, on a mountain, the
force must be reduced by the weight of the air between the valley and the mountain. He predicted that
the height of the column would decrease which he proved with his experiments at the mountain Puy de
Dome in central France. From the decrease he could calculate the weight of the air. Pascal also formu-
lated that this force, which he called ‘pressure’, acts uniformly in all directions.
1656 Otto von Guericke was born in Magdeburg, Germany. Torricelli’s conclusion of an empty space
or ‘nothingness’ was contrary to the doctrine of an omnipresent God and was thus attacked by
the church. Guericke developed new air pumps to evacuate larger volumes and staged a dramatic
experiment in Magdeburg by pumping the air out of two metal hemispheres which had been fitted
together with nothing more than grease. Even eight horses pulling at each hemisphere were not
strong enough to separate them.
1661 Robert Boyle, an Anglo-Irish chemist, used J-shaped tubes closed at one end to study the relation-
ship between the pressure and volume of trapped gas, and stated the law P × V = K (P: pressure,
V: volume, K: constant), which means that if the volume of a gas at a given pressure is known, the
pressure can be calculated if the volume is changed, provided that neither the temperature nor the
amount of gas is changed.
1820 Almost 200 years later, Joseph Louis Gay-Lussac, French physicist and chemist, detected that the
pressure increase of a trapped gas at constant volume is proportional to the temperature. Twenty
years later, William Thomson (Lord Kelvin) defined the absolute temperature.

Mechanical Measurement Technologies


1843 Lucien Vidie, a French scientist, invented and built the aneroid barometer, which uses a spring bal-
ance instead of a liquid to measure atmospheric pressure. The spring extension under pressure is
mechanically amplified on an indicator system. Employing the indicator method of Vidie, Eugene
Bourdon (founder of the Bourdon Sedeme Company) patented the Bourdon tube pressure gauge
for higher pressures in 1849.
Electrical Measurement Technologies

1930 The first pressure transducers were transduction mechanisms in which the movements of diaphragms,
springs or Bourdon tubes are converted into an electrical quantity: the pressure diaphragm forms part of a
capacitance, or the indicator movement drives the tap of a potentiometer.

1938 The bonded strain gauges were independently developed by E E Simmons of the California Insti-
tute of Technology and A C Ruge of Massachusetts Institute of Technology. Simmons was faster
to apply for a patent.
1955 The first foil strain gauges appeared with an integrated full resistor bridge which, when bonded on a
diaphragm, experiences opposite stresses at the centre and at the edge.


1965 The bonding connection of the gauges to the diaphragm was always the cause for hysteresis and
instability. In the 1960s, Statham introduced the first thin-film transducers with good stability and
low hysteresis. Today, the technology is a major player on the market for high pressure.

1973 William R Poyle applied for a patent for capacitive transducers on a glass or quartz basis, and Bob Bell
of Kavlico did the same on a ceramic basis a few years later, in 1979. This technology filled the gap for
lower pressure ranges (for which thin film was not suited) and is today, also with resistors on ceramic
diaphragms, the most widespread technology for non-benign media.

The Sensor Technology

1967 At the Honeywell Research Center, Minneapolis, USA, Art R Zias and John Egan applied for a patent for
the edge-constrained silicon diaphragm. In 1969, Hans W Keller applied for a patent for the batch-
fabricated silicon sensor. The technology is profiting from the enormous progress of IC technology.
A modern sensor typically weighs 0.01 grams. While all non-crystalline diaphragms have inherent hysteresis,
the hysteresis of the crystalline silicon diaphragm is not detectable by today’s means.
2000 The piezoresistive technology is the most universal one. It covers pressure ranges from 100 mbar
to 1500 bar in the absolute, gauge and differential pressure modes. The slow spread of the technology in
high-volume applications for non-benign media resulted from the inability of US companies to develop
a decent housing. In 30 years, KELLER has perfected it at costs comparable to any other technology.

19.3 MECHANICAL ANALOG PRESSURE GAUGES

There are several common types of mechanical analog pressure gauges including bellows, Bourdon
tubes, capsule elements and diaphragm element gauges. Analog pressure gauges should be selected
considering the media and ambient operating conditions. Gauge selection should take into consider-
ation the corrosive environment in which it is to operate. The media being measured must be compat-
ible with the wetted parts of the pressure instrument. Improper application can damage the analog
pressure gauge, causing failure or personal injury and property damage. Diaphragm seals (also called
gauge isolators) can be added to the system to protect the gauge from corrosive attack, and prevent
viscous or dirty media from clogging Bourdon tube analog pressure gauges.

19.3.1 Bourdon Tube


It is a non-liquid pressure-measurement device, widely used in applications where inexpensive static
pressure measurements are needed. A typical Bourdon tube [schematic (a) and actual (b)], as shown in Fig.
19.1, contains a curved tube with an oval cross section that is open to an external pressure input on one end
and is coupled mechanically to an indicating needle on the other end.
Fig. 19.1 Typical Bourdon-tube pressure gauges

The pressure of the media acts on the inside of this tube resulting in the oval cross section becoming
almost round. Because of the curvature of the tube ring, the Bourdon tube bends when tension occurs.
The end of the tube (which is not fixed) moves, thus being a measurement of the pressure. Bourdon tubes
with a number of superimposed coils of the same diameter (helical coils) are used for measuring high
pressures. In 1849, the Bourdon tube pressure gauge was patented in France by Eugene Bourdon.
In a Bourdon tube, internal linkages are simplified. The external pressure is guided into the tube
and causes it to flex, resulting in a change in curvature of the tube. These curvature changes are linked
to the dial indicator for a number readout. Alternatively, a strain-gauge circuit can be attached on the
tube to convert the pressure-induced deflections into electric voltage signals. These signals can then
be output electronically, rather than mechanically, with the dial indicator. A mercury barometer can be
used to calibrate and check Bourdon tubes.

Advantages Portable, convenient, no leveling required

Limitations Limited to static or quasi-static measurements, accuracy may be insufficient for many
applications

19.3.2 Diaphragm Pressure Gauge


Diaphragm-element analog pressure gauges combine both a chemical seal and a pressure gauge
into one unit. Pressure sensing using diaphragm technology measures the difference in pressure of
the two sides of the diaphragm. In Fig. 19.2, depending upon the relevant pressure, we use the terms
ABSOLUTE, where the reference is vacuum (first picture); GAUGE, where the reference is atmospheric pressure (second picture); or DIFFERENTIAL, where the sensor has two ports for the measurement of two different pressures (third picture).

Fig. 19.2 Pressure sensing using diaphragm
Diaphragm elements are circular shaped, convoluted membranes that are either clamped around the
rim between two flanges or welded in place. The measured media exerts a force on the diaphragm. A metal
pushrod welded to the top of the diaphragm transmits the deflection of the diaphragm to the linkage. The
linkage, in turn, translates the lateral motion of the push rod into a rotational motion of the pointer.
The pressure-loaded diaphragm can be considered as a circular plate subjected to a uniformly distrib-
uted loading, as shown in Fig. 19.3.

Fig. 19.3 Diaphragm element

The plate deflection depends upon its material properties, geometric properties, and boundary con-
ditions, and on the magnitude of the loading. Some particular results can be found in most textbooks
or handbooks on theory of plates, such as Roark’s Formulas for Stress and Strain by Young and Roark and
Formulas for Stress, Strain, and Structural Matrices by Pilkey.
It uses the elastic deformation of a diaphragm (i.e., membrane) instead of a liquid level to measure
the difference between an unknown pressure and a reference pressure. A typical diaphragm pressure
gauge contains a capsule divided by a diaphragm, as shown in Fig. 19.4. One side of the diaphragm is
open to the external targeted pressure, PExt, and the other side is connected to a known pressure, PRef.
The pressure difference, PExt − PRef, mechanically deflects the diaphragm.


Fig. 19.4 Typical diaphragm pressure gauge



The membrane deflection can be measured in any number of ways. For example, it can be detected
via a mechanically coupled indicating needle, an attached strain gauge, a linear variable differential
transformer (LVDT; see Fig. 19.5), or with many other displacement/velocity sensors. Once known,
the deflection can be converted to a pressure loading using plate theory.
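As a worked instance of that plate theory, the small-deflection result for a clamped circular plate under uniform load (tabulated in references such as Roark) gives the centre deflection w = Δp·a⁴/(64D), with flexural rigidity D = E·t³/[12(1 − ν²)]. The sketch below uses illustrative steel-diaphragm values, not data from this text, and holds only while the deflection stays below roughly the diaphragm thickness.

```python
# Centre deflection of a clamped circular diaphragm under a uniform
# pressure difference (small-deflection plate theory):
#   w_max = dp * a**4 / (64 * D),   D = E * t**3 / (12 * (1 - nu**2))

def flexural_rigidity(E, t, nu):
    """Plate flexural rigidity D (N*m) for modulus E, thickness t, Poisson nu."""
    return E * t**3 / (12.0 * (1.0 - nu**2))

def centre_deflection(dp, a, E, t, nu):
    """Maximum (centre) deflection of a clamped circular plate of radius a (m)."""
    return dp * a**4 / (64.0 * flexural_rigidity(E, t, nu))

# Illustrative steel diaphragm: E = 200 GPa, nu = 0.3, radius 10 mm,
# thickness 0.5 mm, loaded by a 50 kPa pressure difference:
w = centre_deflection(dp=50e3, a=0.010, E=200e9, t=0.5e-3, nu=0.3)
print(f"centre deflection = {w * 1e6:.2f} um")  # well below t, so linear
```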


Fig. 19.5 LVDT-based diaphragm pressure gauge

Advantages Much faster frequency response than U-tubes, accuracy up to ±0.5% of full scale,
good linearity when the deflection is no larger than the order of the diaphragm thickness

Limitations More expensive than other pressure sensors


Capsule-element analog pressure gauges consist of two circular shaped, convoluted membranes
sealed tightly around their circumference. The pressure acts on the inside of the capsule and a pointer
indicates the generated stroke movement. Pressure gauges with capsule elements are more suitable for
gaseous media and relatively low pressures.

19.3.3 U-Tube Manometer


A U-tube manometer is a pressure-measuring instrument, usually limited to measuring pressures lower
than atmospheric. The term is often used to refer specifically to liquid-column hydrostatic instruments. It
contains water or mercury in a U-shaped tube, as shown in Fig. 19.6, and is usually used to measure gas
pressure. One end of the U-tube is exposed to the unknown pressure field and the other end is connected
to a reference pressure source (usually atmospheric pressure).
By comparing the level of the liquid on both sides of the U-tube, the unknown pressure can be
obtained from fluid statics:

p + ρA g (h + Δh ) = ρB g Δh + ρC gh + pRef
⇒ p = pRef + (ρB − ρA ) g Δh + (ρC − ρA ) gh

If Fluid C is the atmosphere, Fluid B is the liquid in the U-tube (e.g., water or mercury), and Fluid
A is a gas, then we can assume that ρB ≫ ρA, ρC. The pressure contributed by the weight of gas within the
U-tube can, therefore, be neglected.

Fig. 19.6 Typical U-tube

The gauge pressure of the gas can be approximated by

p ≈ pRef + ρB g Δh
⇒ pgauge = p − pRef = ρB g Δh

To automate the pressure measurement in a mercury-filled U-tube, a Wheatstone bridge can be fabricated by connecting two external resistances to a high-resistance wire threading the interior of the U-tube, as shown in Fig. 19.7.

Fig. 19.7 U-tube pressure sensor

The resistance of the U-tube wire is proportional to its current-carrying length. The two parts of the wire external to the mercury will carry current and, therefore, will impart resistances to the circuit. However, the immersed portion of the wire carries no current, since the current will instead travel through the highly conductive mercury. The U-tube wire is effectively separated into two separate resistances, each dependent upon the wire length above the mercury. As a result, the difference in the resistance of these two wire segments will be proportional to the pressure difference across the U-tube,

Δp = c ΔR = k (ΔR/RW) ≈ ρB g Δh

where c and k = c RW are factors that can be obtained during calibration.

For an initially balanced Wheatstone bridge, the voltage output is given by

Vout = [r/(1 + r)²] (ΔR/RW) Vin

where r = RW/RRef is the efficiency of the bridge circuit.

Thus, the unknown gas pressure (with respect to the reference pressure) is proportional to the output voltage:

Δp = k (ΔR/RW) = k [(1 + r)²/r] (Vout/Vin)
Advantages Low cost, simple and reliable

Limitations Low dynamic response rate, requires time to damp out oscillations, measurement
accuracy dependent on precise leveling of U-tube, and cannot be used in weightless (0 g) environments.
The liquid in the U-tube must NOT interact with a measured fluid (be it gas or liquid). Mercury or
water-vapour contamination can occur, especially in low-pressure measurements.
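The working equations above evaluate directly. The first function applies the simplified fluid-statics result; the second inverts the Wheatstone-bridge output. All numbers are illustrative.

```python
G = 9.80665  # m/s^2, standard gravity

def utube_gauge_pressure(rho_b, dh, g=G):
    """p_gauge = rho_B * g * dh, valid when the manometer liquid is far
    denser than the measured gas and the surrounding air."""
    return rho_b * g * dh

def bridge_pressure(k, r, v_out, v_in):
    """Invert the bridge output of the automated U-tube:
    dp = k * (1 + r)**2 / r * (v_out / v_in),  k from calibration."""
    return k * (1.0 + r) ** 2 / r * (v_out / v_in)

# Water-filled U-tube (rho_B = 1000 kg/m^3) showing dh = 120 mm:
print(round(utube_gauge_pressure(1000.0, 0.120), 1))  # 1176.8  (Pa)
```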

19.3.4 Deadweight Tester


A deadweight tester is the basic primary standard used world-
wide for the accurate measurement of pressure. No other piece
of equipment can match the stability, repeatability and accu-
racy of the deadweight tester. It is ideal for calibrating pressure
gauges, transducers, transfer standards, recorders, digital calibra-
tors, etc., and can also be used to directly measure the pressure in
systems and processes where precise readings are important.

Fig. 19.8 Deadweight tester

Pressure Sensing Unit Using the well-proven piston-gauge system, which consists of a vertically mounted precision-lapped piston and cylinder assembly, accurately calibrated masses are loaded onto the piston (as shown in Fig. 19.9) which rises freely within its cylinder. These weights
balance the upward force created by the application of pressure within the system. Customized series of
deadweight testers are available, covering a wide variety of applications and ranges of pressure and vacuum.
The piston assemblies are manufactured to the very highest standards with certified accuracies traceable
to international standard laboratories such as the National Institute of Standards and Technology (NIST).
Pressure = Force / Area

Fig. 19.9 Schematic diagram of a deadweight tester

Gravity varies significantly with geographical location and this variation has a direct effect on the force of
the weights and the accuracy of the deadweight tester. Each instrument can be calibrated to local gravity. If
unspecified, instruments will be supplied and calibrated to a standard gravity of 980.665 cm/s2. Instruments
are generally supplied with an integral carrying case, making them neat, compact and easily portable. Com-
ponents are stored in the detachable lid, which also provides excellent protection from dirt and damage
when the tester is in transit or storage. Unique test station connections allow quick hand-tight sealing. A
spirit level and adjustable feet are provided to enable the operator to level the instrument. A floatation
indicator is mounted on the top plate, eliminating guesswork when floating the piston. Weights are stored
in a separate box.
In case of hydraulic deadweight testers, with an accuracy better than 0.015% of reading, dual-piston
systems allow calibration over a wide range; the piston is automatically selected without valving or
piston exchange. The overhanging weight carrier protects the carbide piston and improves rotational
spin, sensitivity and stability; water systems eliminate oil contamination; and pressure is generated by
a ram screw.
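The schematic of Fig. 19.9 reduces to P = F/A = m·g/A, with g corrected for the location as described above. A sketch with illustrative piston data:

```python
G_STANDARD = 9.80665  # m/s^2 (the standard gravity of 980.665 cm/s^2)

def deadweight_pressure(mass_kg, piston_area_m2, g=G_STANDARD):
    """Pressure generated by calibrated masses on the piston: P = m*g/A."""
    return mass_kg * g / piston_area_m2

# 10 kg of calibrated masses on a 1 cm^2 (1e-4 m^2) effective piston area:
p_std = deadweight_pressure(10.0, 1e-4)
print(f"{p_std / 1e5:.2f} bar")  # 9.81 bar at standard gravity

# The same load calibrated to a (hypothetical) lower local gravity
# reads slightly less, which is why each instrument is calibrated locally:
p_local = deadweight_pressure(10.0, 1e-4, g=9.7864)
```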

19.4 LOW PRESSURE (VACUUM) MEASUREMENT

19.4.1 McLeod Gauge


A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the
pressure is a few mmHg. The gas must be well-behaved during its compression (it must not condense,
for example). The technique is slow and unsuited to continual monitoring, but is capable of good accuracy. Its useful range is above 10⁻⁴ torr (roughly 10⁻² Pa).

Suppose that initial pressure and volume in a McLeod gauge are given by
P1 = Pi
V1 = V + A·h0
where, V is the reservoir volume and A is the cross-sectional
area of the sealed tube, as shown in Fig. 19.10.
Suppose that the final compressed pressure and volume are given by
P2 = Pgauge
V2 = A·h
According to Boyle’s law, we have
Pi ⋅(V + A ⋅ h0 ) = Pgauge ⋅ A ⋅ h
For a typical manometer, Pgauge = P − PRef = ρ gh − Pi . The
unknown pressure Pi can be reduced to a function of the height
difference h:
Pi = ρ g h² / [V + A (h0 − h)]

Fig. 19.10 McLeod gauge
Furthermore, the volume of the reservoir is usually much larger than the tube:

V ≫ A (h0 − h)

This allows us to drop the area term, resulting in a simple quadratic function for the pressure:

Pi ≈ ρ g h² / V
It is considered the standard for low-pressure (vacuum) measurements, where the pressure is below 10⁻⁴ torr (10⁻⁴ mmHg, 1.33×10⁻² Pa, 1.93×10⁻⁶ psi). A McLeod gauge compresses a sample
of low-pressure gas to a sufficiently high pressure, obtains the compressed pressure from a standard
manometer, and then calculates the original low pressure through Boyle’s law. The compression is achieved by means of a dense, nearly incompressible, low-vapour-pressure fluid, such as mercury. A schematic of the McLeod gauge is shown in Fig. 19.11.
The error in a typical McLeod gauge measurement is usually larger than 1% and may be much larger,
due to the possibility of gas-to-liquid (or solid) phase change during compression, and to the contami-
nation by mercury vapours.
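Under the reservoir approximation derived above, the unknown pressure follows directly from the compressed-column height. A sketch with illustrative gauge dimensions (the mercury density is an assumed nominal value):

```python
RHO_HG = 13595.0  # kg/m^3, density of mercury (assumed nominal value)
G = 9.80665       # m/s^2, standard gravity

def mcleod_pressure(h, V, rho=RHO_HG, g=G):
    """P_i ~ rho * g * h**2 / V, valid when V >> A * (h0 - h)."""
    return rho * g * h ** 2 / V

# 100 cm^3 reservoir, compressed column height h = 0.1 mm:
p = mcleod_pressure(h=1e-4, V=100e-6)
print(f"{p:.2f} Pa")  # ~13.3 Pa, i.e. about 0.1 torr
```

The quadratic dependence on h is what lets a modest column height resolve a very low original pressure.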

Advantages Can serve as a reliable benchmark, simple and reliable

Limitations Limited to static measurements, accuracy may not be high enough for some applica-
tions, cannot be used in weightless (0 g) environments, the liquid in the McLeod gauge must NOT
interact with the targeted gas, condensation of low-pressure gas to the liquid/solid phase may occur
during the compression stage, contamination by mercury vapours may occur
Fig. 19.11 Typical McLeod gauge and its measurement

19.4.2 Thermal Conductivity Gauges


When the pressure of a gas becomes low enough that the mean free path of molecules is large compared
with the pertinent dimension of the apparatus, a linear relation between pressure and thermal conductiv-
ity is predicted by the kinetic theory of gases. For a conductivity gauge, it is the spacing between the hot
and cold surfaces. Again, when the pressure is increased sufficiently, conductivity becomes independent
of gas pressure. The transition region between dependency and non-dependency of viscosity and ther-
mal conductivity on pressure is approximately in the range of 10−2 to 1 torr. The most common types of
conductivity gauges are thermocouples, resistance thermometers (Pirani) and thermistors.

1. Thermocouple Vacuum Gauge The schematic diagram of a thermocouple vacuum gauge


is shown in Fig. 19.12. In this case, the hot film is a thin metal strip whose temperature may be varied
by changing the current passing through it. For a given heating current and the gas, the temperature
assumed by the hot surface depends on pressure; this temperature is measured by a thermocouple welded
to the hot surface. The cold surface here is the glass tube, which is usually near room temperature. Often
the accuracy of such gauges is not high enough to warrant the measurement or correction for changes in
room temperature. Thermocouple gauges of one type or another are available to measure in the range
of 10⁻⁴ to 1 torr.
Fig. 19.12 Thermocouple vacuum gauge

2. Resistance Thermometer (Pirani Gauge) In this case, the functions of heating and
temperature measurement are combined in a single element. A construction is shown in Fig. 19.13.
The resistance element is in the form of four coiled tungsten wires connected in parallel and supported
inside a glass tube to which the gas is admitted. Again, the cold surface is the glass tube. Two identical
tubes generally are connected in a bridge circuit, as shown in the second figure of 19.13. One of the
tubes is evacuated to a very low pressure and then sealed off while the other has the gas admitted to it.
The evacuated tube acts as a compensator to reduce the effect of bridge-excitation voltage changes and
the temperature changes on the output reading. Current flowing through the measuring elements heats it
to a temperature depending upon the gas pressure. The electrical resistance of the element changes with
the temperature, and this resistance change causes a bridge unbalance. Generally, the bridge is used as a
deflection rather than a null device. To balance the bridge initially, the pressure in the measuring element is
made very small and the balance pot is set for zero output. Any changes in the pressure will cause a bridge
unbalance. This gauge covers the range of measurement from 10⁻⁵ to 1 torr.
Thermistor vacuum gauges operate on the same principle as that of a Pirani gauge except that the
resistance elements are temperature-sensitive semiconductors called thermistors, rather than metals
such as tungsten or platinum.
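The bridge behaviour described above can be sketched numerically. The linear temperature coefficient of resistance and all circuit values below are illustrative assumptions, not data from this text:

```python
def element_resistance(r0, alpha, delta_t):
    """Resistance of a heated element: R = R0 * (1 + alpha * dT)."""
    return r0 * (1.0 + alpha * delta_t)

def bridge_output(v_exc, r_meas, r_comp, r_fixed):
    """Deflection output of a bridge with the measuring and compensating
    elements in adjacent arms and two equal fixed resistors."""
    return v_exc * (r_meas / (r_meas + r_fixed) - r_comp / (r_comp + r_fixed))

# Tungsten elements, R0 = 100 ohm, alpha = 0.0045 /K.  Gas admitted to the
# measuring tube conducts heat away, so that element runs cooler than the
# sealed compensating element and the bridge unbalances:
r_comp = element_resistance(100.0, 0.0045, 150.0)  # sealed: stays hot
r_meas = element_resistance(100.0, 0.0045, 120.0)  # cooled by the gas
print(round(bridge_output(5.0, r_meas, r_comp, 100.0), 4))  # ~ -0.0993 V
```

The sign and size of the unbalance voltage track how strongly the gas cools the measuring element, which is the pressure-dependent quantity.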
Fig. 19.13 Pirani gauge

19.5 DIGITAL PRESSURE GAUGES

Digital pressure gauges are devices that convert applied pressure into signals. Readouts are then displayed
numerically. Many pressure-gauging technologies are available. Devices that use mechanical deflection
include an elastic or flexible element such as a diaphragm that responds to changes in pressure. Digital
pressure gauges that include a bridge circuit also use a diaphragm, but only to detect changes in capaci-
tance. Typically, strain gauges or strain-sensitive variable resistors are used as elements in Wheatstone
bridge circuits that perform measurements. Other digital pressure gauges use pistons, vibrating ele-
ments, MicroElectroMechanical Systems (MEMS), or thin films to sense changes in pressure. Some
devices use piezoelectric sensors to measure dynamic and quasi-state pressure. Generally, these sensors
have two modes: charge and voltage. Charge mode generates a high-impedance charge and voltage
mode uses an amplifier to convert the high-impedance charge into a low-impedance output voltage.
Digital pressure gauges are capable of performing various pressure measurements and displaying
amounts in different units. Absolute pressure is a pressure measurement that is relative to a perfect
vacuum. Typically, vacuum pressures are lower than the atmospheric pressure. Gauge pressure, the
most common type of pressure measurement, is relative to the local atmospheric pressure. By contrast,
sealed gauge pressure is relative to one atmosphere of pressure at sea level. Differential pressure
reflects the difference between two input pressures. In terms of units, some digital pressure gauges
display measurements in pounds per square inch (PSI), kilopascals, bars or millibars, inches or centimetres
of mercury, or inches or feet of water. Other devices display measurements in ounces per square inch
or kilograms per square centimetre.
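Since all of these units describe the same physical pressure, conversion between any two of them reduces to scaling through a common base unit. A minimal sketch (the factors are standard published values, not taken from this text):

```python
# Convert between the pressure units mentioned above by routing every
# conversion through pascals. Factors are standard reference values.

TO_PASCAL = {
    "psi": 6894.757,      # pounds per square inch
    "kPa": 1000.0,        # kilopascals
    "bar": 100000.0,
    "mbar": 100.0,
    "inHg": 3386.389,     # inches of mercury (at 0 deg C)
    "cmHg": 1333.224,     # centimetres of mercury
    "inH2O": 249.0889,    # inches of water (at 4 deg C)
    "kgf/cm2": 98066.5,   # kilograms-force per square centimetre
}

def convert_pressure(value, from_unit, to_unit):
    """Convert a pressure reading between any two supported units."""
    pascals = value * TO_PASCAL[from_unit]
    return pascals / TO_PASCAL[to_unit]
```

For example, `convert_pressure(1, "bar", "psi")` gives about 14.504, the familiar bar-to-psi factor.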
Pressure Measurement 487

Selecting digital pressure gauges requires an analysis of performance specifications and optional features. For example, devices differ in terms of maximum allowable pressure, accuracy,
vacuum range, and operating temperature. Accuracy, the difference between the true value and
the indication expressed as a per cent of the span, is often denoted by a lettered grade. Some
digital pressure gauges feature ASME B40.1 and DIN accuracy grades, or list the largest reported
percentage error. In terms of optional features, digital pressure gauges can include temperature
outputs, temperature compensation, alarm switches, and output switches that are compatible with
transistor−transistor logic (TTL). Negative pressure outputs are available for devices that measure
differential pressure.
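The definition of accuracy above (the difference between true value and indication, expressed as a per cent of span) can be computed directly. A short sketch; the gauge range below is a made-up example:

```python
# Accuracy figure as defined above: the indication error expressed as a
# per cent of the instrument span.

def percent_of_span_error(true_value, indicated_value, range_low, range_high):
    """Return |indication - true value| as a per cent of span."""
    span = range_high - range_low
    return abs(indicated_value - true_value) / span * 100.0

# A 0-to-400 kPa gauge reading 202 kPa when the true pressure is 200 kPa:
error = percent_of_span_error(200.0, 202.0, 0.0, 400.0)  # 0.5 % of span
```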
Digital pressure gauges can produce several types of electrical signals, including analog voltage and
analog current. Output signals can also be encoded via amplitude modulation (AM), frequency modulation (FM), or some other modulation scheme such as sine wave or pulse train. Serial and parallel interfaces are available, and common communication protocols include Ethernet, Fieldbus, and DeviceNet. Digital pressure gauges are used in a variety of industries and have pharmaceutical, food
processing, and automotive applications. Digital pressure gauges are also used in the containment and
monitoring of hazardous materials.

19.6 PRESSURE TRANSMITTERS

In pressure transmitters, the full signal-conditioning circuitry is integrated in the housing. The sensor signal is conditioned into standard output signals of 0...100 mV, 0...10 V, 0.5...4.5 V, and 4−20 mA.
Normally, the signal is independent of the excitation (i.e., 8...28 V), but in ratiometric transmitters, the signal is proportional to the excitation. An error band best describes the accuracy of a transmitter. This band covers all errors over the full pressure and temperature range. Typical errors are also given; the typical error describes the accuracy that can normally be expected in a measurement.

Fig. 19.14 Cross section of pressure transmitter

Fig. 19.15 Pressure transmitters
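The two output conventions above can be sketched numerically. The 4−20 mA and 0.5...4.5 V spans come from the text; the pressure range and the 10%−90%-of-supply ratiometric span are illustrative assumptions:

```python
# Sketch of transmitter output scaling. The 0-10 bar range is a made-up
# example; the 4-20 mA span itself appears in the text above.

def pressure_from_current(current_mA, p_min=0.0, p_max=10.0):
    """Linear 4-20 mA scaling: 4 mA -> p_min, 20 mA -> p_max (bar here)."""
    return p_min + (current_mA - 4.0) / 16.0 * (p_max - p_min)

def ratiometric_output(pressure, p_max, v_excitation):
    """Ratiometric transmitter: the output follows the excitation voltage.
    The 10 %-90 % span assumed here matches a 0.5...4.5 V output on 5 V."""
    fraction = 0.1 + 0.8 * (pressure / p_max)
    return fraction * v_excitation
```

Mid-scale checks: 12 mA maps to the middle of the pressure range, and half of full-scale pressure on a 5 V supply gives 2.5 V.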


19.7 MEASURING PRESSURE AT HIGH TEMPERATURES

Pressure measurement is a well-understood technology in the manufacturing industries. It is routinely performed by a variety of robust and reliable instruments. However, there is one application area where
nearly all pressure sensors fail: in high-temperature zones, at 300°C, for example. This requires special
treatment. Such high temperatures are encountered in the plastics industry, for example, where it is
necessary to measure the melt pressure on extruder machines. However, with the increasing sophistica-
tion of batch manufacturing in the food and pharmaceutical industries, the need for this special type of
pressure instrument is broadening into other markets.

Designed with silicon oxide to withstand high temperature, Gefran's MEMS sensor uses piezoresistors connected to each other in a Wheatstone bridge. Until now, the best way to measure pressure at high temperature has been to transmit the pressure from the fluid being measured to the sensor, located
some distance away from the heat, by means of a fluid transmission line.
One end of the transmission line is covered by a thin membrane and inserted
into the fluid where the pressure is to be measured. At the other end, at
a comfortable distance from the heat, is a standard pressure sensor. The transmission line is filled with a medium that is as inelastic and temperature-independent as possible; this is usually mercury or special types of oil.

Fig. 19.16 Gefran's MEMS sensor
These ‘melt-pressure’ sensors have been used for many years. Usually regarded as commodity items, they are manufactured by a number of different companies, and they do their work very well. But they have two major drawbacks. First, using mercury as a transmission fluid is considered environmentally unsound, and governmental agencies have demanded that the practice be discontinued. Second, the thin membrane (which is only about 0.1 mm thick) separating the transmission fluid from the process fluid is prone to rupture, caused by abrasion due to charged polymers on the membrane. Newer coatings have made it less vulnerable to failure, and this has improved its performance. Still, 90% of melt-pressure sensor failures are due to the collapse of the membrane.

19.8 IMPACT

After years of manufacturing melt-pressure sensors, GEFRAN engineers thought they could greatly
improve the design, and have spent several years creating a new instrument called ‘Impact.’ Impact is
radically different from the fluid transmission type of sensors. The new design requires, in the manu-
facturing process, extensive use of lasers and special alloys and the coupling of different materials like
steel and ceramics. In creating the new design, Gefran generated four patents. A major design com-
mitment was a new and highly sensitive monolithic piezoresistive sensor, made with MEMS technol-
ogy. The square silicon chip contains both the membrane and sensitive element. It is shown in Fig.
19.17 and mounted in its carrier on the front of the cylinder in Fig. 19.18. The new sensor is so sensi-
tive that its maximum deflection is of the order of one ten-thousandth of a millimetre. The engineers

Fig. 19.17 ’Impact’ melt-pressure sensor is accurate even at 350°C



also designed a much thicker membrane (1.5 mm) to come into contact with the process fluid, but instead of transmitting the pressure value by a liquid such as oil or mercury, a solid ‘push rod’ was designed to do the job. The membrane and push rod are indicated in Fig. 19.19; the sensor is mounted just behind the push rod and connected to it with a special connector so that the two may be separated during the installation phases on the machine.

Fig. 19.18 Chip mounted on its carrier

The resulting sensor package has impressive specifications: it measures pressures from 100 to 1000 bar at operating temperatures up to 350°C, with a degree of accuracy of 0.25% full scale. The greater thickness of the membrane, 10 to 15 times thicker than membranes on previous instruments, is the key to the long life of Impact. There are no longer any concerns about the wear and tear of the membrane due to charged polymers.


Fig. 19.19 Process contact membrane and push rod

19.9 CASE STUDY OF PRESSURE MEASUREMENT AND MONITORING

19.9.1 Vehicle Tyre-Pressure Monitoring


Tyre-pressure monitoring systems ( TPM or TPMS) were implemented a number of years ago as a
factory-installed feature found only on high-end vehicles. TPMS, as an embedded electronic system, is
expected to be standard equipment in the next few years.

System Overview To do real-time sensing of the exact pressure inside the tyre, the sensing
device must be located in the tyre. This pressure-measurement information must then be carried to
the driver and displayed in the cabin of the car. The remote-sensing module comprises a pressure sensor, a signal processor, and an RF transmitter. The system must compensate for pressure variations due to temperature. Hence, a temperature sensor is also required.
The power supply is provided by a long-life battery that the embedded intelligence helps to manage
as effectively as possible. The receiver could be either dedicated to TPM use, or shared with the other
functions in the car.

Fig. 19.20 Remote sensing module

Remote Sensing Module (RSM) Once mounted in the tyre, the RSM is a stand-alone device.
Its embedded intelligence has to independently manage the sensing functions, the measurement pro-
cessing, the RF transmission, and the power management.
To address each of these functions, Motorola offers two new components as a solution. The TPMS
Sensor is an integrated monolithic chip device. It comprises both a temperature and a pressure sensor with on-board circuitry.
The second component is a microcontroller and an RF transmitter, with both chips housed in the
same package.

TPM Sensor The Motorola TPM pressure sensor uses less than 0.5 μA in standby mode. The
pressure-sensing cell is capacitive and requires a C to V (capacitance to voltage) conversion stage. The
sensor’s built-in non-volatile memory can store calibration data while the ADC allows a direct digital
serial connection to the controller. In standby mode, all analog and digital blocks are switched off,
except an internal low frequency oscillator that sends a wake-up pulse over an output pin to the control-
ler periodically.
A pressure-measurement mode allows the pressure cell, and the C to V converter to be activated.
The temperature measurement mode activates the temperature cell (a PTC resistor) and its condition-
ing block.
Finally, the read mode enables the measurements to be stored in a sampling capacitor. The read
mode activates the A to D converter and enables the controller to serially read the measurement. These
four modes are coded through two input pins controlled by the microcontroller. The coding is chosen
so as to make the standby mode coded with logic zero on both pins.
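The two-pin mode coding described above can be sketched as a small truth table. Only the standby code (logic zero on both pins) is stated in the text; the other three assignments below are illustrative guesses:

```python
# Four sensor modes selected by two input pins. (0, 0) = standby is from
# the text; the remaining three codes are assumed for illustration.

MODES = {
    (0, 0): "standby",                  # from the text: both pins low
    (0, 1): "pressure measurement",     # assumed assignment
    (1, 0): "temperature measurement",  # assumed assignment
    (1, 1): "read",                     # assumed assignment
}

def decode_mode(pin_a, pin_b):
    """Return the sensor operating mode for a given pin pair."""
    return MODES[(pin_a, pin_b)]
```

Coding standby as all-zeros is a sensible convention, since an idle controller driving both pins low then leaves the sensor in its lowest-power state.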

Microcontroller The 68HC08RF2 device was chosen for its combination of an HC08 micro
together with an RF transmitter in a single 32-pin LQFP package. The dual-chip HC08RF2 has no internal connections between the controller die and the RF die, but the pinout is optimised to shorten
the necessary external connections. The 2 Kbytes of user Flash memory with an embedded charge
pump allow designers to implement the necessary software routines to address the TPMS application’s
functional requirements.

The RF transmitter is PLL-based, addressing both ASK (amplitude modulation) and FSK (frequency modulation), and its transmission rate is configurable up to 9600 baud. With a reference quartz oscillator of 13.56 MHz, the PLL is able to generate 315, 433, and 868 MHz carriers.

System Architecture The HC08RF2 controls the sensor state by setting the different operating modes. When the sensor is set in standby mode, its internal low-frequency oscillator periodically
wakes up the controller. After each wake-up, the controller may run different and configurable tasks
according to the software program. Between two wake-up pulses, the microcontroller is in the stop
mode, all functions are disabled to minimise the power consumption, and only an external stimulus can
wake it up again.
To improve the battery management, an inertial switch can be employed to detect the parking mode.
In parking conditions, the RF transmissions can be stopped or reduced, improving power management
and reducing the data collision risk between RKE and TPM transmissions. The RSM must be as small
and lightweight as possible since it is mounted inside the tyre. An oversized RSM could result in wheel
imbalance.

Single Receiver A single receiver can be shared between both the RKE and TPM systems since
the same transmitting format is used in both. The TPM function must use as little CPU time as pos-
sible and to achieve this, a highly integrated RF receiver such as the MC33591, also called Romeo 2, is
required.
This RF receiver was developed in order to provide a comprehensive RF link that is integrable in RKE and TPM systems, with Romeo 2 at one end and the HC08RF2 at the other end. Thanks to its
embedded RF decoding and data registers, the chip minimizes the communication with the receiver
microcontroller. The MCU is not called until a valid data frame is received, validated, and stored by the
Romeo 2 device.
Tyre Identification The simplest way to perform tyre identification is the manual initialization performed in the factory, or in the garage, each time a tyre is replaced or moved (rotated). The second method is automatic identification. Using this method, the system locates each tyre automatically
by a learning procedure that is activated regularly, or upon request. Combining different information sources could be the path taken to meet these needs. TPM is, in fact, destined to become more integrated into the vehicle architecture.

Review Questions

1. List the various units of pressure used in practice.


2. Discuss different pressure scales.
3. Describe the Bourdon-tube pressure gauge.
4. Describe the diaphragm pressure gauge with the help of neat figures.

5. Describe the bellows pressure gauge with the help of neat figures.
6. Discuss the construction, working and applications of a U-Tube manometer.
7. Justify the statement, ‘Deadweight tester is the basic primary standard used worldwide for the
accurate measurement of pressure’.
8. Discuss the construction, working and applications of a McLeod gauge.
9. Compare different pressure gauges.
10. Write short notes on
a. Bourdon tube
b. Error in typical McLeod gauge measurements
c. Digital pressure gauges
d. Pressure transmitters
e. Pressure measurement at high temperatures
20 Temperature
Measurement

‘Temperature measurement is denoting physical condition of matter….’


WHAT IS TEMPERATURE?
The temperature of a body is one of the most fundamental parameters by which its degree of hotness or coldness is identified. Heat is a form of energy associated with the continuous motion of the particles of matter. This continuous motion is sensed as heat, and temperature is a measure of heat. Therefore, it is an expression denoting the physical condition of matter. Temperature is one of the most frequently used parameters for measuring and controlling industrial processes, such as metallurgical processes (e.g., heat treatment, melting, alloy-making), forging and rolling industries, chemical industries, refrigeration and air-conditioning industries, etc. Many transducers are available for temperature measurement, ranging from mercury-in-glass thermometers, bimetallic strips, thermistors, Resistance Temperature Detectors (RTDs) and thermocouples to pyrometers and optical temperature transducers.

Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One must be careful when measuring temperature to ensure that the measuring instrument (thermometer, thermocouple, etc.) is really at the same temperature as the material that is being measured. Under some conditions, heat from the measuring instrument can cause a temperature gradient, so the measured temperature differs from the actual temperature of the system. In such a case, the measured temperature will vary not only with the temperature of the system, but also with the heat-transfer properties of the system.

20.1 TEMPERATURE SCALES

Several temperature scales have been developed to provide a standard for indicating the temperatures
of substances. The most commonly used scales include the Fahrenheit, Celsius, Kelvin, and Rankine
temperature scales. The Fahrenheit (°F) and Celsius (°C) scales are based on the freezing point and
boiling point of water. The freezing point of a substance is the temperature at which it changes its

physical state from a liquid to a solid. The boiling point is the temperature at which a substance changes from a liquid state to a gaseous state. To convert a reading on any one of the Celsius, Réaumur, Fahrenheit, Kelvin, and Rankine scales to its equivalent on the others, the following equations are used.

Conversion Formulas

a °C = (4/5)a °Réaumur = [32 + (9/5)a] °F
b °Réaumur = (5/4)b °C = [32 + (9/4)b] °F
c °F = (5/9)(c − 32) °C = (4/9)(c − 32) °Réaumur
t °C = (t + 273.15) K
T K = (T − 273.15) °C = [1.80 × (T − 273.15) + 32] °F = 1.80 T °Rankine
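The conversion formulas above can be collected into one helper and checked against the fixed points of Table 20.1 (for example, water boils at 100°C, which is 212°F, 373.15 K, and 671.67 °Rankine):

```python
# The scale conversions above, starting from a Celsius reading.

def from_celsius(c):
    """Convert a Celsius reading to the other scales discussed above."""
    return {
        "F": 32.0 + 9.0 / 5.0 * c,
        "Reaumur": 4.0 / 5.0 * c,
        "K": c + 273.15,
        "Rankine": (c + 273.15) * 1.80,
    }

boiling = from_celsius(100.0)   # matches the boiling-point row of Table 20.1
```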

Table 20.1 Temperature scales

°C °Réaumur °F K °Rankine
Boiling point of water 100 80 212 373.15 671.67
(at 1 atm = 101325 Pa)
Freezing point of water 0 0 32 273.15 491.67
(at 1 atm = 101325 Pa)
Interval freezing point–boiling 100 80 180 100 180
point of water
(at 1 atm = 101325 Pa)
Triple point of water 0.01 0.008 32.02 273.16 491.69
(solid–liquid–gas
equilibrium)

The Kelvin (K) and Rankine (°R) scales given in Table 20.1 are typically used in engineering calcu-
lations and scientific research. They are based on a temperature called absolute zero. Absolute zero is a
theoretical temperature where there is no thermal energy or molecular activity. Using absolute zero as
a reference point, temperature values are assigned to the points at which various physical phenomena
occur, such as the freezing and boiling points of water.

International Practical Temperature Scale For ensuring an accurate and reproducible temperature measurement standard, the International Practical Temperature Scale (IPTS) was developed and adopted by the international standards community. The IPTS assigns the temperature numbers associated with certain reproducible conditions, or fixed points, for a variety of substances. These
fixed points are used for calibrating temperature-measuring instruments. They include the boiling point,
freezing point, and triple point.
Temperature Measurement 495

20.2 TEMPERATURE-MEASURING DEVICES

Temperature-measuring devices are classified into two major groups, temperature sensors and absolute
thermometers. Sensors are classified according to their construction. Three of the most common types of
temperature sensors are thermocouples, resistance temperature devices (RTDs), and filled systems. Typi-
cally, temperature indications are based on material properties such as the coefficient of expansion, tem-
perature dependence of electrical resistance, thermoelectric power, and velocity of sound. Calibrations for
temperature sensors are specific to their material of construction. Temperature sensors that rely on material
properties rarely have a perfectly linear relationship between the measurable property and temperature. The accuracy
of absolute thermometers does not depend on the properties of the materials used in their construction.
The temperature of an object or substance can be calculated directly from measurements taken
with an absolute thermometer. Types of absolute thermometers include the gas-bulb thermometer,
radiation pyrometer, noise thermometer, and acoustic interferometer. The gas-bulb thermometer is
the most commonly used. Temperature measuring devices can also be categorized according to the
manner in which they respond to produce a temperature measurement. In general, the response will be
either mechanical or electrical. Mechanical temperature devices respond to temperature by producing
mechanical action or movement. Electrical temperature devices respond to temperature by producing
or changing an electrical signal.

20.2.1 Factors Affecting Accuracy


There are several factors, or effects, that can cause steady-state measurement errors. These effects include

i. Stem losses and thermal shunting


ii. Radiation
iii. Frictional heating
iv. Internal heating
v. Heat transfer in surface mounted sensors

Table 20.2 Comparison of temperature scales

Comment                          Kelvin    Celsius   Fahrenheit  Rankine   Delisle   Newton   Réaumur   Rømer
Absolute zero                    0         −273.15   −459.67     0         559.725   −90.14   −218.52   −135.90
Fahrenheit's ice/salt mixture    255.37    −17.78    0           459.67    176.67    −5.87    −14.22    −1.83
Water freezes (at standard
pressure)                        273.15    0         32          491.67    150       0        0         7.5
Average human body temperature   310.0     36.8      98.2        557.9     94.5      12.21    29.6      26.925
Water boils (at standard
pressure)                        373.15    100       212         671.67    0         33       80        60
Titanium melts                   1941      1668      3034        3494      −2352     550      1334      883
Surface of the sun               5800      5526      9980        10440     −8140     1823     4421      2909

Temperature-Measuring Instruments In this chapter we discuss the following temperature-measuring instruments:

i. Thermometer
ii. Thermocouple
iii. Resistance Temperature Detectors
iv. Thermistor
v. Pyrometers

20.3 THERMOMETER

One of the most common devices for measuring temperature is the glass thermometer. This consists
of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature
increases cause the fluid to expand, so the temperature can be determined by measuring the volume
of the fluid. Such thermometers are usually calibrated, so that one can read the temperature simply by
observing the level of the fluid in the thermometer. Another type of thermometer that is not really
used much in practice, but is important from a theoretical standpoint, is the gas thermometer.
The theoretical basis for thermometers is the zeroth law of thermodynamics which postulates that
if you have three bodies, A, B and C, and if A and B are at the same temperature, and B and C are at
the same temperature then A and C are at the same temperature. B, of course, is the thermometer.
The practical basis of thermometry is the existence of triple-point cells. Triple points are conditions of pressure, volume and temperature such that three phases of matter are simultaneously present.
The temperature of the air near the surface of the earth is usually determined by a thermometer in a
Stevenson screen. The thermometers should be between 1.25 m (4 ft 1 in) and 2 m (6 ft 7 in) above the
ground as defined by the World Meteorological Organization ( WMO). The true daily mean, obtained
from a thermograph, is approximated by the mean of 24 hourly readings and may differ by 1.0°C from the average based on minimum and maximum readings.
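The two daily-mean conventions contrasted above are easy to compare directly. A sketch with made-up hourly data:

```python
# Comparing the two daily-mean conventions discussed above on a
# synthetic set of 24 hourly temperature readings.

def true_daily_mean(hourly_temps):
    """Mean of 24 hourly readings (approximates the thermograph mean)."""
    return sum(hourly_temps) / len(hourly_temps)

def min_max_mean(hourly_temps):
    """The simpler convention: average of the daily minimum and maximum."""
    return (min(hourly_temps) + max(hourly_temps)) / 2.0

# A day that is cool for 20 hours and warm for 4 hours: the min/max mean
# overstates the true mean, illustrating the discrepancy mentioned above.
day = [10.0] * 20 + [20.0] * 4
```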
20.3.1 A Mercury-In-Glass Thermometer
A mercury-in-glass thermometer is a thermometer consisting of mercury in a glass tube shown in Fig. 20.1.
Calibrated marks on the tube allow the temperature to be read by the length of the mercury within the tube,

which varies according to the temperature. To increase the sensitivity, there is usually a bulb of mercury
at the end of the thermometer which contains most of the mercury; expansion and contraction of this
volume of mercury is then amplified in the much narrower bore of the tube. The space above the mercury
may be filled with nitrogen or it may be a vacuum. The break in the column of mercury is visible.
A special kind of mercury thermometer, called a maximum
thermometer, works by having a constriction in the neck close to the
bulb. As the temperature rises, the mercury is pushed up through the
constriction by the force of expansion. When the temperature falls,
the column of mercury breaks at the constriction and cannot return to
the bulb, thus remaining stationary in the tube. The observer can then
read the maximum temperature over a set period of time. To reset the thermometer, it must be swung sharply. This is similar to the design of a medical thermometer.

Fig. 20.1 Mercury-in-glass thermometer
Mercury will solidify (freeze) at –38.83°C (–37.89°F) and so may only be used at higher temperatures.
Mercury, unlike water, does not expand upon solidification and will not break the glass tube, making
it difficult to notice when frozen. If the thermometer contains nitrogen, the gas may flow down into
the column and be trapped there when the temperature rises. If this happens, the thermometer will be
unusable until returned to the factory for reconditioning. To avoid this, some weather services require that
all mercury thermometers be brought indoors when the temperature falls to −37°C (−34.6°F). In areas
where the maximum temperature is not expected to rise above −38.83°C (−37.89°F), a thermometer
containing a mercury–thallium alloy may be used. This has a solidification (freezing) point of −61.1°C
(−78°F ). The thermometer was used by the originators of the Fahrenheit and Celsius temperature scales.
Today, mercury thermometers are still widely used in meteorology; however, in other applications they are becoming increasingly rare, as mercury is highly and permanently toxic to the nervous system, and many countries have banned them outright from medical use. Some manufacturers use a liquid alloy of gallium, indium, and tin (galinstan) as a mercury replacement.

20.3.2 Bimetallic Strip Thermometer


Bulb thermometers are good for measuring temperature accurately, but they are harder to use when
the goal is to control the temperature. The bimetallic strip thermometer, because it is made of metal, is
good at controlling things. Bimetallic thermometers use the differences in thermal expansion proper-
ties of metals to provide temperature-measurement capability. Strips of metals with different thermal
expansion coefficients are bonded together. When temperature increases, it causes the assembly to
bend. When this happens, the metal strip with the large temperature coefficient of expansion expands
more than the other strip. The angular position versus temperature relation is established by calibration
so that the device can be used as a thermometer.
The principle behind a bimetallic strip thermometer relies on the fact that different metals expand
at different rates as they warm up. By bonding two different metals together, you can make a simple
electric controller that can withstand fairly high temperatures. This sort of controller is often found in
ovens. Here is the general layout.

[Schematic: a bimetallic strip (Metal A bonded to Metal B) riveted to a base, with lead wires and a contact that the free end of the strip touches as it bends]
Fig. 20.2 Schematic diagram of bimetallic strip thermometers

Two metals make up the bimetallic strip (hence the name). In Fig. 20.2, Metal B would be chosen to expand faster than Metal A if the device were being used in an oven. In a refrigerator, we could use the opposite setup, so that as the temperature rises, Metal A expands faster than Metal B. This causes the strip to bend upward, making contact so that current can flow. By adjusting the size of the gap between
the strip and the contact, you control the temperature. We will often find long bimetallic strips coiled
into spirals. This is the typical layout of a backyard dial thermometer. By coiling a very long strip, it
becomes much more sensitive to small temperature changes. In a furnace thermostat, the same tech-
nique is used and a mercury switch is attached to the coil. The switch turns the furnace on and off.
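The on/off behaviour of the oven or furnace thermostat described above can be modelled as a simple switch with a small switching band. The set-point and hysteresis values below are illustrative assumptions, not from the text:

```python
# A minimal model of the bimetallic thermostat described above: the strip
# closes the contact below a set-point and opens it above. The hysteresis
# band (which prevents rapid on/off chattering) is an assumed detail.

def thermostat_state(temperature, set_point, hysteresis=2.0, heating_on=False):
    """Return True (heater on) or False, with a small switching band."""
    if temperature <= set_point - hysteresis:
        return True      # strip has straightened; contact closed
    if temperature >= set_point + hysteresis:
        return False     # strip has bent away; contact open
    return heating_on    # inside the band: keep the previous state
```

Widening the contact gap in the real device corresponds to raising the set-point here, which is exactly how the temperature adjustment knob works.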

20.4 THERMOCOUPLE

The thermocouple is a thermoelectric temperature sensor, which consists of two dissimilar metallic
wires, e.g., one chromel and one constantan, coupled at the probe tip (measurement junction) and
extended to the reference (known temperature) junction. The temperature difference between the probe
tip and the reference junction is detected by measuring the change in voltage (electromotive force, EMF)
at the reference junction. The absolute temperature reading can then be obtained by combining the
information of the known reference temperature and the difference of temperature between probe tip
and the reference. Thomas Seebeck made this discovery in 1821. This effect is called Seebeck effect.
All dissimilar metals exhibit this effect. The most common combinations of two metals are listed
in Table 20.4 along with their important characteristics. For small changes in temperature, the Seebeck
voltage is linearly proportional to temperature:

[Figure: two dissimilar wires, Metal A and Metal B, joined at a junction; the open ends (extended through Metal C leads) develop the Seebeck voltage eAB]

Fig. 20.3 Two-wire thermocouple layout (the Seebeck effect; eAB = Seebeck voltage)



eAB = α ΔT
where α is the Seebeck coefficient, the constant of proportionality.
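The linear relation eAB = α ΔT is easy to evaluate numerically. The coefficient below is an assumption for illustration (around 41 µV/°C is a commonly quoted figure for a type K pair near room temperature; the text itself gives no coefficients):

```python
# The linear Seebeck relation above, with an assumed coefficient.

ALPHA_TYPE_K = 41e-6  # V per deg C (approximate, near room temperature)

def seebeck_voltage(delta_t, alpha=ALPHA_TYPE_K):
    """Thermoelectric EMF for a small temperature difference delta_t."""
    return alpha * delta_t

# A 100 deg C difference gives roughly 4.1 mV:
emf = seebeck_voltage(100.0)
```

The small magnitude of the result (millivolts for a 100°C difference) is why careful voltmeter connection, discussed next, matters so much.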

20.4.1 Measuring Thermocouple Voltage


We can’t measure the Seebeck voltage directly because we must first connect a voltmeter to the
thermocouple, and the voltmeter leads themselves create a new thermoelectric circuit. Let’s connect
(refer Fig. 20.4) a voltmeter across a copper–constantan (type T ) thermocouple and look at the voltage
output:

[Figure: a DVM connected across a copper–constantan (type T) thermocouple. The voltmeter leads create junctions J2 and J3 in addition to the measuring junction J1; the equivalent circuit shows junction voltages V1, V2, and V3]

Fig. 20.4 Measuring junction voltage with a DVM

We would like the voltmeter to read only V1, but by connecting the voltmeter in an attempt to
measure the output of Junction J1, we have created two more metallic junctions: J2 and J3. Since J3 is a
copper-to-copper junction, it creates no thermal EMF (V3 = 0), but J2 is a copper-to-constantan junc-
tion which will add an emf (V2) in opposition to V1. The resultant voltmeter reading V will be propor-
tional to the temperature difference between J1 and J2. This says that we can’t find the temperature at J1
unless we first find the temperature of J2.
One way to determine the temperature of J2 is to physically put the junction into an ice bath, forcing its
temperature to be 0°C and establishing J2 as the reference junction. Since both voltmeter terminal junctions
are now copper–copper, they create no thermal emf and the reading V on the voltmeter is proportional to
the temperature difference between J1 and J2. Now the voltmeter reading is (see Fig. 20.5):
V = (V1 − V2) ≈ α (tJ1 − tJ2)
If we specify TJ1 in degrees Celsius:
TJ1 (°C) + 273.15 = tJ1

then V becomes

V = V1 − V2 = α[(TJ1 + 273.15) − (TJ2 + 273.15)]
  = α(TJ1 − TJ2) = α(TJ1 − 0)
V = α TJ1

[Figure: the copper–constantan thermocouple with its reference junction J2 held at 0°C in an ice bath; both voltmeter terminal junctions are copper–copper]

Fig. 20.5 External reference junction

We use this protracted derivation to emphasize that the ice-bath-junction output, V2, is not zero volts. It is a function of absolute temperature. By adding the voltage of the ice-point reference junction, we have now referenced the reading V to 0°C. This method is very accurate because the ice-point temperature can be precisely controlled. The ice point is used by the National Bureau of Standards (NBS) as the fundamental reference point for their thermocouple tables, so we can now refer to the NBS tables and directly convert from voltage V to temperature TJ1. The copper–constantan thermocouple shown in Fig. 20.5 is a unique example because the copper wire is the same metal as the voltmeter terminals. While using thermocouples for temperature measurement, we should take the following two considerations into account:

a. Cold-Junction Compensation While using thermocouples, it is necessary to use cold junc-


tion compensation. Theoretically, in the thermocouple arrangement, one junction must be at 0°C refer-
ence, as all charts provide output voltage considering one reference junction at 0°C. The compensation
for the cold junction not being at 0°C but at an ambient temperature is taken care of within the indicator.
However, another compensation has to be included in order to compensate for changes in ambient tem-
perature about the cold junction, as the temperature under measurement is proportional to differential
temperature between hot and cold junctions. Hence, if the cold junction is not maintained at a constant
temperature, an error would be introduced unless cold junction compensation is applied.
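The compensation idea above can be sketched numerically: the indicator adds the EMF that the cold junction would produce relative to 0°C before converting voltage to temperature. A linear Seebeck coefficient is assumed here for simplicity; real instruments use the standard thermocouple tables instead of a single constant:

```python
# Sketch of cold-junction compensation: add the cold-junction EMF
# (referenced to 0 deg C), then invert the voltage-temperature relation.
# The linear alpha is an assumed, type K-like value for illustration.

ALPHA = 41e-6  # V per deg C (assumed Seebeck coefficient)

def hot_junction_temp(measured_emf, cold_junction_temp_c, alpha=ALPHA):
    """Return the hot-junction temperature in deg C, compensating for a
    cold junction that sits at ambient rather than at 0 deg C."""
    compensated_emf = measured_emf + alpha * cold_junction_temp_c
    return compensated_emf / alpha
```

For example, a measured EMF of 3.28 mV with the cold junction at 20°C corresponds to a hot junction at 100°C; ignoring the compensation would under-read by the full 20°C.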

b. Compensating Cables Consider that we are using platinum and a 90% platinum/10% rhodium
(refer Table 20.3) thermocouple. Its output is in mV. If ordinary copper cables are used to convey the signal
to the mV/mA transducer, then the copper wires will form two more thermocouples with the two metals of the thermocouple itself. This would give an enormous error in the measurement of temperature. To
overcome this problem, a compensating cable is used to convey the signal from the thermocouple to the
mV/mA transducer. These compensating lead wires have two wires, which are exactly identical in thermal
properties of platinum and platinum/rhodium wires of the thermocouple. It must however be remem-
bered that each thermocouple has a distinct compensating cable which can only be used for that thermo-
couple. Some of the examples of thermocouple wires and lead wires are given in Table 20.3.

20.4.2 Thermocouple Junctions


Sheathed thermocouple probes are available with one of three junction types: grounded, insulated
(ungrounded) and bare wire (exposed) as shown in Fig. 20.6. At the tip of a grounded junction probe,
the thermocouple wires are physically attached to the inside of the probe wall. This results in good heat
Temperature Measurement 501

Table 20.3 Comparison of thermocouple wires and lead wires

  Thermocouple Wires                   Lead Wires
  +ve                −ve               +ve        −ve
  Chromel            Alumel            Copper     Constantan (up to 125°C)
  Chromel            Alumel            Iron       Copper–Nickel alloy
  Chromel            Alumel            Chromel    Alumel
  Iron               Constantan        Iron       Constantan
  Copper             Constantan        Copper     Constantan
  Platinum–Rhodium   Platinum          Copper     Copper–Nickel

transfer from the outside, through the probe wall to the thermocouple junction. In an ungrounded
probe, the thermocouple junction is detached from the probe wall. Response time is slower than that
of the grounded style, but the ungrounded probe offers electrical isolation of 1.5 MΩ at 500 V dc in all
diameters. The thermocouple in the exposed-junction style protrudes out of the tip of the sheath and
is exposed to the surrounding environment. This type offers the best response time, but is limited in
use to non-corrosive and non-pressurized applications.
The grounded junction is recommended for the measurement of static or flowing corrosive gas and
liquid temperatures and for high-pressure applications. The junction of a grounded thermocouple is
welded to the protective sheath giving faster response than the ungrounded junction type.
An ungrounded junction is recommended for measurements in corrosive environments where it is
desirable to have the thermocouple electronically isolated from and shielded by the sheath. The welded
wire thermocouple is physically insulated from the thermocouple sheath by MgO powder (soft).

(Figure: two thermoelectrically dissimilar metallic wires in an insulation material, shown with insulated, grounded, and bare-wire junction styles)

Fig. 20.6 Three junction styles of typical thermocouples


502 Metrology and Measurement

An exposed junction is recommended for the measurement of static or flowing non-corrosive gas
temperatures where fast response time is required. The junction extends beyond the protective metallic
sheath to give accurate fast response. The sheath insulation is sealed where the junction extends to
prevent penetration of moisture or gas, which could cause errors.

20.4.3 Common Thermocouple Specifications


Common commercially available thermocouples are specified by ISA (Instrument Society of America)
types. Type E, J, K, and T are base-metal thermocouples and can be used up to about 1000°C (1832°F).
Type S, R, and B are noble-metal thermocouples and can be used up to about 2000°C (3632°F). Because
thermocouples measure in wide temperature ranges and can be relatively rugged, they are very often
used in industry. The following criteria are used in selecting a thermocouple:
• Temperature range
• Chemical resistance of the thermocouple or sheath material
• Abrasion and vibration resistance
• Installation requirements (may need to be compatible with existing equipment; existing holes may
determine probe diameter)
Table 20.4 provides a summary of basic thermocouple properties; manufacturers' data sheets should be
consulted for more detailed specifications of individual thermocouples for their respective applications.

Advantages
i. Small units that can be mounted conveniently
ii. Rugged and inexpensive construction; hence, low cost
iii. No moving parts, less likely to be broken
iv. Wide temperature range from −270°C to 2800°C
v. Reasonably short response time
vi. Reasonable repeatability and accuracy
vii. The output is in electrical form, which is suitable for indicating and controlling devices. More-
over, these electrical signals can be transmitted over distance, and hence sensing and indicating
elements can be away from each other.

Limitations
i. Sensitivity is low, usually 50 μV/°C (28 μV/°F) or less. Its low-voltage output may be masked
by noise. This problem can be reduced, but not eliminated, by better signal filtering, shielding,
and analog-to-digital (A/D) conversion.
ii. Accuracy, usually no better than 0.5°C (0.9°F), may not be high enough for some applications.
iii. Requires a known temperature reference, usually 0°C (32°F) ice water. Modern thermocouples,
on the other hand, rely on an electrically generated reference.
iv. Non-linearity could be bothersome. Fortunately, detailed calibration curves for each wire material
can usually be obtained from vendors.
v. They cannot be used bare in conducting fluids.
Table 20.4 Common thermocouple specifications

ISA  Material (+ and −)               Temperature Range   Sensitivity @ 25°C (77°F)   Error*                           Applications**
                                      °C (°F)             μV/°C (μV/°F)
E    Chromel and Constantan           −270∼1000           60.9 (38.3)                 LT: ±1.67°C (±3°F)               I, O
     (Ni–Cr and Cu–Ni)                (−450∼1800)                                     HT: ±0.5%
J    Iron and Constantan              −210∼1200           51.7 (28.7)                 LT: ±2.2∼1.1°C (±4∼2°F)          I, O, R, V
     (Fe and Cu–Ni)                   (−350∼2200)                                     HT: ±0.375∼0.75%
K    Chromel and Alumel               −270∼1350           40.6 (22.6)                 LT: ±2.2∼1.1°C (±4∼2°F)          I, O
     (Ni–Cr and Ni–Al)                (−450∼2500)                                     HT: ±0.375∼0.75%
T    Copper and Constantan            −270∼400            40.6 (22.6)                 LT: ±1∼2%                        I, O, R, V
     (Cu and Cu–Ni)                   (−450∼750)                                      HT: ±1.5% or ±0.42°C (±0.75°F)
R    Platinum and 87% Platinum/       −50∼1750            6 (3.3)                     LT: ±2.8°C (±5°F)                I, O
     13% Rhodium (Pt and Pt–Rh)       (−60∼3200)                                      HT: ±0.5%
S    Platinum and 90% Platinum/       −50∼1750            6 (3.3)                     LT: ±2.8°C (±5°F)                I, O
     10% Rhodium (Pt and Pt–Rh)       (−60∼3200)                                      HT: ±0.5%
B    70% Platinum/30% Rhodium and     −50∼1750            6 (3.3)                     LT: ±2.8°C (±5°F)                I, O
     94% Platinum/6% Rhodium          (−60∼3200)                                      HT: ±0.5%
     (Pt–Rh and Pt–Rh)

*: LT = Low-temperature range, HT = High-temperature range
**: I = Inert media, O = Oxidizing media, R = Reducing media, V = Vacuum
Constantan, alumel, and chromel are trade names of their respective owners.

20.5 RESISTANCE TEMPERATURE DETECTORS (RTD)

The application of the property of electrical conductors to increase their electrical resistance with a rise
in temperature was first described by Sir William Siemens in his Bakerian Lecture of 1871 before the
Royal Society of Great Britain. Callendar, Griffiths, Holborn and Wien established the necessary methods
of construction between 1885 and 1900.

20.5.1 RTD’s Working Principle


RTD’s working principle is based on the fact that electrical resistance of a substance changes with
change in its temperature. This substance can be a metal, or non-metal like semiconductor. Hence, any
change in the temperature of a metal can be measured in terms of a change in its electrical resistance.
The electrical conductivity of a metal depends on the movement of electrons through its crystal lattice.
Due to thermal excitation, the electrical resistance of a conductor varies according to its temperature
and this forms the basic principle of resistance thermometry.
The effect is most commonly exhibited as an increase in resistance with increasing temperature, a
positive temperature coefficient of resistance. When utilizing this effect for temperature measurement,
a large value of temperature coefficient (the greatest possible change of resistance with temperature)
is ideal; however, stability of the characteristic over the short and long term is vital if practical use is
to be made of the conductor in question. The relationship between the temperature and the electrical
resistance is usually non-linear and described by a higher order polynomial:

R(t) = R0(1 + At + Bt² + Ct³ + …)

where R(t ) = Resistance at temperature t °C,


R0 = Resistance at 0°C,
A, B, C,….. = Constants (coefficient of resistance).
Resistance temperature thermometers are slowly replacing thermocouples in many lower temper-
ature industrial applications (below 600°C). Resistance temperature thermometers come in a number
of construction forms and offer greater stability, accuracy and repeatability. The resistance tends to
be almost linear with temperature. A small power source is required. No special extension cables or
cold-junction compensations are required as the resistance of a conductor is related to its tempera-
ture.
Materials most commonly utilized for resistance thermometers are platinum, copper and nickel.
However, platinum is the most dominant material internationally.

20.5.2 Platinum-Sensing Resistors


Platinum-sensing resistors are available with alternative R0 values, for example, 10, 25 and 100 ohms.
A working form of resistance thermometer sensor is defined in IEC and DIN specifications and this
forms the basis of most industrial and laboratory electrical thermometers. The platinum-sensing resis-
tor, Pt100 to IEC 751 is dominant in Europe and in many other parts of the world. Its advantages
include chemical stability, relative ease of manufacture, the availability of wire in a highly pure form
and excellent reproducibility of its electrical characteristic. The result is a truly interchangeable sensing
resistor, which is widely commercially available at a reasonable cost.
This specification includes the standard variation of resistance with temperature, the nominal value
with the corresponding reference temperature, and the permitted tolerances. The specified temperature
range extends from –200 to 961.78°C. The series of reference values is split into two parts: –200°C to
0 and 0 to 961.78°C. The first temperature range is covered by a third-order polynomial.
R(t) = R0(1 + A⋅t + B⋅t² + C⋅[t − 100°C]⋅t³)
For the range 0°C to 850°C, there is a second-order polynomial.


R(t) = R0(1 + A⋅t + B⋅t²)
The coefficients are as follows:
A = 3.9083 × 10⁻³ °C⁻¹
B = −5.775 × 10⁻⁷ °C⁻²
C = −4.183 × 10⁻¹² °C⁻⁴
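As a minimal sketch, the two polynomial branches above (the Callendar–Van Dusen form of IEC 751) and the inversion of the quadratic branch for t ≥ 0°C can be written directly from the quoted coefficients; the function names are illustrative.

```python
# Sketch of the IEC 751 (Callendar-Van Dusen) relation for a Pt100,
# using the coefficients quoted in the text; names are illustrative.

R0 = 100.0          # nominal resistance at 0 degC (Pt100)
A = 3.9083e-3       # degC^-1
B = -5.775e-7       # degC^-2
C = -4.183e-12      # degC^-4 (this term applies only below 0 degC)

def pt100_resistance(t):
    """R(t) in ohms over the specified range of the standard."""
    if t < 0.0:
        return R0 * (1.0 + A*t + B*t*t + C*(t - 100.0)*t**3)
    return R0 * (1.0 + A*t + B*t*t)

def pt100_temperature(r):
    """Invert the quadratic branch (valid for t >= 0 degC only):
    B*t^2 + A*t + (1 - R/R0) = 0, taking the physical root."""
    return (-A + (A*A - 4.0*B*(1.0 - r/R0))**0.5) / (2.0*B)

print(pt100_resistance(100.0))   # ~138.5 ohm, matching Fig. 20.7
print(pt100_temperature(138.5))  # ~100 degC
```

The ~138.5 Ω value at 100°C is the fundamental-interval figure discussed below (R100 − R0 ≈ 38.5 Ω, i.e., about 0.385 Ω/°C).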
The value R0 is referred to as the nominal value or the nominal resistance and is the resistance at 0°C.
According to IEC 751, the nominal value is defined as 100.00 ohms, and this is referred to as a Pt100
resistor. Multiples of these values are also used; resistance sensors of 500 and 1000 ohms are available to
provide higher sensitivity, i.e., a larger change of resistance with temperature. The resistance changes are
approximately
0.4 Ω/°C for Pt100
2.0 Ω/°C for Pt500
4.0 Ω/°C for Pt1000
An additional parameter defined by the standard specification is the mean temperature coefficient
between 0 and 100°C. R100 is the resistance at 100°C, R0 at 0°C. The resistance change over the range
0°C to 100°C is referred to as the fundamental interval.

(Graph: resistance in ohms, 100 to 400, plotted against temperature, 0 to 900°C; the Pt100 curve rises nearly linearly from 100 Ω at 0°C through about 138 Ω at 100°C)

Fig. 20.7 Resistance/temperature characteristics of Pt100

The very high accuracy demanded of primary standard resistance thermometers requires the use
of a more pure form of platinum for the sensing resistor. This results in different R0 and alpha values.
Conversely, the platinum used for Pt100 versions is ‘doped’ to achieve the required R0 and alpha
values. Platinum is usually used due to its stability with temperature. The platinum-detecting wire needs
to be kept free of contamination to remain stable. A platinum wire or film is created and supported on
a former in such a way that it gets minimal differential expansion or other strains from its former, yet
is reasonably resistant to vibration.
Commercial platinum grades are produced which exhibit a change of resistance of 0.385 ohms/°C
(European Fundamental Interval). The sensor is usually made to have 100 ohms at 0°C. This is defined
in BS EN 60751:1996. The American Fundamental Interval is 0.392 ohms/°C. Resistance thermometers
require a small current to be passed through in order to determine the resistance. This can cause self-heating
and manufacturer’s limits should always be followed along with heat-path considerations in design. Care
should also be taken to avoid any strains on the resistance temperature thermometer in its application.
Lead wire resistance should be considered and adopting three- and four-wire connection strategies
can result in eliminating connection lead-resistance effects from measurements.
Resistance temperature thermometer elements are available in a number of forms. The most common
are wire wound in a ceramic insulator––high temperatures to 850°C; wires encapsulated in glass––resists
the highest vibration and offers most protection to the platinum; and thin film with a platinum film on
a ceramic substrate, which is inexpensive and hence mass production is possible. Constructional details
are shown in Fig. 20.8.

Connection
to leads Sheath
Resistance thermometer Connection leads Insulator
Fig. 20.8 RTD construction

These elements will nearly always require insulated leads attached. At low temperatures, PVC, silicon
rubber or PTFE insulators are common to 250°C. Above this, glass fiber or ceramic are used. The mea-
suring point and usually most of the leads require a housing or protection sleeve. This is often a metal
alloy, which is inert to a particular process. Often more consideration goes into selecting and designing
protection sheaths than sensors as this is the layer that must withstand chemical or physical attack along
with offering convenient process-attachment features.

20.5.3 Standard Resistance Temperature Thermometer Data


Temperature sensors are usually supplied with thin film elements. These are rated as follows:

Table 20.5 Rating of temperature sensors

Continuous operation −70 to +500°C


Tolerance class B −70 to +500°C
Tolerance class A (1/2B) −30 to +350°C
Tolerance class 1/3B 0 to +100°C
Resistance temperature thermometer elements can be supplied which function up to 850°C. Sensor
tolerances are calculated as follows:
Table 20.6 Calculation of sensor tolerances

Class B change in t = +/− (0.3 + 0.005|t|)


Class A change in t = +/− (0.15 + 0.002|t|)
1/3 Class B change in t = +/− 1/3 × (0.3 + 0.005|t|)
1/5 Class B change in t = +/− 1/5 × (0.3 + 0.005|t|)
1/10 Class B change in t = +/− 1/10 × (0.3 + 0.005|t|)

Here |t| = absolute value of the temperature in °C. If elements have a resistance of n × 100 ohms then
the basic values and tolerances also have to be multiplied by n.
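The tolerance formulas of Table 20.6 translate directly into code. The sketch below assumes an n = 1 (100-ohm) element and uses the class names from the table; the function name is illustrative.

```python
# Direct transcription of the IEC 751 tolerance formulas in Table 20.6,
# for an n = 1 (100 ohm) element; scale the result by n otherwise.

def tolerance_c(t, cls="B"):
    """Permitted deviation in degC at temperature t for a given class."""
    t = abs(t)  # the formulas use the absolute value |t|
    if cls == "A":
        return 0.15 + 0.002 * t
    base_b = 0.3 + 0.005 * t
    if cls == "B":
        return base_b
    if cls in ("1/3B", "1/5B", "1/10B"):
        # fractional classes are simple fractions of the Class B band
        return base_b / float(cls.split("/")[1].rstrip("B"))
    raise ValueError("unknown tolerance class")

print(tolerance_c(100.0, "B"))     # 0.8 degC
print(tolerance_c(100.0, "A"))     # ~0.35 degC
print(tolerance_c(100.0, "1/3B"))  # ~0.27 degC
```

Note how the permitted band widens with temperature: at 500°C a Class B element may already deviate by ±2.8°C.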

20.5.4 Resistance Temperature Thermometer Wiring Configurations


a. Two-Wire Configuration The simplest resistance temperature thermometer configuration
uses two wires. It is only used when high accuracy is not required as the resistance of the connecting
wires is always included with that of the sensor, leading to errors in the signal. With this configuration,
up to 100 metres of cable can be used. This applies equally to balanced-bridge and fixed-bridge
systems. The values of the lead resistance can only be determined in a separate measurement without
the resistance temperature thermometer sensor and, therefore, a continuous correction during the
temperature measurement is not possible.

(Bridge circuit: the resistance element RT in one arm with R1, R2 and R3, a power supply, and the bridge output V0)

Fig. 20.9 Two-wire configuration

b. Three-Wire Configuration In order to minimize the effects of the lead resistances, a three-
wire configuration can be used. Using this method, the two leads to the sensor are on adjoining arms.
There is a lead resistance in each arm of the bridge and therefore the lead resistance is cancelled out.
High-quality connection cables should be used for this type of configuration because an assumption is
made that the two lead resistances are the same. This configuration allows for up to 600 metres of cable.

(Bridge circuit as in Fig. 20.9, with the lead resistance of the third wire placed in the adjoining bridge arm)

Fig. 20.10 Three-wire configuration

c. Four-Wire Configuration The four-wire resistance temperature thermometer configuration
further increases the accuracy and reliability of the resistance being measured. A standard two-terminal
RTD is used with another pair of wires to form an additional loop that cancels out the lead resistance.
The Wheatstone bridge method described above uses a little more copper wire and is not a perfect
solution. A better alternative configuration, shown in Fig. 20.11, provides full cancellation of spurious
effects, and cable resistance of up to 15 ohms can be handled.

(Bridge circuit as in Fig. 20.9, with two extra leads arranged so that the lead resistances appear in opposite arms and cancel)

Fig. 20.11 Four-wire configuration
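To see why the wiring configuration matters, the sketch below converts an assumed lead resistance into an equivalent temperature error using the nominal Pt100 slope of about 0.385 Ω/°C quoted earlier; the lead-resistance values are illustrative, not from any standard.

```python
# Rough sketch of the lead-resistance error: in a two-wire hookup both
# lead resistances add directly to the sensed element resistance, and
# the Pt100 slope turns the excess ohms into a temperature error.
# The lead-resistance figures below are assumed for illustration.

PT100_SLOPE = 0.385  # ohm/degC (European fundamental interval)

def two_wire_error_c(lead_resistance_ohm):
    """Temperature error caused by BOTH leads of a 2-wire connection."""
    return 2.0 * lead_resistance_ohm / PT100_SLOPE

def three_wire_error_c(lead_resistance_ohm, mismatch_ohm=0.0):
    """In a balanced 3-wire bridge, matched leads cancel; only the
    mismatch between the two leads remains as an error."""
    return mismatch_ohm / PT100_SLOPE

print(two_wire_error_c(0.5))          # ~2.6 degC for 0.5 ohm per lead
print(three_wire_error_c(0.5, 0.01))  # ~0.026 degC with matched leads
```

Even half an ohm per lead, typical of a long copper run, thus produces an error far larger than the Class A tolerance of the sensor itself, which is why three- and four-wire connections are preferred.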

20.5.5 RTD Design Characteristics


Sensor Designs The sensing element of an RTD usually consists of a wire cut to a length that
provides a predetermined resistance at 0°C. The wire may be coiled within or wound around an insulat-
ing material.

RTD Readout Instrumentation Temperature measurement with an RTD is actually a mea-
surement of the sensor’s resistance, using the sensor calibration to convert the measurement into tem-
perature. This is achieved by connecting the sensor to a transducer that has a bridge circuit, typically a
Wheatstone bridge or Mueller bridge.

RTD Accuracy Accuracy problems can occur when RTDs from different manufacturers are used
in the same system, or when an RTD from one manufacturer is replaced with an RTD from another
manufacturer. Self-heating can also affect accuracy.

RTDs for Specialized Applications Designs include averaging RTDs, annular element RTDs,
and combination RTD-thermocouples. An averaging RTD has long-resistance elements. In annular
element RTDs, the sensors are made with annular elements that provide a tight fit against the inner
wall of a thermo-well. Combination RTD-thermocouple designs are available with both an RTD and a
thermocouple enclosed in the same sheath.
Advantages
i. No drift over a long period
ii. Fast response
iii. High accuracy and good reproducibility
iv. Does not require any ambient-temperature compensation

Limitations
i. High cost
ii. Requires external electrical supply
iii. Bulb size is larger than that of a thermocouple and filled thermometer

20.6 THERMISTOR

Thermistors are made of solid semiconductor materials having a high temperature coefficient of
resistivity. The relationship between resistance and temperature and the current–voltage characteristics
are of primary importance. Typical thermistors are suitable for temperature measurements in the range
of −100°C to 300°C. However, some thermistors measure as high as 600°C. Thermistors are semiconductors
formed from complex metal oxides, such as oxides of cobalt, magnesium, manganese, or
nickel. They are available with positive temperature coefficients of resistance (PTC thermistors) and
with negative temperature coefficients of resistance (NTC thermistors). NTC thermistors are used
almost exclusively for temperature measurement. Thus, any change in temperature around the therm-
istor can be measured in terms of change in its electrical resistance. Despite the non-linear nature of
thermistors, readout instrument circuits have also been developed to provide a nearly linear output
voltage versus temperature or resistance versus temperature. Their resistance temperature relation is
generally given by

R = R0 ⋅ e^(β[1/T − 1/T0])

where,
R is the resistance at the measured temperature, T
R0 is the resistance at the reference temperature, T0
β is the experimentally determined constant for a given thermistor material, generally of the order
of 4000 K.
T0 is the reference temperature generally taken as 298 K (25°C).
Thermistors can convert changes in ambient or contact temperatures into the corresponding change
in voltage or current. The standard Wheatstone bridge circuit is used for the measurement of change
in resistance with change in temperature.
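Rearranging the relation above gives temperature directly from a measured resistance. The R0 and β figures in the sketch below are typical assumed values, not taken from any particular device datasheet.

```python
import math

# The beta relation quoted above, and its inversion to recover
# temperature from a measured resistance. R0 and beta are typical
# assumed values, not from any specific thermistor datasheet.

R0 = 10_000.0   # ohms at the reference temperature (assumed)
T0 = 298.15     # K, reference temperature (25 degC)
BETA = 4000.0   # K, experimentally determined material constant

def thermistor_resistance(t_k):
    """R = R0 * exp(beta * (1/T - 1/T0))"""
    return R0 * math.exp(BETA * (1.0/t_k - 1.0/T0))

def thermistor_temperature(r_ohm):
    """Invert the relation: 1/T = 1/T0 + ln(R/R0) / beta."""
    return 1.0 / (1.0/T0 + math.log(r_ohm/R0) / BETA)

# NTC behaviour: resistance falls as temperature rises. Halving the
# resistance from its 25 degC value corresponds to roughly 41 degC:
print(thermistor_temperature(5_000.0) - 273.15)  # ~41 degC
```

That a 2:1 resistance change occurs over only ~16°C illustrates the high sensitivity (and the pronounced non-linearity) noted in the advantages and limitations below.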
20.6.1 Construction
The bead thermistor is made of a small bead of thermistor material to which a pair of leads is attached.
The bead is usually enclosed in glass. A disc thermistor consists of a disc of thermistor material and
a pair of leads. The leads may be attached radially or axially to the top and or bottom of the disc.
Some disc thermistors have no leads, and are fabricated with metal-plated faces that can be clipped
or soldered in the circuit. A washer thermistor resembles a disc thermistor but has a centre hole and
metal-plated faces for contact. The centre hole enables the thermistor to be held by a mounting bolt or
stacked with other washer thermistors and electrical components. A rod thermistor is basically a stick
of thermistor material to which a pair of leads is attached. The leads may be attached axially or radially
to each end of the rod. The most common problem related to thermistor accuracy is interchangeabil-
ity. Thermistor accuracy can also be affected by several mechanical or chemical actions that change its
electrical resistance.

(Figure: bead-type thermistor with leads, washer type, rod type, and disc type; the washer assembly is stacked with spring washers, a fiber bushing, fiber washers and terminals)

Fig. 20.12 Thermistor types

20.6.2 Applications
Thermistors are used for dynamic temperature measurements, and their operating range is −100°C
to 300°C with an accuracy of ±0.01°C. Thermistors are also used for protecting equipment; for
example, PTC (positive temperature coefficient) thermistors protect transformers from heavy currents.
When the current exceeds the safe limit, the heat generated raises the temperature of the thermistor,
which increases its resistance. This acts as a feedback signal to reduce the current through the circuit
to a safe value. Moreover, thermistors can also be used for temperature compensation in complex
electronic equipment, magnetic amplifiers, warning devices, etc.
Advantages
i. Low thermal capacity and high resistance value, also has ability to withstand electrical and
mechanical stress
ii. Available in small size, low cost and increased stability with age
iii. Narrow span can be obtained
iv. High sensitivity and fast response

Limitations
i. Non-linear response
ii. Unstable at high temperatures
iii. A wide temperature span cannot be obtained
iv. Interchangeability of individual elements often creates a problem

20.7 PYROMETERS

High-temperature, non-contact measurement by means of the radiation coming out of a hot body is
called pyrometry, and the instruments used for this are called pyrometers. According to Kirchhoff’s
law, any body in thermal equilibrium with its surroundings emits as much heat radiation as it receives
at any given wavelength and temperature. This heat energy, which is also called radiant energy, is
radiated by a hot body in the form of electromagnetic waves in the range of infrared, visible light,
ultraviolet, X-rays and gamma rays. Every body radiates predominantly at a particular wavelength; all
of us, at body temperature, radiate in the infrared. As the temperature of a material increases, the
wavelength of radiation reduces, and when a material becomes very hot, i.e., of the order of 3000 K
or more, it starts glowing and radiates heat waves of shorter wavelengths. Pyrometers make use of this
for expressing temperature.
All the temperature-measuring devices discussed earlier need to be brought into physical contact with
the body whose temperature is to be measured. It means that the device not only should be able to
measure the body temperature but also must be capable of withstanding this temperature when in
contact with the hot body, which in case of very hot bodies having corrosive vapours or liquids, creates
real problems. The solution for this is the use of pyrometers which are used primarily for measuring
high and very high temperature (ranging from 600°C to 2000°C) above the ranges normally covered by
thermocouples. Moreover, a pyrometer can also measure the temperature of a moving object such as
the temperature of molten metal, moving ingot of hot metals, etc.

20.7.1 Types of Pyrometers


Instruments employing radiation principles fall into three general categories, viz., selective radiation
(optical) pyrometry, total radiation pyrometry and infrared pyrometry. A radiation pyrometer consists
of optical components that collect the radiant energy emitted by the target object, a radiation detector
that converts the radiant energy into an electrical signal, and an indicator that provides readout of the
measurement. The optical pyrometer, also known as the brightness pyrometer, requires manual adjust-
ment based on what is viewed through a sighting window. Because it relies on what can be seen by the
human eye, an optical pyrometer is designed to respond to very narrow bands of wavelengths that fall
within the visible light portion of the electromagnetic spectrum.
Another type of pyrometer that is commonly used for industrial temperature measurement is the
total radiation pyrometer. A total radiation pyrometer responds to wavelengths in both the visible and
infrared portions of the spectrum. Ideally, it would measure all wavelengths within this range. How-
ever, the glass window filters out some wavelengths. Any gases or vapours between the target and the
pyrometer will also attenuate certain wavelengths. Total radiation pyrometers are based on the Stefan–
Boltzmann law, which states that total radiation is proportional to the fourth power of temperature.
These pyrometers are calibrated using a black body and, therefore, measure the temperature based on
the total radiation a black body would emit.

a. Selective Radiation Pyrometer/Optical Pyrometer The optical pyrometer is a
highly developed and well-accepted non-contact temperature measurement device with a long and
varied past from its origins more than 100 years ago. In spite of the fact that more modern automatic
devices have nearly displaced it, several makers still produce and sell profitable quantities each year.
This type of pyrometer is sensitive only to radiations of particular wavelengths.

Working of Selective Radiation/Optical Pyrometers Optical pyrometers work on the basic
principle of using the human eye to match the brightness of the hot object to the brightness of a calibrated
lamp filament inside the instrument. Hence, the temperature of the hot body can be measured in terms of
spectral radiant intensity at a certain wavelength. The optical system (as shown in Fig. 20.13) contains filters
that restrict the wavelength-sensitivity of the devices to a narrow wavelength band around 0.65 to 0.66
microns (the red region of the visible spectrum).

(Optical layout, from the eye outward: eye, red filter, field lens and field stop, lamp, exit stop, range filter, erecting lens, objective lens and entrance stop)

Fig. 20.13 Disappearing filament principle

Figure 20.13 shows a schematic diagram of an optical pyrometer, which is similar to the telescope
having the objective at one end and the eyepiece at the other end. A red filter is placed in between the
eyepiece and the source of energy, which cuts out the shorter wavelengths and passes radiations. The
filament lamp acts as the standard source, which is placed exactly at the focus of the objective; so that
the image of the hot target is on the plane of the filament. Due to this, the target image and filament
lamp appear superimposed on one another when viewed through the eyepiece. A two-volt battery along
with a milliammeter and rheostat is connected in series with the lamp. The intensity of the filament lamp can
(View through the eyepiece: the image of the cooler filament appears as a dark line across the image of the hot target; a pointer indicates the centre of the filament)

Fig. 20.14 Target image when the filament is cool

(View through the eyepiece: the image of the filament appears brighter than the image of the hot target)

Fig. 20.15 Target image when the filament is hot

be varied by varying the current with the help of the rheostat. The procedure to match the intensity of
the filament lamp is discussed as follows:

i. An operator sights a hot target, and adjusts the range until its image is seen in red. The lamp
filament is initially cooler than the target and its image appears as a darker red or black spot
superimposed on the target’s image (see Fig. 20.14). What the operator sees when looking into
the eyepiece is the target in red, its surroundings in black (cooler) or red (hot) and superimposed
on the target, the filament. The view is circular because the optical system is made up of circular
lenses, apertures, etc.
ii. The lamp current is raised until the image of the filament becomes hotter than the target and it
appears brighter red than the target (refer Fig. 20.15).
iii. The lamp current is adjusted until the lamp filament’s brightness temperature equals that of
the target. The filament’s image blends into the image of the target. The filament then ‘disappears’
as shown in Fig. 20.16. Brightness or radiance temperature is the temperature that a black body
would have when it looks as bright as the target. It is almost always a lower temperature than the
true temperature because of the effect of the target’s emissivity. However, if the target is an object
in a furnace or oven of about the same temperature, the true and brightness temperatures are very
close to the same value. Also, if the target is in cooler surroundings and has a relatively high
emissivity, the difference between the true and brightness temperatures may be small. The difference
for a wide range of conditions can be estimated from a table in ASTM Standard E1256.

Fig. 20.16 Target image when the filament’s brightness temperature equals that of the target
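The emissivity effect described in step (iii) can be quantified with the Wien-approximation relation 1/T = 1/Ts + (λ/c2)·ln ε, a standard radiation-thermometry result relating brightness temperature Ts to true temperature T at the working wavelength λ. The target values in the sketch below are assumed for illustration.

```python
import math

# Wien-approximation correction from brightness temperature to true
# temperature at the red working wavelength of an optical pyrometer:
#   1/T_true = 1/T_s + (lambda / c2) * ln(emissivity)
# The brightness temperature and emissivity below are illustrative.

C2 = 1.4388e-2        # second radiation constant, m*K
WAVELENGTH = 0.65e-6  # m, the red filter band quoted in the text

def true_temperature(t_brightness_k, emissivity):
    """True temperature (K) from brightness temperature and emissivity."""
    inv_t = 1.0/t_brightness_k + (WAVELENGTH / C2) * math.log(emissivity)
    return 1.0 / inv_t

# A grey target (emissivity < 1) is hotter than it looks:
print(true_temperature(1800.0, 0.8))  # ~1833 K, above the 1800 K reading
print(true_temperature(1800.0, 1.0))  # a black body reads true: 1800 K
```

Since ln ε is negative for ε < 1, the computed true temperature always exceeds the brightness reading, consistent with the statement above that the brightness temperature is almost always lower than the true temperature.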

Other filters reduce the intensity so that one instrument can have a relatively wide temperature range
capability. Needless to say, by restricting the wavelength response of the device to the red region of
the visible, it can only be used to measure objects that are hot enough to be incandescent, or glowing.
This limits the lower end of the temperature measurement range of these devices to about 700°C. Some
experimental devices have been built using light amplifiers to extend the range downwards, but the
devices become quite cumbersome, fragile and expensive.
Modern radiation thermometers provide the capability to measure within and below the range of the
optical pyrometer with equal or better measurement precision plus faster time response, precise emissiv-
ity correction capability, better calibration stability, enhanced ruggedness and relatively modest cost.

b. Total Radiation Pyrometer Temperature measurement with radiation pyrometers is based
on the fact that all objects emit radiant energy. Radiant energy is emitted in the form of electromagnetic
waves, considered to be a stream of photons traveling at the speed of light. The wavelengths of radiant
energy emitted by a hot object range from the visible light portion (0.35 to 0.75 microns) to the infrared
portion (0.75 to 20 microns) of the electromagnetic spectrum.
In the visible light portion of the spectrum, radiant energy appears as colours. The expression ‘red
hot’ is derived from the fact that a sufficiently hot object will emit visible radiation. Common examples
include a piece of red-hot steel and a tungsten filament lamp. Radiation pyrometers measure the tem-
perature of an object by measuring the intensity of the radiation it emits. The intensity and wavelength
of the radiation emitted by an object depends on the emittance and the temperature of the object.
Emittance is a measure of an object’s ability to send out radiant energy. It is inversely related to reflec-
tion of the object’s surface. Since emittance will differ from one object to another, a standard, called a
black body, is used as a reference for calibrating radiation pyrometers and serves as the basis for the laws
that define the relationship of the intensity of radiation and wavelength with temperature. A black body
is an object having a surface that does not reflect or pass radiation. It is considered a perfect emitter
because it absorbs all heat to which it is exposed and emits that heat as radiant energy.
i. The intensity of radiant energy increases as temperature increases.


ii. The peak of radiation moves to lower wavelengths as temperature increases. In the visible light
portion of the spectrum, this effect can be seen by the change in colour of heated metals. They
change from red to yellow to white to blue-white as temperature increases.
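These two effects correspond, respectively, to the Stefan–Boltzmann law (total radiated power proportional to T⁴) and Wien's displacement law (peak wavelength inversely proportional to T). A quick sketch, using the standard physical constants (the constant values are the usual ones, not taken from the text):

```python
# Black-body radiation laws behind pyrometry (standard constants).
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
WIEN_B = 2.898e-3    # Wien displacement constant, m*K

def radiant_exitance(t_kelvin):
    """Total power radiated per unit area of a black body, W/m^2."""
    return SIGMA * t_kelvin ** 4

def peak_wavelength_um(t_kelvin):
    """Wavelength of peak emission, in microns (Wien's law)."""
    return WIEN_B / t_kelvin * 1e6

# 'Red hot' steel vs a tungsten filament: hotter means more power
# radiated and a shorter peak wavelength.
for t in (800, 1500, 3000):
    print(t, round(radiant_exitance(t)), round(peak_wavelength_um(t), 2))
```

Note that at 800 K the peak (about 3.6 microns) already lies in the infrared, which is why infrared detectors are used for all but the hottest targets.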

c. Infrared Pyrometer Infrared pyrometers offer a great method for accurately and quickly
measuring temperature of objects at a distance and/or in motion. They offer the ability to measure
temperature of objects precisely without needing to touch the item being measured, and without needing
to be placed within what can be an extremely hot and dangerous environment (where most traditional
close-proximity thermometers will be destroyed). Figure 20.17 shows a specialized industrial infrared
thermometer being used to monitor the temperature of molten material (such as metal or glass) at a distance,
for quality-control purposes within a manufacturing process. Portable, battery-operated devices using
similar technology are also available in the market.

Fig. 20.17 Infrared pyrometer

Infrared pyrometers measure temperature using electromagnetic radiation (i.e., infrared) emitted
from an object. They are sometimes called laser thermometers if a laser is used to help aim the
thermometer, or non-contact thermometers to describe the device’s ability to measure temperature
from a distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the
object’s temperature can be determined.
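The last sentence can be illustrated with a minimal sketch. Real instruments operate over a limited wavelength band with detector-specific calibration; the total-radiation (Stefan–Boltzmann) model below, and the emissivity and temperature values, are simplifying assumptions for illustration only:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def temperature_from_radiation(measured_w_m2, emissivity, t_ambient=293.0):
    """Invert M = e*sigma*T^4 + (1 - e)*sigma*Ta^4 for T.

    The (1 - e) term models radiation reflected from the surroundings,
    which the detector cannot distinguish from emitted radiation.
    """
    emitted = measured_w_m2 - (1.0 - emissivity) * SIGMA * t_ambient ** 4
    return (emitted / (emissivity * SIGMA)) ** 0.25

# Round trip: a surface at 900 K with emissivity 0.6
e, t_true, t_amb = 0.6, 900.0, 293.0
m = e * SIGMA * t_true ** 4 + (1.0 - e) * SIGMA * t_amb ** 4
print(round(temperature_from_radiation(m, e, t_amb), 1))  # -> 900.0
```

The round trip recovers the true temperature; with a wrong emissivity setting the same reading would be converted to a wrong temperature, which is why emissivity must be set per target material.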
The most basic design consists of a lens to focus the infrared energy on to a detector, which converts
the energy to an electrical signal that can be displayed in units of temperature after being compensated
for ambient temperature variation. This configuration facilitates temperature measurement from a dis-
tance without contact with the object to be measured. As such, the infrared thermometer is useful for
measuring temperature under circumstances where thermocouples or other probe-type sensors cannot
be used or do not produce accurate data for a variety of reasons.
Some typical circumstances are where the object to be measured is moving; where the object is
surrounded by an electromagnetic field, as in induction heating; where the object is contained in a
vacuum or other controlled atmosphere; or in applications where a fast response is required. Infrared

pyrometers can be used to serve a wide variety of temperature-monitoring functions. A few examples
include the following:

i. Detecting clouds for remote telescope operation


ii. Checking mechanical equipment or electrical circuit breaker boxes or outlets for hot spots
iii. Checking heater or oven temperature for calibration and control purposes
iv. Detecting hot spots/performing diagnostics in electrical circuit board manufacturing
v. Checking for hot spots in fire-fighting situations
vi. Monitoring materials in process of heating and cooling, for research and development or manu-
facturing quality control situations

There are many varieties of infrared temperature-sensing devices available today, including configura-
tions designed for flexible and portable handheld use, as well as many designed for mounting in a fixed
position to serve a dedicated purpose for long periods. Typical sensor varieties include the following:

a. Spot Infrared Thermometers Also known as infrared pyrometers, designed for monitor-
ing a finite area or “spot” of space.

b. Infrared Line Scanning Systems Typically incorporating what is essentially a spot ther-
mometer pointed at a rotating mirror, for continuously scanning a wide area of space. These devices
are widely used in manufacturing involving conveyors or ‘web’ processes, such as large sheets of glass
or metal exiting an oven, fabric and paper, or continuous piles of material along a conveyor belt.

c. Infrared Cameras These are essentially infrared thermometers designed as a camera, moni-
toring a thousand points at once, output as a two-dimensional image, and with each pixel representing
a temperature. This technology is typically more processor and software intense than the items above,
and is used for monitoring large areas of space. Typical applications include perimeter monitoring used
by military or security personnel, inspection/process quality monitoring of manufacturing processes,
and equipment or enclosed-space hot or cold-spot monitoring for safety and efficiency maintenance
purposes.

Pyrometer Accuracy One technique for ensuring that emitted radiation rather than reflected
radiation is being observed is to drill a hole in the target object and aim the pyrometer into the hole.
It is recommended that the depth of the hole is about five times its diameter. Measurement accuracy
can also be affected by the presence of gases or vapours between the target and pyrometer. Gases and
vapours can filter out some radiation wavelengths. One technique for resolving this problem is to use
fans to disperse any gases or fumes. A film of dirt on the viewing window or lens will also affect mea-
surement accuracy. In some applications, it may be necessary to use a purge to prevent soot or other
particles from being deposited on the viewing window or lens.

Review Questions

1. Explain the factors that can cause steady-state temperature measurement errors.
2. Explain construction and working of bimetallic strip thermometer.
3. State and explain the Seebeck effect, Peltier effect and Thomson effect for thermocouples.
4. Explain principle, construction and working of thermocouple temperature measurement.
5. Explain cold-junction compensation and compensating cables.
6. Discuss common thermocouple specifications.
7. Explain principle, construction and working of Resistance Temperature Detectors (RTD).
8. Discuss resistance temperature thermometer wiring configurations.
9. What is the function of lead wires?
10. Explain the principle and working of a thermistor.
11. Explain what you mean by the term ‘pyrometry’.
12. Discuss common types of pyrometers and explain one in detail.
13. Explain selective radiation pyrometer/optical pyrometer.
14. Explain what you mean by a total radiation pyrometer.
15. Write short notes on
a. Temperature scales
b. International Practical Temperature Scale
c. Mercury-in-glass thermometer
d. Thermocouple junctions
e. Platinum-sensing resistors
f. Applications, advantages and limitations of thermistor
g. Infrared pyrometer
h. Pyrometer accuracy
21 Strain Measurement

‘Strain has formed a basic component of our life applications, which has to be sensed,
measured and analysed….’
INTRODUCTION TO STRAIN GAUGE

The strain gauge has been in use for many years and is the fundamental sensing element for many
types of sensors, including pressure sensors, load cells, torque sensors, position sensors, etc. The
accurate measurement of strain can be made using strain gauges. Given a measurement of strain,
stress and load may also be calculated via the definition of Young’s modulus as stress divided by
strain, and the definition of stress as force (or load) divided by area. The most common form of
strain gauge is the electrical resistance strain gauge––originally invented by Lord Kelvin, circa 1856.
Kelvin observed that the resistance of a conductor varies deterministically when the conductor is
stretched (or strained). Therefore, if a conductor is bonded to a structure such that the change in
length of the structure is equal to the change in length of the conductor, the change of resistance of
the conductor is directly proportional to strain.

The gauges are formed by either a length of wire arranged in an axial grid pattern, or by etching a
thin metal foil into the desired shape. The majority of strain gauges are foil types, available in a wide
choice of shapes and sizes to suit a variety of applications. In either case the conductor is bonded to
a backing sheet. In turn, the backing is securely bonded to the structure to be measured such that a
surface strain also strains the conductor. They operate on the principle that as the foil is subjected to
stress, the resistance of the foil changes in a defined way.

In general, wire gauges are used for high-temperature applications, and foil gauges are used for
routine applications. Foil gauges offer the following characteristics.

i. High stability
ii. Good proportionality
iii. Manufacturing process based on etching, which is cheap and allows complex conductor designs
to be obtained
iv. Low price [per gauge is 25 p to £10 (installation and calibration costs are significantly higher)]
v. Low output voltage––requires amplification

A strain gauge’s conductors are very thin—if made of round wire, about 1/1000 inch in diameter.
Alternatively, strain gauge conductors may be thin strips of metallic film deposited on a
non-conducting substrate material called the carrier.

21.1 BONDED GAUGE

The name ‘bonded gauge’ is given to strain gauges that are glued to a larger structure under stress
(called the test specimen). The task of bonding strain gauges to test specimens may appear to be very
simple, but it is not. ‘Gauging’ is a craft in its own right, absolutely essential for obtaining accurate,
stable strain measurements. It is also possible to use an unmounted gauge wire stretched between two
mechanical points to measure tension, but this technique has its limitations.

21.2 UNBONDED STRAIN GAUGE

Unbonded strain-gauge elements are made of one or more filaments of resistance wire stretched
between supporting insulators. The supports can be attached directly to an elastic member used as a
sensing element or can be fastened independently using a rigid insulator to couple the elastic member to
the filaments of the resistance wire. The displacement (strain) of the sensing element causes a change
in the filament length. The change in length results in changes in resistance. Because they are fragile,
transducers that use unbonded gauges are becoming less popular.
Typical strain-gauge resistances range from 30 Ω to 3 kΩ (unstressed). This resistance may change
only a fraction of a per cent for the full force range of the gauge, given the limitations imposed by the
elastic limits of the gauge material and of the test specimen. Forces great enough to induce greater
resistance changes would permanently deform the test specimen and/or the gauge conductors them-
selves, thus ruining the gauge as a measurement device. Thus, in order to use the strain gauge as a
practical instrument, we must measure extremely small changes in resistance with high accuracy. Such
demanding precision calls for a bridge measurement circuit.

21.3 RESISTANCE OF A CONDUCTOR

The resistance, R, of a conductor is defined in terms of its resistivity ρ (Ωm), length L (m), and
cross-sectional area A (m²) by

R = ρL/A

If we consider an elongation of the wire, L → L + ΔL, by Poisson’s effect there will also be a reduction
in cross-sectional area, A → A − ΔA. From the expression for the resistance, it can be seen that
both effects contribute to an increase in the resistance.
The gauge shown in Fig. 21.1 here is primarily sensitive to strain in the X direction, as the majority
of the wire length is parallel to the X axis. There will be a small amount of cross-sensitivity, i.e., the
resistance will change slightly for a strain in the Y direction. This cross sensitivity is typically < 2% of
the primary axis sensitivity.

21.3.1 Strain-Gauge Operation


Figure 21.2 shows how the strain-gauge resistance varies with strain (deformation).

Fig. 21.1 Strain-gauge schematic (insulated backing; solder tags for attachment of wires; gauge wire/foil approx. 0.025 mm thick; primary sensing axis X, transverse axis Y)

Fig. 21.2 Strain-gauge resistance varies with strain (deformation)

21.3.2 Relationship Between Resistance and Strain


We will start with the relationship

R = ρL/A

If we consider a change in the conductor length, ΔL, then

ΔR = ρΔL/A

If we divide this expression throughout by R, we get

ΔR/R = (ρΔL/A)/(ρL/A) = ΔL/L

Allowing the resistance to vary through the other dependent parameters (A and ρ), and noting that as
A is in the denominator, for a positive change in cross-sectional area A the resistance will decrease:

ΔR/R = Δρ/ρ − ΔA/A + ΔL/L     (1)
The change in area can be related to the change in length via Poisson’s effect. If the cross section is
circular, of initial diameter D, then as the area goes from A to A + ΔA, D → D + ΔD. If we define
the axial strain as εa = ΔL/L, and the transverse strain as εt = ΔD/D, then

εt = ΔD/D = −νεa = −ν(ΔL/L)     (2)

where ν is Poisson’s ratio.

We can also expand the term for the area change in Eq. 1 as

ΔA/A = [(π/4)((D + ΔD)² − D²)]/[(π/4)D²] = (D² + 2DΔD + ΔD² − D²)/D² ≈ 2ΔD/D     (3)

by neglecting terms with the square of small quantities.

Combining Eqs 2 and 3, we get

ΔA/A ≈ −2ν(ΔL/L)

This can be substituted into Eq. 1 to give

ΔR/R = Δρ/ρ + 2ν(ΔL/L) + ΔL/L

Factoring out ΔL/L,

ΔR/R = [1 + 2ν + (Δρ/ρ)/(ΔL/L)] × (ΔL/L)

The bracketed term is defined as the gauge factor GF, giving

ΔR/R = GF × (ΔL/L) = GF εa

This expression means that the change in resistance is directly proportional to the axial strain of
the sample. The gauge factor is approximately constant and for most types of strain gauges has a value
slightly more than 2.
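As a check on this derivation: if the resistivity change is neglected (Δρ = 0), the bracketed term reduces to 1 + 2ν. A small numerical sketch, treating the gauge conductor as a uniform cylinder with assumed, illustrative dimensions:

```python
import math

def resistance(rho, length, diameter):
    """R = rho*L/A for a round wire."""
    return rho * length / (math.pi * diameter ** 2 / 4)

rho = 49e-8          # resistivity, ohm*m (constantan-like, illustrative)
L, D = 0.10, 25e-6   # 100 mm of 25-micron wire (assumed)
nu = 0.3             # Poisson's ratio (assumed)
eps = 1e-3           # applied axial strain: 1000 microstrain

r0 = resistance(rho, L, D)
# Stretch the wire and apply the Poisson contraction, holding rho fixed
r1 = resistance(rho, L * (1 + eps), D * (1 - nu * eps))

exact = (r1 - r0) / r0          # exact fractional resistance change
linear = (1 + 2 * nu) * eps     # gauge-factor prediction with d(rho) = 0
print(round(exact / eps, 4), round(linear / eps, 4))  # both close to 1.6
```

Real gauge materials also show a resistivity change with strain, which is why measured gauge factors come out slightly above 2 rather than at 1 + 2ν.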

21.3.3 Gauge Factor (Gf)


Gauge factor is defined as the fractional change in resistance to the fractional change in length (strain)
along the axis of a strain gauge. It is a dimensionless quantity that applies to the changes in the strain
gauge as a whole. Typical gauge factors are close to 2.0. Gauge factors for special strain gauges can be
significantly larger or even negative. The equation for gauge factor is
Gf = (ΔR/R)/(ΔL/L) or
ΔR = R × Gf × strain
This implies that when 10 microstrain is applied to a gauge with R = 120 ohms and Gf = 2.0, the
resistance will change by 0.0024 ohms.
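The arithmetic of this example is easy to verify:

```python
def delta_r(r_nominal, gauge_factor, strain):
    """Resistance change predicted by dR = R x Gf x strain."""
    return r_nominal * gauge_factor * strain

# 10 microstrain applied to a 120-ohm gauge with Gf = 2.0
print(round(delta_r(120.0, 2.0, 10e-6), 4))  # -> 0.0024 ohms
```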
Generally provided by the strain gauge manufacturer, the gauge factor (Gf) specification is valid only
for a specific excitation voltage and ambient temperature. As a result, Gf cannot always be used directly
to calibrate a measurement.
A sensitivity factor is sometimes specified. The sensitivity factor equals the gauge factor multiplied
by a correction based on current measurement conditions. If the current conditions are the same as
those under which the gauge factor was specified, the sensitivity factor and the gauge factor are identi-
cal.
The component manufacturer will supply the gauge-factor value. The material most often used for
the conductor is constantan because this material has a nearly identical gauge factor in both the elastic
and plastic deformation regions of the stress–strain curve.

The measured values of strain vary between applications. The maximum measurable strain is typi-
cally 0.001, or 0.1%. Strain is most often expressed in microstrain, μstrain (10⁻⁶). Therefore, a strain of
0.001 is normally written as 1000 μstrain. (Note: Strain is dimensionless.)

21.3.4 Signal Conditioning


The output of a strain gauge is, therefore, a change in resistance. This is normally detected as a change
of voltage in a type of bridge circuit. It would appear that if we apply a large voltage to the bridge and
have a large gauge factor, we would increase our sensitivity to strain. However, the gauges can with-
stand only a limited power, ≤25 mW. Therefore, we typically use low voltages and have to detect a small
change in voltage, which is proportional to the change in resistance. Therefore, we often need to have
an amplifier to increase the detected signal.

21.4 WHEATSTONE’S BRIDGE CIRCUIT

Wheatstone developed a bridge circuit containing four identical resistances, one of which is the strain
gauge, as shown in Fig. 21.3:
Rgauge = R1

The excitation can be either dc or ac. Dc voltages are normally used for sensitive measurements. Ac
voltages are used in electrically noisy environments with an excitation frequency about a factor of 10
higher than the maximum strain variation frequency to be measured (typically excitations of > 8 kHz).
Nominal resistance values are between 120 Ω and 350 Ω.

Fig. 21.3 Wheatstone’s bridge circuit (arms R1, R2, R3 and R4 between nodes A, B and D; output Vo; excitation voltage V)
Fig. 21.4 Output of a Wheatstone bridge (gauge arm R + ΔR, remaining three arms R; output Vo; excitation voltage V)

21.4.1 Output of a Wheatstone Bridge


The circuit shown in Fig. 21.4 is known as a ‘quarter bridge’ circuit as the strain-sensing gauge we are
interested in appears in only one position (out of four). Typically, the rheostat arm of the bridge is set

at a value equal to the strain-gauge resistance with no force applied. The two ratio arms of the bridge
are set equal to each other. Thus, with no force applied to the strain gauge, the bridge will be symmetri-
cally balanced and the voltmeter will indicate zero volts, representing zero force on the strain gauge.
As the strain gauge is either compressed or tensed, its resistance will decrease or increase, respectively,
thus unbalancing the bridge and producing an indication at the voltmeter. This arrangement, with a
single element of the bridge changing resistance in response to the measured variable (mechanical
force), is known as a quarter-bridge circuit. As the distance between the strain gauge and the three other
resistances in the bridge circuit may be substantial, the wire resistance has a significant impact on the
operation of the circuit.
We will consider a system with a constant dc excitation voltage, V, and where the input resistance
of the voltmeter is infinite, i.e., no current flows through CD. With ΔR = 0, the bridge is perfectly
balanced and hence the output voltage, Vo = 0.
The current flowing through the upper half of the bridge is given by

I_ABC = V/(2R + ΔR)

Hence the potential difference across the strain gauge (R + ΔR) is

V_AC = V(R + ΔR)/(2R + ΔR)

The potential difference across AD is given by V_AD = V/2.

The output voltage of the system, Vo, is given by

Vo = V_AC − V_AD = V(R + ΔR)/(2R + ΔR) − V/2 = V(2R + 2ΔR − 2R − ΔR)/(4R + 2ΔR) = VΔR/(4R + 2ΔR)

Typically the change in resistance is low compared to the original resistance value, hence

Vo ≈ (V/4) × (ΔR/R)

Noting that previously we related the change in resistance to axial strain, ΔR/R = GF εa, we now obtain

εa = (4Vo/V) × (1/GF)
Unlike the Wheatstone bridge, which uses a null-balance detector and a human operator to maintain a
state of balance, a strain-gauge bridge circuit indicates measured strain by the degree of imbalance,
and uses a precision voltmeter in the centre of the bridge to provide an accurate measurement of that
imbalance.
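The derivation above can be checked numerically. A sketch with illustrative values, comparing the exact bridge output with the linearised ΔR/4R form and recovering the strain from εa = (4Vo/V)(1/GF):

```python
def quarter_bridge_vo(v_exc, r, delta_r):
    """Exact quarter-bridge output, Vo = V*dR/(4R + 2dR)."""
    return v_exc * delta_r / (4 * r + 2 * delta_r)

V, R, GF = 5.0, 120.0, 2.0     # illustrative excitation, gauge, factor
strain = 500e-6                # 500 microstrain applied
dR = R * GF * strain           # 0.12 ohm

vo_exact = quarter_bridge_vo(V, R, dR)
vo_linear = V * dR / (4 * R)   # small-dR approximation
strain_back = 4 * vo_exact / (V * GF)

print(round(vo_exact * 1e3, 4), round(vo_linear * 1e3, 4))  # output in mV
print(round(strain_back * 1e6, 2))  # recovered microstrain, close to 500
```

The two outputs differ by only about 0.05% at this strain level, which is the small nonlinearity of the quarter-bridge arrangement discussed later in the chapter.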

21.4.2 Characteristics of Quarter Bridge Strain Gauge Sensors


The strain is given by εa = (4Vo/V) × (1/GF)
In most instruments, this will be pre-calibrated to allow for the gauge factor and supply voltage. The
major disadvantage of the quarter bridge circuit is that changes in resistance of the gauge due to tem-
perature cannot be differentiated from resistance changes due to strain. Several forms of temperature
compensation can be introduced into the quarter bridge arrangement.

Fig. 21.5 Quarter-bridge strain-gauge circuit with temperature compensation (unstressed ‘dummy’ gauge and stressed active gauge in one half of the bridge; fixed resistors R1 and R3 in the other; output V0)

An unfortunate characteristic of strain gauges is that of resistance change with changes in tempera-
ture. This is a property common to all conductors, some more than others. Thus, our quarter-bridge
circuit as shown (either with two or with three wires connecting the gauge to the bridge) works as a
thermometer just as well as it does a strain indicator. If all we want to do is measure strain, this is not
good. We can transcend this problem, however, by using a ‘dummy’ strain gauge in place of R2, so that
both elements of the rheostat arm will change resistance in the same proportion when temperature
changes, thus canceling the effects of temperature change:
Resistors R1 and R3 (refer Fig. 21.5) are of equal resistance value, and the strain gauges are identical
to one another. With no applied force, the bridge should be in a perfectly balanced condition and the
voltmeter should register 0 volts. Both gauges are bonded to the same test specimen, but only one is
placed in a position and orientation so as to be exposed to physical strain (the active gauge). The other
gauge is isolated from all mechanical stress, and acts merely as a temperature compensation device (the
‘dummy’ gauge). If the temperature changes, both gauge resistances will change by the same percent-
age, and the bridge’s state of balance will remain unaffected. Only a differential resistance (difference
of resistance between the two strain gauges) produced by physical force on the test specimen can alter
the balance of the bridge.
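This cancellation can be demonstrated with a simple numerical model of the bridge (an idealised sketch with illustrative values; lead-wire resistances are ignored):

```python
def bridge_out(v, r_active, r_dummy, r1, r3):
    """Bridge output with the active and 'dummy' gauges in one voltage
    divider and fixed resistors R1, R3 in the other."""
    return v * (r_active / (r_active + r_dummy) - r3 / (r1 + r3))

V, R = 5.0, 120.0
drift = 1.02    # +2% resistance from a temperature rise (illustrative)
dR = 0.12       # resistance change from strain alone (illustrative)

# Temperature alone: both gauges drift together, output stays zero
print(bridge_out(V, R * drift, R * drift, 120.0, 120.0))  # -> 0.0

# Strain alone vs strain plus temperature: the common drift cancels
vo_strain = bridge_out(V, R + dR, R, 120.0, 120.0)
vo_both = bridge_out(V, (R + dR) * drift, R * drift, 120.0, 120.0)
print(round(vo_strain * 1e3, 4), round(vo_both * 1e3, 4))  # equal, in mV
```

The common drift factor cancels in the divider ratio, so only the differential resistance between the two gauges moves the output, exactly as described above.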

Wire resistance doesn’t impact the accuracy of the circuit as much as before, because the wires con-
necting both strain gauges to the bridge are approximately of equal length. Therefore, the upper and
lower sections of the bridge’s rheostat arm contain approximately the same amount of stray resistance,
and their effects tend to cancel.

Fig. 21.6 Quarter-bridge strain-gauge circuit with only one gauge responsive to mechanical strain (unstressed and stressed gauges connected through lead resistances Rwire1, Rwire2 and Rwire3; fixed resistors R1 and R3; output V0)

Even though there are now two strain gauges in the bridge circuit shown in Fig. 21.6, only one is respon-
sive to mechanical strain, and thus we would still refer to this arrangement as a quarter-bridge. However, if
we were to take the upper strain gauge and position it so that it is exposed to the opposite force as the lower
gauge (i.e., when the upper gauge is compressed, the lower gauge will be stretched, and vice-versa), we
will have both gauges responding to strain, and the bridge will be more responsive to applied force. This
utilization is known as a half-bridge. Since both strain gauges will either increase or decrease resistance
by the same proportion in response to changes in temperature, the effects of temperature change remain
canceled and the circuit will suffer minimal temperature-induced measurement error.
An example of how a pair of strain gauges (shown in Fig. 21.7) may be bonded to a test specimen
so as to yield this effect is illustrated here using Fig. 21.8 and Fig. 21.9.
With no force applied to the test specimen, both strain gauges have equal resistance and the bridge
circuit is balanced. However, when a downward force is applied to the free end of the specimen, it will
bend downward, stretching gauge #1 and compressing gauge #2 at the same time:
In applications where such complementary pairs of strain gauges can be bonded to the test speci-
men, it may be advantageous to make all four elements of the bridge ‘active’ for even greater sensitivity.
This is called a full-bridge circuit’ which is shown in Fig. 21.10.
Both half-bridge and full-bridge configurations grant greater sensitivity over the quarter-bridge cir-
cuit, but often it is not possible to bond complementary pairs of strain gauges to the test specimen.
Thus, the quarter-bridge circuit is frequently used in strain-measurement systems.

Fig. 21.7 Half-bridge strain-gauge circuit (unstressed and stressed gauges in one half of the bridge; fixed resistors R1 and R3 in the other)

Fig. 21.8 Pair of strain gauges bonded to a test section, bridge balanced (strain gauge #1 and strain gauge #2 on the test specimen; fixed arms R; excitation V)

When possible, the full-bridge configuration is the best to use. This is true not only because it is
more sensitive than the others, but also because it is linear while the others are not. Quarter-bridge
and half-bridge circuits provide an output (imbalance) signal that is only approximately proportional
to applied strain-gauge force. Linearity, or proportionality, of these bridge circuits is best when the
amount of resistance change due to applied force is very small compared to the nominal resistance of
the gauge(s). With a full-bridge, however, the output voltage is directly proportional to applied force,
with no approximation (provided that the change in resistance caused by the applied force is equal for
all four strain gauges!).
Unlike the Wheatstone and Kelvin bridges, which provide measurement at a condition of perfect
balance and, therefore, function irrespective of source voltage, the amount of source (or ‘excitation’)

Fig. 21.9 Pair of strain gauges bonded to a test section, with force applied (the specimen bends and the bridge is unbalanced)

Fig. 21.10 Full-bridge strain-gauge circuit (all four arms are stressed gauges)

voltage matters in an unbalanced bridge like this. Therefore, strain-gauge bridges are rated in millivolts
of imbalance produced per volt of excitation, per unit measure of force. A typical example for a strain
gauge of the type used for measuring force in industrial environments is 15 mV/V at 1000 pounds.
That is, at exactly 1000-pounds applied force (either compressive or tensile), the bridge will be

unbalanced by 15 millivolts for every volt of excitation voltage. Again, such a figure is precise if the
bridge circuit is full-active (four active strain gauges, one in each arm of the bridge), but only approxi-
mate for half-bridge and quarter-bridge arrangements.
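As a sketch of how such a rating is used in practice (the 10 V excitation and the 250 lb load below are assumed, illustrative values; a linear full-bridge response is assumed):

```python
def bridge_output_mv(rating_mv_per_v, excitation_v, force, rated_force):
    """Output of a force-measuring bridge rated in mV per volt of
    excitation at rated force, assuming a linear response."""
    return rating_mv_per_v * excitation_v * force / rated_force

# 15 mV/V at 1000 pounds, driven at an assumed 10 V excitation
print(bridge_output_mv(15.0, 10.0, 1000.0, 1000.0))  # -> 150.0 mV full load
print(bridge_output_mv(15.0, 10.0, 250.0, 1000.0))   # -> 37.5 mV at 250 lb
```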

21.5 STRAIN-GAUGE INSTALLATION

To correctly install a strain gauge, all surfaces must be clean and free from grease before assembly. Strain
gauges can be protected from the environment in a number of ways. Techniques offering increasing
protection are
• Polyurethane varnish
• Varnish + silicone rubber
• Varnish + rubber + steel cover and sealed cable conduits
In electrically noisy environments, it is important that the wires leading to a gauge are made as a
twisted pair. Hence any ‘pick-up’ (by induction) is common to both wires and the voltage difference is
unaffected. Installing strain gauges is a skilled art; it is easy to install gauges badly––in tension (by
stretching), in compression, or with poor adhesion.

21.6 AXIAL, BENDING AND TORSIONAL STRAIN MEASUREMENT

Particular combinations of gauges can be utilized in certain applications, offering both increased
sensitivity––with two sensing gauges in a half-bridge––and simultaneous temperature compensation.
Further, a full bridge can be used, offering 2.6 times the sensitivity of a quarter bridge. Strain gauges
are frequently used in mechanical engineering research and development to measure the stresses
generated by machinery. Aircraft-component testing is one area of application, using tiny strain-
gauge strips glued to structural members, linkages, and any other critical component of an airframe
to measure stress.

Fig. 21.11 Bonded strain gauge (tension causes resistance increase; compression causes resistance decrease; the gauge is insensitive to lateral forces; resistance is measured between the two connection points)

21.6.1 Measurement of Bending Strain

Consider measuring the bending strain in a cantilever. One gauge is placed on the upper surface, in
tension (T), and one on the lower surface, in compression (C), as shown in Fig. 21.12. If the two
gauges are inserted into a half-bridge circuit as shown in Fig. 21.13, and remembering that in tension
the resistance will increase by ΔR and in compression the resistance will decrease by the same
amount, we can double the sensitivity to bending strain and eliminate sensitivity to temperature.

The output is given by Vo = (V/2) × (ΔR/R)

(i.e., the output is double that from a quarter-bridge circuit). Further, you can demonstrate that if the
resistance of both gauges increases (due to temperature or axial strain) then the output voltage
remains unaffected (try it by putting the resistance of gauge C as R + ΔR).

Fig. 21.12 Measuring the bending strain in a cantilever (load applied at the free end)
Fig. 21.13 Circuit diagram (gauge arms T: R + ΔR and C: R − ΔR; fixed arms R, R; output Vo; excitation voltage V)

21.6.2 Measurement of Axial Strains

In practice, four gauges are used, two of which measure the direct strain and are placed opposite
each other in the bridge (thereby doubling sensitivity). Two more gauges are mounted at right angles
(thereby not sensitive to the axial strain required) or on an unstrained sample of the same material to
provide temperature compensation. The arrangements are shown in Fig. 21.14. Care must be taken
in the angular alignment of the gauges on the sample.
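The ‘try it’ exercise suggested in Sec. 21.6.1 above can be carried out numerically (an idealised sketch with illustrative values):

```python
def half_bridge_vo(v, r_top, r_bottom):
    """Output of the sensing divider (gauges T over C) measured against
    a fixed R:R reference divider."""
    return v * (r_top / (r_top + r_bottom) - 0.5)

V, R, dR = 5.0, 120.0, 0.12   # illustrative values

# Bending: tension gauge at R + dR, compression gauge at R - dR
vo_bend = half_bridge_vo(V, R + dR, R - dR)
print(round(vo_bend * 1e3, 3))  # equals V*dR/(2R), i.e. 2.5 mV here

# The exercise: put gauge C at R + dR as well (temperature or axial
# strain affects both gauges equally) and the output vanishes
print(half_bridge_vo(V, R + dR, R + dR))  # -> 0.0
```

The bending output is indeed twice the quarter-bridge value V·ΔR/4R, while a common resistance change in both gauges produces no output at all.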

Fig. 21.14 Measurement of axial strains (two axial gauges on opposite bridge arms and two transverse gauges for temperature compensation; arms R1–R4; output Vo; excitation voltage V)

21.6.3 Measurement of Strain in Torsion


Torsion produces a direct stress at an angle of 45° to the axis of application of the torque. Therefore,
strain gauges with the primary lengths of conductor at an angle of 45° are used. By the use of four
gauges, two on each side of the shaft, it is possible to obtain a signal where the sensitivity to torsion
is increased and the sensitivity to bending, direct strains and temperature is cancelled. The arrange-
ment is shown in Fig. 21.15. Again, care must be taken in the angular alignment of the gauges on the
sample.

Fig. 21.15 Measurement of strain in torsion (four 45° gauges R1–R4, two on each side of the shaft, connected in a full bridge; output Vo; excitation voltage V)

21.6.4 Determination of Principal Strains


If the strain (and, therefore, stress) system is unknown, a strain gauge rosette is often utilized with
gauges at 0°, 45° and 90°. From these data and utilizing Mohr’s circle, it is possible to determine the
principal strains and therefore stresses.
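For a rectangular (0°/45°/90°) rosette, the standard Mohr’s-circle reduction can be sketched as follows (textbook formulas; the gauge readings used are illustrative):

```python
import math

def principal_strains(e0, e45, e90):
    """Principal strains and the angle of the major principal axis
    (radians from the 0-degree gauge) for a 0/45/90-degree rosette."""
    centre = (e0 + e90) / 2                   # centre of Mohr's circle
    gamma = 2 * e45 - e0 - e90                # shear strain
    radius = 0.5 * math.hypot(e0 - e90, gamma)
    theta = 0.5 * math.atan2(gamma, e0 - e90)
    return centre + radius, centre - radius, theta

# Illustrative gauge readings, in strain (600/450/100 microstrain)
e_max, e_min, theta = principal_strains(600e-6, 450e-6, 100e-6)
print(round(e_max * 1e6, 1), round(e_min * 1e6, 1),
      round(math.degrees(theta), 1))
```

Principal stresses then follow from the plane-stress Hooke’s law, σ1 = E(ε1 + νε2)/(1 − ν²) and σ2 = E(ε2 + νε1)/(1 − ν²).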

21.6.5 Use of Particular Type of Strain Gauges for Specific Strain-


Measurement Applications
a. Strain Gauges for Steel Weldable strain gauges measure strain in steel. Typical applications
include the following:
• Monitoring stresses in structural members of buildings, bridges, tunnel linings and supports dur-
ing and after construction
• Monitoring the performance of wall anchors and other post-tensioned support systems
• Monitoring loads in strutting systems for deep excavations
• Measuring strain in tunnel linings and supports
• Monitoring strain level in the areas of concentrated stress in pipelines

b. Strain Gauges for Concrete Embedment strain gauges measure strain in concrete.
Typical applications include the following:
• Measuring strains in reinforced concrete and mass concrete
• Measuring curing strains
• Monitoring for changes in load
• Measuring strain in tunnel linings and supports

c. Spot-Weldable Strain Gauge It is designed to measure strain in steel; this vibrating wire
strain gauge is spot-welded to the surface of the steel. A sensor is then fixed atop the gauge. Readings
are obtained with a data logger.

d. Embedment Strain Gauge It is designed to measure strain in reinforced concrete or mass


concrete. This vibrating wire strain gauge is typically tied to a reinforcing cage. In mass concrete, gauges
are sometimes configured in a rosette. Readings are obtained with a VW readout or data logger.

21.7 GAUGE-SELECTION CRITERIA

The examples discussed above demonstrate that many different styles of gauges are needed. Manufac-
turer’s data sheets provide selection criteria; a few points are listed below.
a. Physical size and form––the strain gauge may be small (∼6 mm active gauge length) but this size
sets the spatial resolution limit of the measurement

b. Gauge resistance
c. Sensitivity––or the gauge factor
d. Component environment, especially temperature
e. Strain limits to be measured
f. Flexibility of gauge backing––affects whether a gauge can be bonded, e.g., to a circular shaft
g. Requirements for protection
h. Cost

Review Questions

1. Explain the working principle of electrical strain gauges.


2. Explain the characteristics that foil gauges offer.
3. Describe bonded and unbonded gauges.
4. Write short notes on
a. Strain gauge operation
b. Gauge factor
c. Signal conditioning in strain measurement
d. Wheatstone’s bridge circuit
e. Output of a Wheatstone bridge
f. Axial, bending and torsional strain measurement
5. Explain the relationship between resistance and strain w.r.t. strain measurement.
6. Discuss the construction and working of Wheatstone’s bridge circuit.
7. Discuss the characteristics of quarter-bridge strain-gauge sensors.
8. Discuss the procedure and care to be taken during strain-gauge installation.
9. Discuss the use of particular types of strain gauges for specific strain measurement w.r.t. their
practical applications.
10. Explain the gauge selection criteria.
22 Flow Measurement

‘Oil, power, chemical, food, water, and waste-treatment industries require the determination of
the flow quantity of a fluid, whether gas, liquid, or steam, for their processing and control….’
NEED OF FLOW MEASUREMENT
Flow measurement is essential in many industries such as the oil, power, chemical,
food, water, and waste-treatment industries. These industries require the determination
of the quantity of a fluid, either gas, liquid, or steam, that passes through a check point,
either a closed conduit or an open channel, in their daily processing or operating. The
quantity to be determined may be volume-flow rate, mass-flow rate, flow velocity, or
other quantities related to the previous three.

22.1 TYPES OF FLOWMETERS

i. Coriolis
ii. Differential pressure––elbow
iii. Flow nozzle
iv. Orifice
v. Pitot tube
vi. Pitot tube (averaging)
vii. Venturi
viii. Wedge
ix. Magnetic
x. Positive displacement nutating disc
xi. Oscillating piston
xii. Oval gear
xiii. Roots
xiv. Target
xv. Thermal
xvi. Turbine
xvii. Ultrasonic doppler
xviii. Transit time
xix. Variable area––movable vane
xx. Rotameter
xxi. Weir, flume
xxii. Vortex

The instrument to conduct flow measurement is called a flowmeter. The development of a flow-
meter involves a wide variety of disciplines including the flow sensors, the sensor and fluid interactions
through the use of computation techniques, the transducers and their associated signal-processing
units, and the assessment of the overall system under ideal, disturbed, harsh, or potentially explosive
conditions in both the laboratory and the field.

22.2 SELECTION OF A FLOWMETER

To select a flowmeter that suits one's application, many factors need to be considered. The most important
ones are the fluid phase (gas, liquid, steam, etc.) and the flow condition (clean, dirty, viscous, abrasive, open
channel, etc.). The matching of fluid phase and flowmeter technology can be found in flowmeter
selection charts.
The second-most important factors are line size and flow rate (they are closely related). This infor-
mation will further eliminate most submodels in each flowmeter technology.
Other fluid properties that may affect the selection of flowmeters include density (specific gravity),
pressure, temperature, viscosity, and electrical conductivity. On the flow side, one needs to pay attention
to the state of the fluid (pure or mixed) and the status of the flow (constant, pulsating, or variable).
Moreover, the ambient temperature, the surroundings (e.g., corrosive, explosive, indoor, outdoor),
the installation method (insertion, clamp-on, or inline), and the location of the flowmeter also
need to be considered, along with other factors such as the maximum allowable pressure drop,
the required accuracy, repeatability, and cost (initial set-up, maintenance, and training).
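The elimination process described above can be sketched as a simple rule-based filter. The catalogue entries below are illustrative assumptions for the sake of the example, not manufacturer data:

```python
# Hypothetical sketch of the selection logic described above: filter a small
# catalogue of flowmeter technologies by fluid phase and flow condition.
CATALOGUE = [
    {"type": "orifice plate", "phases": {"gas", "liquid", "steam"}, "conditions": {"clean"}},
    {"type": "magnetic", "phases": {"liquid"}, "conditions": {"clean", "dirty", "corrosive"}},
    {"type": "coriolis", "phases": {"gas", "liquid"}, "conditions": {"clean", "viscous"}},
]

def select_flowmeters(phase, condition, catalogue=CATALOGUE):
    """Return the technologies compatible with the given phase and condition."""
    return [m["type"] for m in catalogue
            if phase in m["phases"] and condition in m["conditions"]]

print(select_flowmeters("liquid", "corrosive"))  # ['magnetic']
```

A real selection would then apply the secondary filters (line size, flow rate, pressure drop, cost) to the surviving candidates in the same way.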

22.3 INSTALLATION OF A FLOWMETER

Flowmeters need to be integrated into the existing or planned piping system to be useful. There are
two types of flowmeter installation methods (as shown in Fig. 22.1)––inline and insertion. Inline
models include connectors to the upstream and downstream pipes, while insertion models insert the
sensor probe into the pipe.
Most flowmeters need to be installed at a point where the pipes on both sides remain straight for a
certain distance. For inline models, the inner diameters of the pipes have to be the same as the flowmeter's
line size. Between the flowmeter and the pipes, there are two commonly used connecting methods:
flanged and wafer.

Fig. 22.1 Types of flowmeter installation methods: insertion flowmeter (wafer connection) and inline flowmeter (flanged connection)

Of the two installation methods, the insertion design is more flexible and more economical
in larger line sizes, while the inline design is more constrained but usually easier to calibrate. The wafer
connection is usually less expensive than the flanged connection; however, it may require extra parts to
mate with the threading of the pipes at both ends.

22.4 CLASSIFICATION OF FLOWMETERS

Since flowmeters are integrated instruments that measure different flow quantities by different technologies,
many characteristics can be used to categorize them. Some of these are listed below:

i. Technology employed
ii. Instrumentation configuration
iii. Physical quantity measured
iv. Flow quantity converted

The following are the common types of flowmeters discussed.

22.4.1 Differential Pressure Flowmeters


Differential pressure flowmeters (in most cases) employ the Bernoulli equation, which describes the
relationship between the pressure and velocity of a flow. These devices guide the flow into a section with
a different cross-sectional area (a different pipe diameter), which causes variations in flow velocity and
pressure. By measuring the changes in pressure, the flow velocity can then be calculated.
Many types of differential pressure flowmeters are used in the industry:

1. Orifice Plate A flat plate with an opening, shown in Fig. 22.2, is inserted into the pipe and
placed perpendicular to the flow stream.

Fig. 22.2 Orifice plate

As the flowing fluid passes through the orifice plate, the
restricted cross-sectional area causes an increase in velocity and a decrease in pressure. The pressure
difference before and after the orifice plate is used to calculate the flow velocity.
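The pressure-to-velocity calculation described above follows from combining the Bernoulli and continuity equations; a minimal sketch, where the pressure drop, density, and diameter ratio are illustrative values and the discharge-coefficient remark is a typical rule of thumb rather than a figure from this text:

```python
import math

def orifice_velocity(dp_pa, rho, beta):
    """Ideal velocity at the orifice (m/s) from Bernoulli + continuity:
    v = sqrt(2*dp / (rho*(1 - beta**4))), where beta = d/D is the
    orifice-to-pipe diameter ratio.  A real meter multiplies this by an
    empirical discharge coefficient (roughly 0.6 for sharp-edged plates)."""
    return math.sqrt(2 * dp_pa / (rho * (1 - beta**4)))

# Water (1000 kg/m^3), 5 kPa drop across the plate, beta = 0.5:
v = orifice_velocity(5000, 1000, 0.5)  # ~3.27 m/s
```

The same relation, with geometry-specific coefficients, underlies the Venturi tube, flow nozzle, and wedge meters discussed below.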

2. Venturi Tube A section of tube forms a relatively long passage with smooth entry and exit.
A Venturi tube shown in Fig. 22.3 is connected to the existing pipe, first narrowing down in diameter
then opening up back to the original pipe diameter. The changes in cross-sectional area cause changes
in velocity and pressure of the flow, which is used to calculate the flow velocity.

Fig. 22.3 Venturi tube

3. Nozzle A nozzle with a smooth guided entry and a sharp exit is placed in the pipe to change the
flow field and create a pressure drop that is used to calculate the flow velocity.

4. Segmental Wedge A wedge-shaped segment as shown in Fig. 22.5 is inserted perpendicularly
into one side of the pipe while the other side remains unrestricted. The change in cross-sectional
area of the flow path creates pressure drops used to calculate flow velocities.

Fig. 22.4 Nozzle (the nozzle shrinks down the cross-sectional area of the pipe and creates a pressure differential)

Fig. 22.5 Segmental wedge

5. V-Cone A cone-shaped obstructing element that serves as the cross-sectional modifier is placed
at the centre of the pipe for calculating flow velocities by measuring the pressure differential.

6. Pitot Tube A probe with an open tip (Pitot tube) is inserted into the flow field. The tip is the
stationary (zero velocity) point of the flow. Its pressure, compared to the static pressure, is used to
calculate the flow velocity. Pitot tubes can measure flow velocity at the point of measurement.
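For incompressible flow, the stagnation-to-static pressure comparison just described gives the local velocity directly; a small sketch, with illustrative air values:

```python
import math

def pitot_velocity(p_stag, p_static, rho):
    """Flow velocity from a Pitot tube (incompressible flow):
    V = sqrt(2 * (stagnation pressure - static pressure) / density)."""
    return math.sqrt(2 * (p_stag - p_static) / rho)

# Air at 1.2 kg/m^3 with a 600 Pa dynamic pressure:
v = pitot_velocity(101_925, 101_325, 1.2)  # ~31.6 m/s
```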

7. Averaging Pitot Tube Similar to Pitot tubes but with multiple openings, averaging Pitot
tubes take the flow profile into consideration to provide better overall accuracy in pipe flows.

Fig. 22.6 V-cone

Fig. 22.7 Pitot tube

8. Elbow When a liquid flows through an elbow, the centrifugal forces cause a pressure difference
between the outer and inner sides of the elbow. This difference in pressure is used to calculate the flow
velocity. The pressure difference generated by an elbow flowmeter is smaller than that generated by other
differential pressure flowmeters, but the upside is that an elbow flowmeter presents less obstruction to the flow.

9. Dall Tube A combination of a Venturi tube and an orifice plate, it features the same tapering intake
portion of a Venturi tube but has a ‘shoulder’ similar to the orifice plate's exit part to create a sharp
pressure drop. It is usually used in applications with larger flow rates.
Differential pressure flowmeters, although simple in construction and widely used in industry, have
a common drawback: They always create a certain amount of pressure drop, which may or may not
be tolerated in a particular application. Common specifications for commercially available differential
pressure flowmeters are listed below. (These specifications are for differential pressure flowmeters in
general. Individual numbers may vary from product to product.)

Fig. 22.8 Averaging Pitot tube (a Pitot tube with multiple probes on both the upstream and downstream sides)

Fig. 22.9 Elbow


Fig. 22.10 Dall tube

Types of Fluid Phases Cryogenic fluids; gas (clean, dirty); liquid (clean, dirty, viscous, corrosive);
steam (saturated, superheated); abrasive liquid slurry

Line Size 6 ∼ 300 mm (1/4 ∼ 12 inch)

Turndown Ratio 10:1

Advantages Low to medium initial set-up cost, can be used in wide ranges of fluid phases and
flow conditions, simple and sturdy structures

Limitations Medium to high pressure drop
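The turndown ratio quoted in these specifications bounds the usable measuring range; a one-line illustration (the 100 m³/h sizing is a made-up example):

```python
def min_measurable_flow(max_flow, turndown_ratio):
    """Smallest flow a meter can measure within spec, given its turndown
    ratio (max:min).  E.g., a 10:1 differential-pressure meter sized for
    100 m^3/h cannot resolve flows below 10 m^3/h."""
    return max_flow / turndown_ratio

q_min = min_measurable_flow(100.0, 10)  # 10.0 m^3/h
```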

22.4.2 Magnetic Flowmeters


They are also known as electromagnetic flowmeters or induction flowmeters, and they obtain the flow
velocity by measuring the change in induced voltage of the conductive fluid passing across a controlled
magnetic field.

Operation Principle of Inline Magnetic Flowmeter A typical magnetic flowmeter
places electric coils around (inline model) or near (insertion model) the pipe of the flow to be measured
and sets up a pair of electrodes across the pipe wall (inline model) or at the tip of the flowmeter (insertion
model). If the targeted fluid is electrically conductive, i.e., a conductor, its passing through the pipe

Fig. 22.11a Operation principle of magnetic flowmeters

Fig. 22.11b Schematic of the magnetic field through a magnetic flowmeter

is equivalent to a conductor cutting across the magnetic field. This induces changes in voltage reading
between the electrodes. The higher the flow speed, the higher the voltage.
According to Faraday’s law of electromagnetic induction, any change in the magnetic field with time
induces an electric field perpendicular to the changing magnetic field:

E = −N d(BA)/dt = −N dΦ/dt

where E is the voltage of the induced current, B is the external magnetic field, A is the cross-sectional area
of the coil, N is the number of turns of the coil, and Φ = BA is the magnetic flux. The negative
sign indicates that the induced current creates another magnetic field opposing the build-up of the
magnetic field in the coil, as per Lenz's law.
When applying the above equation to magnetic flowmeters, the number of turns N and the strength
of the magnetic field B are fixed. Faraday’s law becomes

E = −NB dA/dt = −NB D dl/dt = −NBVD

where D is the distance between the two electrodes (the length of the conductor), and V is the flow velocity.
If we combine all fixed parameters N, B, and D into a single factor K = −NBD, we have

V = E/K
It is clear that the voltage developed is proportional to the flow velocity.
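The proportionality V = E/K can be exercised numerically. In the magnitude form below, K = NBD; the field strength, number of turns, and electrode spacing are assumed illustrative values:

```python
def magmeter_velocity(e_volts, n_turns, b_tesla, d_m):
    """Flow velocity from the induced voltage of a magnetic flowmeter,
    V = E / (N * B * D) -- the magnitude form of E = -N*B*V*D.
    d_m is the electrode spacing (m), i.e., the conductor length."""
    return e_volts / (n_turns * b_tesla * d_m)

# 1 mV induced across a 0.1 m pipe with B = 0.01 T and a single-turn coil:
v = magmeter_velocity(1e-3, 1, 0.01, 0.1)  # 1.0 m/s
```

Doubling the flow speed doubles the induced voltage, which is why these meters are noted below for their high linearity.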
A prerequisite of using magnetic flowmeters is that the fluid must be conductive. The electrical con-
ductivity of the fluid must be higher than 3 μS/cm in most cases. A lining of non-conductive material
is often used to prevent the voltage from dissipating into the pipe section when it is constructed from
conductive material. Common specifications for commercially available magnetic flowmeters are listed
below:

Types of Fluid Phases Liquid (clean, corrosive, dirty, viscous ), slurry (abrasive, fibrous), liquid
(non-Newtonian, open channel)

Line Size Inline models: 10 ∼ 1200 mm (0.4 ∼ 48 inch)

Insertion Models 75 mm (3 in) and up

Turndown Ratio 100: 1

Advantages Minimum obstruction in the flow path yields minimum pressure drop, low maintenance
cost because of no moving parts, high linearity, two and multibeam models have higher accuracy than
other comparably priced flowmeters, can be used in hazardous environments or measure corrosive or
slurry fluid flow

Limitations Requires electrical conductivity of fluid higher than 3 μS/cm in most cases, zero
drifting at no/low flow (may be avoided by low flow cut-off; new designs improve on this issue)

22.4.3 Positive Displacement Flowmeters


Also known as PD meters, these measure the volume of fluid flowing through by repeatedly counting the
filling and discharging of known fixed volumes. A typical positive displacement flowmeter comprises a
chamber that obstructs the flow. Inside the chamber, a rotating/reciprocating mechanical unit is placed
to create fixed-volume discrete parcels from the passing fluid. Hence, the volume of the fluid that passes
the chamber can be obtained by counting the number of passing parcels or equivalently the number of
rounds of the rotating/reciprocating mechanical device. The volume flow rate can be calculated from
the revolution rate of the mechanical device.
Many types of positive displacement flowmeters are used in the industry. They are named after the
mechanical device inside the chamber. They all share the same principle of operation and are volumetric
flow-measuring instruments. Fig. 22.12 shows the rotating vane schematically. It consists
of an eccentric drum, which rotates inside the meter casing. A number of spring-loaded retractable
rotor blades (vanes) also rotate, maintaining contact with the meter casing and forming separate chambers
inside. While the vanes rotate because of the differential pressure across the meter, a fixed quantity
of liquid is trapped inside the chamber, which is ultimately pushed out through the outlet. Through
the shaft of the eccentric drum, a counter mechanism may be connected, which can be calibrated in
terms of the volume of liquid discharged. Similarly, in types (b) and (c), a nutating disc and an
oscillating piston are the mechanical components that measure the flow. In the
impeller-type meters shown in Figs 22.14 to 22.18, two rotors, impellers, or toothed gears are
meshed so that a fixed quantity of liquid is trapped between them at the inlet and guided to the
outlet during their rotation. With a revolution counter fitted on the rotor shaft, the flow can be ascertained.
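The counting principle above amounts to multiplying a fixed parcel volume by the parcel rate; a minimal sketch, with invented chamber dimensions:

```python
def pd_flow_rate(chamber_volume_l, chambers_per_rev, rev_per_min):
    """Volume flow rate (L/min) of a positive displacement meter:
    (fixed parcel volume) x (parcels per revolution) x (revolutions per minute)."""
    return chamber_volume_l * chambers_per_rev * rev_per_min

# 0.05 L chambers, 4 vanes, drum turning at 120 rpm:
q = pd_flow_rate(0.05, 4, 120)  # ~24 L/min
```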

a. Rotating Vane

Fig. 22.12 Rotating vane



b. Nutating Disc

Fig. 22.13 Nutating disc

c. Oscillating Piston

Fig. 22.14 Oscillating piston (the piston glides around the control roller like a hula hoop, oscillating/spinning around the hub in a circular motion)

d. Oval Gear
Fig. 22.15 Oval gear


e. Roots (Rotating Lobe)
Fig. 22.16 Roots (rotating lobe)


f. Birotor
Fig. 22.17 Birotor



g. Rotating Impeller
The accuracy of positive displacement flowmeters relies on the integrity of the capillary seal that
separates incoming fluid into discrete parcels. To achieve the designed accuracy and ensure that the positive
displacement flowmeter functions properly, a filtration system is required to remove particles larger
than 100 μm as well as gas (bubbles) from the liquid flow.
Positive displacement flowmeters, although simple in principle of operation and widely used in the
industry, all cause a considerable pressure drop which has to be considered for any potential application.

Fig. 22.18 Rotating impeller

Types of Fluid Phases The flowing liquid may be clean, viscous, corrosive, or dirty; for dirty liquids
this flowmeter has limited applicability. Its line size is 6 ∼ 300 mm
(1/4 ∼ 12 inch), and its turndown ratio is 5 ∼ 15 : 1, which may go as high as 100 : 1.

Advantages Low to medium initial set-up cost, can be used in viscous liquid flow

Limitations Higher maintenance cost than other non-obstructive flowmeters, high pressure drop
due to its total obstruction of the flow path, not suitable for low flow rates, very low tolerance to
suspensions in the flow (particles larger than 100 μm need to be filtered out before the liquid enters the
flowmeter); gas (bubbles) in the liquid can significantly decrease the accuracy.

22.4.4 Coriolis Flowmeters


These are relatively new compared to other flowmeters; they were not seen in industrial applications
until the 1980s. Coriolis meters are available in a number of different designs. A popular configuration

Fig. 22.19 Coriolis flowmeters

consists of one or two U-shaped, horseshoe-shaped, or tennis-racket-shaped (generalized U-shaped)
flow tubes with an inlet on one side and an outlet on the other, enclosed in a sensor housing connected
to an electronics unit.
The flow is guided into the U-shaped tube. When an oscillating excitation force is applied to the
tube causing it to vibrate, the fluid flowing through the tube will induce a rotation or twist to the tube
because of the Coriolis acceleration acting in opposite directions on either side of the applied force.
For example, when the tube is moving upward during the first half of a cycle, the fluid flowing into the
meter resists being forced up by pushing down on the tube. On the opposite side, the liquid flowing out
of the meter resists having its vertical motion decreased by pushing up on the tube. This action causes
the tube to twist. When the tube is moving downward during the second half of the vibration cycle, it
twists in the opposite direction. This twist results in a phase difference (time lag) between the inlet side
and the outlet side and this phase difference is directly affected by the mass passing through the tube. A
more recent single straight-tube design is available to measure some dirty and/or abrasive liquids that
may clog the older U-shaped design.
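The twist described above produces a time lag that is, to first order, directly proportional to the mass flow rate; a minimal sketch, where the calibration constant is an invented placeholder (real meters obtain it from factory calibration of the tube geometry and stiffness):

```python
def coriolis_mass_flow(delta_t_s, k_cal):
    """Mass flow rate (kg/s) from the inlet/outlet phase difference (time
    lag) of a vibrating Coriolis tube.  k_cal is a tube-geometry/stiffness
    calibration constant in kg/s per second of lag -- an assumed value here,
    normally supplied by the manufacturer."""
    return k_cal * delta_t_s

# A lag of 4 microseconds with an assumed k_cal of 2.5e5 kg/s^2:
m_dot = coriolis_mass_flow(4e-6, 2.5e5)  # 1.0 kg/s
```

Because the output is mass flow directly, no density or temperature compensation is needed, which is the advantage noted below.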
An advantage of Coriolis flowmeters is that they measure the mass flow rate directly, which eliminates
the need to compensate for changing temperature, viscosity, and pressure conditions. Note also
that the vibration of a Coriolis flowmeter has a very small amplitude, usually less than 2.5 mm (0.1 in), and
its frequency is near the natural frequency of the device, usually around 80 Hz. Finally, the vibration is
commonly introduced by electric coils and measured by magnetic sensors.

Advantages Higher accuracy than most flowmeters, can be used in a wide range of liquid flow
conditions, capable of measuring hot (e.g., molten sulphur, liquid toffee) and cold (e.g., cryogenic
helium, liquid nitrogen) fluid flow, low pressure drop, suitable for bi-directional flow

Limitations High initial set-up cost, clogging may occur and difficult to clean, larger in overall size
compared to other flowmeters, limited line-size availability

22.4.5 Ultrasonic Flowmeters


There are two types of ultrasonic flowmeters, viz., Doppler ultrasonic flowmeters and transit-time
ultrasonic flowmeters.

1. Doppler Ultrasonic Flowmeters These rely on the Doppler effect to relate the frequency
shifts of acoustic waves to the flow velocity. They usually require some particles in the flow to reflect the
signals. The rule of thumb is 25 PPM of suspended solids or bubbles with diameters of 30 microns or larger
for 1-MHz or higher transducers. Lower-frequency transducers may require ‘dirtier’ fluid conditions.

Fig. 22.20 Doppler ultrasonic flowmeters

The Doppler formula for a sound or light source moving toward the observer at a velocity V is

f_ref = f / (1 − V/c)
Since the input signal from the transducer forms an angle θ with the flow direction, the velocity V
should be replaced by the projected velocity V cos θ. The acoustic waves traveling upstream and
downstream will have the observed frequencies

f_u = f / (1 − (V cos θ)/c)

f_d = f / (1 + (V cos θ)/c)

The difference in frequency is

Δf = f_u − f_d = 2f (V cos θ / c) / [1 − (V cos θ / c)²]
   ≈ 2f (V cos θ) / c

since the flow velocity V is much smaller than the speed of sound c in the fluid. By re-arranging the
above equation, the flow velocity can be written as
V = c Δf / (2 f cos θ)
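The final formula can be applied directly; the transducer frequency, beam angle, and speed of sound below are typical illustrative values for a 1-MHz transducer in water:

```python
import math

def doppler_velocity(delta_f_hz, f_hz, c_ms, theta_deg):
    """Flow velocity from the Doppler frequency shift:
    V = c * delta_f / (2 * f * cos(theta))."""
    return c_ms * delta_f_hz / (2 * f_hz * math.cos(math.radians(theta_deg)))

# 1 MHz transducer at 60 degrees in water (c ~ 1480 m/s), 1 kHz shift:
v = doppler_velocity(1000, 1e6, 1480, 60)  # ~1.48 m/s
```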

Types of Fluid Phases Gas (dirty); liquid (clean, corrosive, dirty, viscous); recommended for
limited applicability in some of these conditions
Line Size: Inline: 10 ∼ 1200 mm (0.4 ∼ 48 inch)
Clamped-on model: 75 mm (3 in) and up
Turndown Ratio: 100 : 1

Advantages No obstruction in the flow path, no pressure drop, no moving parts, low mainte-
nance cost, can be used in corrosive or slurry fluid flow, portable models available for field analysis and
diagnosis

Limitations Higher initial set-up cost

2. Transit-Time Ultrasonic Flowmeter A pair (or pairs) of transducers, each having its
own transmitter and receiver, are placed on the pipe wall, one (set) on the upstream and the other
(set) on the downstream. The time for acoustic waves to travel from the upstream transducer to the

Fig. 22.21 Transit-time ultrasonic flowmeter

downstream transducer, t_d, is shorter than the time, t_u, the same waves require to travel from the
downstream to the upstream transducer. The larger the difference, the higher the flow velocity, and
the flow is measured in terms of this difference.
t_d and t_u can be expressed in the following forms:

t_d = L / (c + V cos θ)

t_u = L / (c − V cos θ)
where c is the speed of sound in the fluid, V is the flow velocity, L is the distance between the
transducers, and θ is the angle between the flow direction and the line formed by the transducers.
The difference between t_u and t_d is

Δt = t_u − t_d = L / (c − V cos θ) − L / (c + V cos θ)
   = 2VL cos θ / (c² − V² cos² θ)
   = (2VX / c²) / [1 − (V/c)² cos² θ]

where X is the projected length of the path along the pipe direction (X = L cos θ).

To simplify, we assume that the flow velocity V is much smaller than the speed of sound c, that is,

V ≪ c  ⇒  (V/c)² cos² θ ≈ 0
We then have

Δt ≈ 2VX / c²

or,

V = c² Δt / (2X)

Note that the speed of sound c in the fluid is affected by many factors such as temperature and
density. It is desirable to express c in terms of the transit times t_d and t_u to avoid frequent calibrations:

c + V cos θ = L / t_d

c − V cos θ = L / t_u

The speed of sound c becomes

c = (1/2) L (1/t_d + 1/t_u) = (t_d + t_u) L / (2 t_d t_u)

The flow velocity is now only a function of the transducer layout (L, X) and the measured transit
times t_u and t_d:

V = c² Δt / (2X) = [(t_u + t_d) L / (2 t_u t_d)]² Δt / (2X)
  = (L² / 8X) [(t_u + t_d)² / (t_u² t_d²)] Δt
  = (L² / 8X) [(t_u + t_d)² (t_u − t_d) / (t_u² t_d²)]

The above formula can be further simplified by utilizing the following approximation:

(t_u + t_d)² = 4 [(t_u + t_d)/2] [(t_u + t_d)/2] = 4 (t_u − Δt/2)(t_d + Δt/2)
            = 4 [t_u t_d + (Δt/2)(t_u − t_d) − Δt²/4]
            = 4 [t_u t_d + Δt²/2 − Δt²/4]
            = 4 [t_u t_d + Δt²/4]
            ≈ 4 t_u t_d
The flow velocity can therefore be written as

V = (L² / 8X) [(t_u + t_d)² (t_u − t_d) / (t_u² t_d²)]
  ≈ L² Δt / (2X t_u t_d)
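The simplified result can be checked numerically against the exact transit-time expressions for t_d and t_u given earlier; all numbers below (water-like sound speed, path length, flow speed) are illustrative:

```python
import math

def transit_time_velocity(t_u, t_d, L, X):
    """Flow velocity from upstream/downstream transit times, using the
    final simplified formula V = L**2 * (t_u - t_d) / (2 * X * t_u * t_d)."""
    return L**2 * (t_u - t_d) / (2 * X * t_u * t_d)

# Synthesize exact transit times from t_d = L/(c + V cos t), t_u = L/(c - V cos t)
# and confirm that the formula recovers the flow velocity V:
c, V, theta = 1480.0, 2.0, math.radians(45)   # sound speed, flow speed, beam angle
L = 0.2                                       # transducer separation (m)
X = L * math.cos(theta)                       # projected path length along the pipe
t_d = L / (c + V * math.cos(theta))
t_u = L / (c - V * math.cos(theta))
print(transit_time_velocity(t_u, t_d, L, X))  # ~2.0 (recovers V)
```

Note that this form needs only the geometry (L, X) and the two measured times, so the meter stays calibrated even as temperature changes the speed of sound.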

Types of Fluid Phases Gas (clean; dirty with limited applicability); liquid (clean, corrosive,
dirty; open channel and viscous with limited applicability)
Line Size: Inline model: 10 ∼ 1200 mm (0.4 ∼ 48 inch)
Clamped-on model: 75 mm (3 in) and up
Turndown Ratio: 100 : 1

Advantages No obstruction in the flow path, no pressure drop, no moving parts, low mainte-
nance cost, multi-path models have higher accuracy for wider ranges of Reynolds number, can be used
in corrosive or slurry fluid flow, portable models available for field analysis and diagnosis

Limitations Higher initial set-up cost, single path (one-beam) models may not be suitable for flow
velocities that vary over a wide range of Reynolds number

Review Questions
1. Justify that flow measurement is essential in many industries.
2. Discuss the types of flowmeters and explain any one in detail.
3. Discuss the criteria for selection of a flowmeter.
4. Explain any one type of differential pressure flowmeters.
5. Explain the working and applications of magnetic flowmeters.
6. What do you mean by positive displacement flowmeters?
7. Explain any one positive displacement flowmeter in detail using a sketch.
8. Write short notes on
a. Rotating vane
b. Coriolis flowmeters
c. Doppler ultrasonic flow meters
d. Orifice plate
e. Pitot tube
f. Magnetic flowmeters
9. Explain the working of ultrasonic flowmeters.
Index

A C
Accelerometers, 464 Calibration, 5, 35, 37, 38, 220, 414
Accelerometers capacitive, 467 Equipments used for Calibration, 37
Accuracy, 4, 7, 392 Standard Procedure, 36
Addendum, 304 Calibration of Gauge Block, 41
Air Gauge, 259 Calipers, 49
Alignment tests, 109 Centre-measuring calipers, 51
Allowance, 133 Machine travel calipers, 51
Amplification, 412 Rolling-mill calipers, 50
Amplifier, 422 Sliding calipers, 50
Differential, 422 Capacitors, 404
Differentiating, 424 Ceramic gauge-block, 65
Integrating, 424 CMM probes, 371
Inverting, 422 Coaxiality, 108
Summing, 423 Cold-Junction Compensation, 516
Analog-to-Digital Converter, 418 Combination set, 209
Angle Dekkor, 218 Comparator, 236
Angle Gauges, 203 Composite Error, 334
Attenuation, 413 concave radius, 368
Autocollimator, 90, 116, 213 Concentricity, 109
Average Roughness, 284 Constant Chord, 339
Conversion, 461
B Logarithmic, 417
Base tangent, 341 Ratiometric, 416
Beam, 443 Coordinate Measuring Machines, 367
Beam comparator, 80 Coriolis flowmeters, 547
Bearing ratio, 294 Creep, 438
Bench Micrometer, 307 Crest, 313
Best Size Wire, 312 CRO, 426
Bias, 4 Cross Sensitivity, 401
Bonded gauge, 535 Cylindrical Convex Radius, 370
Bourdon Tube, 476 Cylindricity, 105

D Calibration Error, 14
Data Acquisition System, 409 Characteristic Error, 13
Data Presentation, 391 Controllable Error, 14
Data Transmission, 390 Dynamic Error, 14
Dead Zone, 394 Environmental Error, 13
Deadweight Tester, 481 Loading Error, 13
Dedendum, 304 Random Error, 14
Definitions, 2 Reading Error, 12
Dial Calibration Tester, 39 Relative Error, 11
Dial gauges, 110 Static Error, 12
Dial Indicator, 239 Stylus Pressure Error, 14
Dial Thickness Gauges, 249 Excitation, 413
Diaphragm Pressure Gauge, 477
Digital Universal Caliper, 72 F
Digital-to-Analog Converter, 421 Filtering, 413
Displacement Measurement, 406 Fits, 133
Dovetail, 356 Clearance, 138
Drift, 5, 393 Interference, 138, 140
Span, 393 Transition, 138, 142
Zero, 393 Flank Error, 317
Zonal, 393 Flanks, 303
Drilling Machine, 116 Flatness, 80
Dynamic, 396 Floating Carriage Micrometer, 308
Dynamometers, 452 Flowmeters, 534
Foil Gauges, 443
E Force-measurement, 436
Eccentricity, 104 Force-ring, 446
Eddy-Current Transducer, 405
Electromagnetic flowmeters, 541 Form-tester, 109, 382
Electro-mechanical Gauges, 88 Fundamental Deviations, 132
Electro-mechanical Transducer, 404
Electronic comparator, 260 G
End Standard, 27 Gauge Factor, 522
End Bar, 28 Gauge length, 231
Slip Gauges, 28 Gauges, 167
Errors, 5, 397 Air, 169
Gross, 397 Bore, 174
In Gear, 343 Filler, 178
Loading, 398 Plug, 167
Misuse, 398 Radius, 177
Random, 398 Ring, 167
Systematic, 397 Snap, 169
Errors in measurement, 10 Splined, 177
Absolute Error, 10 Taper Limit, 175
Alignment Error, 12 Thread, 175
Avoidable Error, 14 gear inspection centres, 351

Gear Rolling Tester, 344 Limit Gauges, 162


Gear tooth micrometer, 343 Line Standard, 25
Gear Tooth Profile Measurement, 335 Linear Metrology, 48
Gear Tooth Vernier, 338 Linearity, 394
Gears, 326 Linearization, 413
Generalized measurement, 389 Load Cells, 441
Groove Comparator, 256 Load Sensors, 439
LVDT, 269, 406, 495
H
Hole-basis, 149 M
Hysteresis, 395, 446 Machine Vision, 290
Manometer U-Tube, 495
I Material standards, 24, 34
Impact, 488 Primary standards, 35
Included Angle, 367 Secondary standards, 34
Indexing tables, 222 Tertiary standards, 34
Inductive Dial Comparator, 272 Working standards, 34
Infrared, 451 McLeod Gauge, 498
Inline gauging machines, 395 Measurand, 5
Inspection templates, 181 Measurement, 7, 386
Interchangeability, 127 Measurement standard, 22
Interference, 222 MEMS, 486
Band, 224 Methods of measurements, 8
Patterns, 227 Metric units, 19
Principle, 222 Metrology, 2
Interferometry, 84, 222 Micrometer clinometer, 212
Internal Threads, 329 Micrometers, 59
Inverse Transducer, 403 Depth Micrometers, 70, 357
IPTS, 510 Digital Micrometer, 65
Isolation, 413 Inside Micrometer, 69
Outside Micrometer, 60
J Thread Micrometers, 68
Johansson Mikrokator, 254 Milling Machine, 114
Monochromatic light source, 225
Motor, 469
L Multiplexing, 413
Lag, 396
Laser, 82, 86
Laser probe, 394 N
Laser vision, 376 Natural frequency, 463
Lathe, 118 Natural Roughness, 283
Lay, 277 Newton’s second law, 462
Lead, 314 NPL flatness, 230
Least count, 55, 62 Numerical control, 367
LED, 430 Nutating Disc, 561

O Resolution, 6
Operational Amplifier (op-amp), 421 Resolution or Discrimination, 394
Optical Flats, 224 Response time, 6
Optical square, 88 Root, 303
Optical-Electrical Comparators, 256 Rotary transformer, 451
Optical-Mechanical Comparators, 254 Rotating Vane, 544
Orifice plate, 536 Roughness, 268
Oscillating Piston, 545 Roundness, 97
RTD, 503
P
Parallelism, 84 S
Performance test, 110 Screw Threads, 300
Photoresistors, 403 Selective assembly, 129
Piezoelectric, 455, 465 Sensitivity, 6, 393
Piezoelectric Devices, 404 Shaft-basis, 143
Pirani Gauge, 405 Shakers, 469
Piston Diameter Tester, 99 Shear-cell, 440
Pitch Errors, 304, 333 Shrinkage, 349
Pitch Measurement, 334 SI, 19
Pitot tube, 539 Sigma Mechanical Comparator, 248
Pneumatic Comparator, 256 Signal-conditioning, 407, 411
Pocket Surf, 294 Sine Bar, 205
Pre-amplifier, 468 Sine Centre, 210
Precision, 5, 7, 392 Skid, 277
Pressure gauges, 476 Slip gauges, 27
Primary Sensing, 389 Wringing of slip gauges, 29
Profile projector, 253, 335 Slip Ring, 450
Protractor, 199 Solex Comparator, 257
Proving Rings, 443 Spherical Convex Radius, 358
Pyrometers, 511 Spirit level, 75, 110
Optical, 512 Square Master, 90
Total Radiation, 514 Squareness, 87
Selective Radiation, 512 Squares, 200
Static Calibration, 395
Q Static, 391
Quartz force sensors, 445 Steel rule, 48
Stiffness, 461
R Straight Edges, 76, 110
Radian, 197 Straightness, 75
Radius gauges, 357 Strain gauge, 442, 456, 519
Rake angle, 304 Stroboscope, 468
Range, 6 Styli, 373
Readability, 6 Surface Plate, 83
Repeatability, 393, 438 Surface Roughness, 267
Reproducibility, 6, 393, 438 ‘S’, 443
558 Index

T Units Of Measurement, 14
Talysurf, 278 Units, 19
Taper, Measurement, 354 Universal Measuring Instrument, 71
Taylor’s Principles, 162 Universal Measuring Machine, 363
Temperature Scales, 493
Terminologies, 4 V
Test Mandrels, 110 Vacuum Gauge, 484
Thermistor, 403, 509 Vacuum Measurement, 482
Thermocouple, 498 Variable Conversion, 389
Thermometer, 496 Variable Manipulation, 390
Three-Wire Method, 313 Velocity Pickups, 463
Threshold, 395 Venturi Tube, 537
Tolerance, 133 Vernier bevel protractor, 201
Class, 133 Vernier Caliper, 51
Grade, 133 Vernier Clinometer, 210
Zone, 133 Vernier Depth Gauge, 58
Tolerances, 135 Vernier Height Gauge, 56
Bilateral, 135 Vibration, 463
Unilateral, 135
Tomlinson Surface Meter, 278 W
Torque Measurement, 448 Wavelength Standards, 31
Inline torque, 449 Waviness, 268
Reaction torque, 450 Wear Allowance, 183
Tow-Wire Method, 310 Wheatstone’s Bridge, 523
TPMS, 489
Traceability, 6
X
Transducers, 400
X–Y Plotter, 425
Transfer gauge, 112
Types, 2
Z
Zesis Ultra Optimeter, 255
U
Ultrasonic Flowmeters, 549
Uncertainty, 7
Plate - 1

(b) (c)
Fig. 2.2(b) International Standard Prototype Metre (c) Historical Standard platinum–iridium
metre bars

Fig. 2.3 End bars

(a) (b)
Fig. 2.4 Set of slip gauges: (a) Set of ceramic slip gauges (b) Set of cast-steel slip gauges
Plate - 2

Fig. 2.9 Computer-aided gauge calibration


(Mahr Gmbh Esslingen, Germany)

Fig. 2.10 Accessories for calibration


Plate - 3

Height measurement, measurement of grooves, depth measurement
Measurement in two coordinates, measurement using square probe, squareness measurement in X-Y plane
Fig. 3.11 Applications of height gauges
(Courtesy, Trimos SA Inc.)
Plate - 4



(a) Spirit level (b) Types of bubbles
Fig. 4.2 Spirit level and types of bubbles

(a) System (with straightness beam-splitter and straightness reflector) (b) Short-range straightness optics


Fig. 4.5 Laser measurement system
Plate - 5

(a) System

(b) Flatness mirrors and bases

(c) Angular optics


Fig. 4.12 Laser measurement system
Plate - 6

Fig. 4.52 Piston profile tester


(Courtesy, Kudale Instruments Pvt Ltd., Pune)

(a) Roundcyl 500 (b) User-friendly optional menu

Cylindricity Measurement, Flatness Measurement, Concentricity Measurement, Roundness Measurement

Fig. 4.53 Roundcyl-500 system


(Courtesy, Kudale Instruments Pvt Ltd., Pune)
Plate - 7

(a) Cylindrical (b) Turf (c) Flat

(d) Conical (e) Convex (f) Ridge or a valley

(g) Convex (h) Oval

Fig. 8.10(a) Different interference patterns.


(Courtesy, Metrology Laboratory, Sinhgad COE, Pune University, India)
Plate - 8

(a) Labels: light source, condenser lens, straight beams, test object, projection lens, screen, magnified image of test object

(b) (c) Labels: screen, projector lens, condenser lens

(d) Labels: main scale, vernier scale
Fig. 9.19 (a) Principle of profile projector, (b) Magnified image of small plastic threads,
(c) Magnified image of small gears of a rack, (d) Enlarged view of profile projector screen
(Courtesy, Metrology lab, Sinhgad College of Engg., Pune University, India)
Plate - 9

Fig. 9.25 Velocity differential-type air gauge with bar graph and digital display
(Figure shows measurement with air bore plug gauge)
(Courtesy, Mahr GMBH Esslingen)

Fig. 9.26 Digital height measuring instrument


Plate - 10

(a) (b)
Fig. 10.27 (a) Form Talysurf series surface roughness and form-measuring systems
(b) For measuring small parts with outside diameters up to 25 mm
(Courtesy, Mahr Gmbh Esslingen)

(a) (b)
Fig. 10.28 (a) Universal stand: a heavy-duty, multipurpose stand equipped with an
adjustable clamp and bracket for measuring a wide variety of parts, up to 231 mm/8.375 in
tall, with or without fixturing. (b) Height adjustment from 0 mm to 300 mm
(0 in to 11.81 in) of the PFM mounting device by means of a hand wheel.
Table surface = 400 mm x 250 mm (15.75 in x 9.84 in), granite
(Courtesy, Mahr Gmbh Esslingen)
Plate - 11

Labels: measuring wires, micrometer thimble graduated in 0.002 mm, fiducial indicator,
centre, base, top slide, lower slide

Fig. 11.13 Floating carriage micrometer


Plate - 12

Labels: screen, magnified profile of test gear, test gear

Fig. 12.8 Optical projection method of gear profile checking


Plate - 13

Fig. 14.3 Universal measuring machine (828 Model)


(Courtesy, Mahr GMBH Esslingen)
Plate - 14

(a)

(b) (c)

(d)
(a) Inspection of dial gauges (b) Internal measurement on plain ring gauges with one pair of calipers (c) Measurement of
a limit plug gauge located on a self-centering support (d) Universal one-coordinate measuring instrument for accurate
internal and external measurements.

Fig. 14.6 Different models of universal measuring machines (Model 828)


(Courtesy, Mahr GMBH Esslingen)
Plate - 15

Fig. 14.9 New horizontal arm-type CMM inspecting the profile of the car body
(Courtesy, Mitutoyo Company)

Fig. 14.10 CNC CMM provides a huge measuring range


(Courtesy, Mitutoyo Company)
Plate - 16

Fig. 14.19 Seam tracking of welded blanks

Fig. 14.20

Fig. 14.21 Optical sensor


(Courtesy, Mahr GMBH Esslingen)
Plate - 17

Fig. 14.22 Touch probe

Fig. 14.23 Laser probe


(Courtesy, Mahr GMBH Esslingen)
Plate - 18

Fig. 14.24 Contour scanning

Fig. 14.25 Optical 3D measuring machines


(Courtesy, Mahr GMBH Esslingen)
Plate - 19

Fig. 14.26 Overview picture of inspection cell

Inspection
robot

Fig. 14.27 Close-up of laser camera
