
PSG COLLEGE OF TECHNOLOGY

DEPARTMENT OF MECHANICAL ENGINEERING

METROLOGY AND INSTRUMENTATION-15M501

CALIBRATION OF INSTRUMENTS AND QUALITY STANDARDS

Submitted by:
Ganesh Kumar (16M211)
Gaushik.C (16M212)
Gokulanand (16M213)
Gopikrishnan (16M214)
Harish (16M215)
Kannan.W (16M216)
Krishnaraju (16M217)
Karthikeyan (16M218)
Linkesh (16M219)
CALIBRATION
A set of operations that establish, under specified conditions, the
relationship between values of quantities indicated by a measuring
instrument or measuring system, or values represented by a material
measure or a reference material, and the corresponding values realized by
standards.
 Calibration is the comparison of a measurement device (an unknown)
against an equal or better standard.
 A standard in a measurement is considered the reference; it is the one in
the comparison taken to be the more correct of the two.
 Calibration finds out how far the unknown is from the standard.
Instrument error can occur due to a variety of factors:
 Drift, environment, electrical supply, addition of components to the
output loop, process changes, etc.
 Since a calibration is performed by comparing or applying a known
signal to the instrument under test, errors are detected by
performing a calibration.
 An error is the algebraic difference between the indication and the
actual value of the measured variable.
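As a minimal sketch, the error definition above can be expressed in a few lines of Python (the numeric values are hypothetical):

```python
def instrument_error(indicated, actual):
    """Algebraic difference between the indication and the actual value."""
    return indicated - actual

# Hypothetical example: a gauge reads 10.012 mm against a 10.000 mm standard
print(instrument_error(10.012, 10.000))   # positive error: the instrument reads high
```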

CHARACTERISTICS OF A CALIBRATION
Calibration Tolerance: Every calibration should be performed to a specified
tolerance. The terms tolerance and accuracy are often used incorrectly as
interchangeable.
Accuracy Ratio: This term was used in the past to describe the relationship
between the accuracy of the test standard and the accuracy of the instrument
under test.
Calibration tolerances should be determined from a combination of factors.
These factors include:
• Requirements of the process
• Capability of available test equipment
• Consistency with similar instruments at your facility
• Manufacturer’s specified tolerance
Accuracy: The ratio of the error to the output, expressed as a percentage of
reading.
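For illustration, the percent-of-reading figure and a tolerance check can be sketched as follows (the readings and the tolerance value are hypothetical):

```python
def percent_of_reading_error(indicated, actual):
    """Error expressed as a percentage of the indicated reading."""
    return abs(indicated - actual) / abs(indicated) * 100.0

def within_tolerance(indicated, actual, tolerance):
    """Pass/fail check of a reading against a specified calibration tolerance."""
    return abs(indicated - actual) <= tolerance

print(round(percent_of_reading_error(50.05, 50.00), 3))  # roughly 0.1 % of reading
print(within_tolerance(50.05, 50.00, 0.1))               # True: inside a 0.1-unit tolerance
```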
Traceability: The property of a result of a measurement whereby it can be
related to appropriate standards, generally national or international standards,
through an unbroken chain of comparisons.
Eg: National Bureau of Standards (NBS), maintains the nationally recognized
standards.
IMPORTANT CONDITIONS FOR CALIBRATION
 Calibration should be done using a measurement system with proper
accuracy, stability and range.
 Measuring equipment should have the desired accuracy and precision.
 All test and measuring instruments should be securely and durably
labelled.
 Records should be maintained for all test and measuring equipment
included in the calibration system.
 It should be ensured that the environmental conditions (temperature,
relative humidity, cleanliness, vibration, electromagnetic interference,
power) are suitable for calibration and are maintained.

CALIBRATION OF MEASURING INSTRUMENTS:


We have to calibrate some of the measuring instruments, like the vernier
caliper, micrometer, slip gauges and gauge blocks, for better results, good
reliability and accuracy.

CALIBRATION OF VERNIER CALIPER:


Vernier calipers fall under the category of high-precision measuring
instruments; they provide very accurate measurements. This is possible
because of the vernier scale attached to the main scale.

The following procedure should be followed for calibration:

•Caliper jaws should be cleaned and make sure they are free of dirt

•Make sure that the movement of gear is proper without any hindrance.

•Bring the jaws into contact and check the dial for zero error; if it does not
read zero, manually set it to zero

•Insert a 0.5 inch (12.7 mm) standard gauge block between the jaws as for an
outside measurement; make sure that the jaws are in contact with the block,
and record the readings accurate to three decimal places. (Take three
readings to reduce inconsistency.)

•The same procedure is repeated for 1 inch (25.4 mm) and 4 inch (101.6 mm)
standard gauge blocks. Note the readings and compare; with the help of these
results the vernier caliper is calibrated.

•To calibrate the internal jaws, set them to 0.5 inch (12.7mm) and use the locking
screw to lock their position. Then use another calibrated Vernier caliper to
measure the distance between the jaws. Record the readings and compare.

•The above step is repeated for 25.4mm and 101.6mm standard gauge blocks.

•For calibrating height measurement, set a 12.7 mm gauge block on a flat
surface, then place the caliper vertically so that its bottom flat surface rests
on top of the gauge block. Now extend the depth-measuring stick to touch the
ground and note the reading.
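The averaging of the three readings taken at each gauge block can be sketched as follows (the readings shown are hypothetical):

```python
def mean_deviation(readings_mm, nominal_mm):
    """Average of repeated readings minus the gauge-block value."""
    return sum(readings_mm) / len(readings_mm) - nominal_mm

# Hypothetical readings on a 12.7 mm (0.5 inch) gauge block
readings = [12.698, 12.701, 12.700]
print(round(mean_deviation(readings, 12.700), 4))
```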
MICROMETER:
A micrometer is a precision measuring instrument, used to obtain very
fine measurements and available in metric and imperial versions.

CALIBRATION OF MICROMETER:
In order to calibrate a micrometer, the following procedure should be followed:

•Obtain standard gauge blocks.

•Keep the micrometer in stable room temperature before starting the calibration
process

•Move the spindle towards the anvil until they make contact to check that the
primary pointer on the thimble scale is in line with the reference line of the barrel
scale. If not so, use the adjusting wrench to achieve this zero point alignment.

•Spin the ratchet anticlockwise until enough space is present between the anvil
and the spindle to accommodate the gauge block.

•Place the gauge block between the anvil and the spindle.

•Keep spinning the ratchet gently until 3 clicks are heard.

•Check that both anvil and spindle are touching the gauge block evenly.
•Set the lock nut while the micrometer is still on the gauge block.

•Read off the value from the barrel scale to obtain a reading to the nearest half
millimeter.

•Read off the value from the thimble scale that is parallel with the reference line
of the barrel scale.

•Add the barrel scale value and the thimble scale value to obtain the total
measurement reading.
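Adding the two scale values can be sketched as below (the readings used are hypothetical):

```python
def micrometer_reading(barrel_mm, thimble_divisions, thimble_step_mm=0.01):
    """Total reading = barrel value (to the nearest 0.5 mm) + thimble divisions."""
    return barrel_mm + thimble_divisions * thimble_step_mm

print(micrometer_reading(12.5, 20))  # 12.5 mm + 0.20 mm = 12.7 mm
```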

•Compare the reading obtained with the actual value of the gauge block to
determine whether a lead error exists.

•Repeat the calibration procedure with different successive gauge blocks to
calibrate the micrometer at several points throughout its range.

•If readings vary by more than ±0.0003 mm to ±0.0005 mm, record this lead
error.

•Compensate any readings obtained with the micrometer by the relevant lead
error if the micrometer cannot be immediately replaced or repaired.
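A sketch of the record-and-compensate step, assuming the tolerance band quoted above (all readings are hypothetical):

```python
TOLERANCE_MM = 0.0005   # upper limit of the band quoted above

def lead_error(measured_mm, block_mm):
    """Lead error = micrometer reading minus the gauge-block value."""
    return measured_mm - block_mm

def compensated(reading_mm, error_mm):
    """Correct subsequent readings by the recorded lead error."""
    return reading_mm - error_mm

err = lead_error(25.4009, 25.4000)    # hypothetical out-of-tolerance micrometer
print(abs(err) > TOLERANCE_MM)        # True: record this lead error
print(round(compensated(18.2309, err), 4))
```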

Calibration of feeler gauge


A feeler gauge is a tool used to measure gap widths. Feeler gauges are mostly
used in engineering to measure the clearance between two parts. A feeler gauge
is typically made up of a series of metal blades (often tempered steel) each
ground to a specific thickness which is marked in thousandths or millimeters.
Many models are designed so that a technician can stack multiple blades
together to determine clearance.
The feeler gauge blade to be calibrated is first cleaned.
The flatness of the blades is tested using monochromatic light, and the
interference patterns are checked to assess the flatness of each blade.
The feeler gauges are calibrated using a fine dial indicator. A reference point
is obtained from a standard blade and the dial is held at that position; the
blade to be calibrated is then placed under the standardized dial indicator and
the dial reading is taken.
The difference between the readings of the standard and the test blades is
considered the error, and the zero correction is applied.
The feeler blade is now calibrated to measure clearances.

FEELER GAUGE
Calibration of dial indicators

1. In various contexts of science, technology, and manufacturing, an indicator
is any of various instruments used to accurately measure small distances
and angles, and amplify them to make them more obvious. The name
comes from the concept of indicating to the user that which their naked
eye cannot discern; such as the presence, or exact quantity, of some small
distance (for example, a small height difference between two flat
surfaces, a slight lack of concentricity between two cylinders, or other
small physical deviations).
2. Many indicators have a dial display, in which a needle points to
graduations in a circular array around the dial. Such indicators, of which
there are several types, are often called dial indicators.

The dial gauges can be calibrated with the help of a quick jack with various
types of dials.
Put the dial gauge in the dial stand, set it up to a certain height, and bring it
into contact with the quick jack; mark zero on the dial. Then move the stand
horizontally to read a value on the quick jack; the dial will register a value
due to the change in height of the quick jack.
Reverse the direction of horizontal movement back to the starting position to
obtain the zero value again; any zero error, if present, can now be noticed,
and the zero correction is applied.
Hence the dial indicators are calibrated in this manner.

DIAL INDICATOR
Slip Gauges
Slip gauges are the universally accepted ‘standard of length’ in industries. These
are the simplest possible means of measuring linear dimensions very accurately.

Need for slip Gauges

For tool-room and other precision work, the ordinary methods of measurement
are not always accurate. Micrometers and vernier callipers can be used to
check tolerances within 0.002 to 0.02 mm, but for finer tolerances they are
not effective. Thus there is a need for an instrument which can measure to
finer tolerance limits. The means to do so are ‘slip gauges’. They can be used
to measure tolerances in the range of 0.001 to 0.0005 mm very accurately.

Slip Gauges Sets:


Gauge blocks are available in sets with steps of 10, 1, 0.1, 0.01 and
0.001 mm. On small blocks the size is marked on the measuring face, and
large blocks are marked on a side surface.

The sets are available in ‘Metric’ and ‘English’ units. The letter ‘E’ is used
for inch units (English units) and the letter ‘M’ is used for mm units (Metric
units). The number of pieces in a set is given by the number accompanying
the letter E or M.

For Example, E 81 refers to a set whose blocks are in inch unit and 81 in
number. Similarly M 45 refers to a set whose blocks are in mm units and are 45
in number.
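For illustration, selecting blocks for a given size can be sketched by brute force over a hypothetical subset of a metric set (real sets and the usual digit-by-digit selection rules differ; this is only a sketch):

```python
from itertools import combinations

# Hypothetical subset of a metric slip-gauge set (values in mm)
SET_MM = [1.005, 1.01, 1.02, 1.1, 1.2, 1.3, 1.4,
          0.5, 1.0, 2.0, 3.0, 5.0, 9.0, 10.0, 20.0, 30.0, 50.0]

def build(size_mm, max_blocks=4):
    """Return the fewest blocks from SET_MM that wring to the target size."""
    for n in range(1, max_blocks + 1):
        for combo in combinations(SET_MM, n):
            if abs(sum(combo) - size_mm) < 1e-9:
                return combo
    return None

print(build(32.5))   # a three-block combination such as (1.1, 1.4, 30.0)
```

Fewer blocks means fewer wrung joints and therefore less accumulated error, which is why the search tries the smallest combinations first.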

Protective Slips:
Apart from the above, two extra gauges of 2.5 mm each are also supplied as
protective slips. The purpose of protective slips is to prolong the life of the
slip gauges. These are often made of the same material as the rest of the set,
or sometimes they may be made from tungsten carbide, which is a
wear-resistant material. Protective slips are identified by the letter ‘P’ marked
on one face. They are placed at each end of the assembled blocks, to ensure
that any wear or damage is confined to these two blocks.

Wringing Process:
If two blocks are twisted together under a certain pressure, it will be found
that, due to molecular attraction and atmospheric pressure, they adhere to
each other quite firmly. This process is known as wringing. It is very useful
for producing a required size by assembling several gauge blocks.

Before wringing blocks, wipe them clean using a cloth, chamois leather, or a
cleansing tissue. Vaseline, grease or dust should be removed with petroleum.

Start wringing with the largest sizes first. Place two faces together at right
angles as shown in figure, and with pressure, twist through 90°. This action
should be smooth and with constant pressure.

When the largest gauges have been assembled, follow same process with the
others in order of decreasing size of blocks.

Their uses are:

1. They are universally accepted as a “standard of length”.

2. They are used for direct precise measurement where high accuracy of the
workpiece being measured is required.

3. They are used with high-magnification comparators to establish the sizes
of gauge blocks.

4. They are used for checking the accuracy of measuring instruments.

5. They are used for setting up a comparator to a specific dimension.

6. They are used to check a batch of components quickly and accurately.

Following points should be kept in mind regarding the care of slip


gauges:
 Before wringing the blocks together, ensure that their faces are perfectly
clean.
 Never tamper with the blocks.
 Gauges should not be wrung together for a long time.
 Use the minimum number of gauges for a combination.
 Check accuracy at appropriate intervals.
 Use the 2.5 mm protective slips whenever possible.

General Cares and Rules in Measurement

 The most common form of error associated with measuring instruments is


the parallax error. Parallax error occurs when an object is observed from
an angle. This makes the object appear at a slightly different position than
it really is and can lead us to take a wrong reading on a measuring scale.
 Care should be taken that all readings taken during the measurement are
in the same unit system.
 Make sure that the instrument does not have zero error.
 The surface of the object which needs to be measured should be cleaned
and dried with a cloth soaked in cleaning oil.
 Keep the instrument in a cover to protect against corrosion.
 Calibration ensures that your equipment performs within approved
tolerances and delivers on safety and quality.
 Never put precision measuring tools together with hand tools, such as
cutting tools, files, hammers and drills for the fear of bumping and
damaging the precision measuring tools.
 Precise measurement of workpieces should ideally be carried out at a
temperature of about 70 ºF. Since an ideal situation is not always possible,
any error can be minimized by letting the workpiece and the measuring tool
share the same temperature for some time prior to the measurement.
Precision measuring tools should not be placed in direct sunshine or near
any other heat source, because accurate measurements will not be achieved
as the temperature increases.
 Precision measuring tools should never be put near any magnetic material
such as a magnetic worktable, to avoid being magnetized.
 Tools should be cleaned after use. Perspiration in your hands can be a bit
caustic and react slowly with metallic materials so it is a good practice to
lightly oil the tools to minimize any chemical reaction that might take
place.

ISO 9000 Quality Standards


ISO 9000 was first published in 1987 by ISO. ISO 9000 is a set of international
standards on quality management and quality assurance developed to help
companies effectively document the quality system elements to be implemented
to maintain an efficient quality system. They are not specific to any one industry
and can be applied to organizations of any size. ISO standards are reviewed
every five years and revised if needed. This helps ensure they remain useful
tools for the marketplace.

The ISO 9000 family contains these standards:

 ISO 9001:2015: Quality management systems - Requirements


 ISO 9000:2015: Quality management systems - Fundamentals and
vocabulary (definitions)
 ISO 9004:2009: Quality management systems – Managing for the sustained
success of an organization (continuous improvement)
 ISO 19011:2011: Guidelines for auditing management systems

What does “conformity to ISO 9001” mean?

This means that your supplier has established a systematic approach to quality
management. A statement of conformity to ISO 9001 should not, however, be
considered a substitute for a declaration or statement of product or service
conformity.

Can suppliers claim that their products or services meet ISO 9001?

No. The reference to ISO 9001 indicates that the supplier has a quality
management system that meets the requirements of ISO 9001.

The ISO 9000 series is based on seven quality management principles (QMP).
The seven quality management principles are:

 QMP 1 – Customer focus


 QMP 2 – Leadership
 QMP 3 – Engagement of people
 QMP 4 – Process approach
 QMP 5 – Improvement
 QMP 6 – Evidence-based decision making
 QMP 7 – Relationship management

List of Mandatory documents required for ISO 9001:2015:

 Monitoring and measuring equipment calibration records


 Records of training, skills, experience and qualifications
 Product/service requirements review records
 Records of design and development output reviews
 Records of design and development inputs
 Records of design and development controls
 Records of design and development outputs
 Records of design and development changes
 Characteristics of the product to be produced and service to be provided
 Records of customer property
 Production/service provision change control records
 Records of conformity of product/service with acceptance criteria
 Records of nonconforming outputs
 Monitoring and measurement results
 Internal audit program
 Results of internal audits
 Results of the management review
 Results of corrective actions

Optical Comparators
A comparator is a precision instrument used to compare the dimensions of a
given working component with the actual working standard.
An optical comparator is a type of comparator that works by optical means.

In an optical comparator, a light source and a reflecting surface (mirror) are
used as the optical means.

An incident ray will hit the mirror and gets reflected. This ray will be projected
on the scale.

In the picture below, an incident ray OA is projected at an angle θ onto the
mirror and is reflected at the same angle θ.

The mirror can be tilted, and the tilting of the mirror is controlled by the
measuring plunger. This movement is projected onto the graduated scale.

So if the mirror tilts by an angle α, the reflected ray moves through an angle
2α. This is the working principle of optical comparators.
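The doubling of the tilt angle can be put into a short numeric sketch (the mirror tilt and the scale distance are hypothetical values):

```python
import math

def reflected_turn_deg(mirror_tilt_deg):
    """Tilting the mirror by an angle α turns the reflected ray through 2α."""
    return 2.0 * mirror_tilt_deg

def spot_shift_mm(mirror_tilt_deg, scale_distance_mm):
    """Shift of the projected spot on a scale placed at the given distance."""
    return scale_distance_mm * math.tan(math.radians(reflected_turn_deg(mirror_tilt_deg)))

print(reflected_turn_deg(0.5))             # 1.0 degree of beam deflection
print(round(spot_shift_mm(0.5, 1000), 2))  # about 17.46 mm on a scale 1 m away
```

The long optical path acts as a weightless lever: moving the scale farther away increases the magnification without adding moving parts.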
Mechanical-Optical Comparators
Mechanical-optical comparators are the same as optical comparators, except
that the plunger in a mechanical-optical comparator is replaced with a
pivoted-lever mechanism.

Working Principle of Mechanical-Optical Comparator

1. The lever acts as the plunger.

2. The mirror is connected to the lever mechanism.
3. The lever is held at the pivot point.
4. The lengths L1 and L2 determine the magnification; L2 should be greater
than L1 so that more magnification is achieved.
5. When the measuring tip contacts the workpiece, the lever starts to rotate
about the pivot.
6. The mirror tilts accordingly about the pivot point.
7. The reflected ray is projected on the graduated scale and shown as the
reading.
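As a rough sketch of point 4, the lever ratio combined with the mirror's angle doubling gives the overall magnification. The 2× optical gain and the lengths below are illustrative assumptions; in a real instrument the distance to the scale adds further gain:

```python
def overall_magnification(l1_mm, l2_mm, optical_gain=2.0):
    """Lever ratio L2/L1 multiplied by the mirror's doubling of the tilt angle."""
    return (l2_mm / l1_mm) * optical_gain

# Assumed lengths: L1 = 5 mm, L2 = 100 mm -> lever ratio 20, overall 40
print(overall_magnification(5.0, 100.0))
```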

Advantages of Optical Comparators

1. High accuracy is achieved, since the instrument has very few moving parts.
2. Parallax error is avoided.
3. Less weight compared to other comparators, due to fewer parts.
4. Very suitable for precision measurements, due to the high magnification
that can be achieved.

Disadvantages of Optical Comparators

1. A separate electrical source is needed.

2. Optical components are expensive.
3. Not suitable for continuous use, because the scale must be viewed through
an eyepiece.
4. Suitable for use in a darkroom only.

Cares in handling Gauges:

 Never leave your gages in contact with dirt or oil for long periods of time.
Oils can corrode the polished surface of your gage. This includes skin oils.

 Wash your hands before handling gages. The natural acids and alkalinity
on our skin can cause rusting or corrosion. Hold the gages by the ends only,
to minimize contact with skin.

 Use a soft, non-abrasive and clean cloth to wipe your gages clean, before
and after use. A dirty, chip-filled cloth can mar the highly finished surface.

 After calibration, use a good cleaning solvent to remove dirt, oils and
fingerprints, then wipe dry with another soft, clean cloth.

Electric Comparators
An electrical comparator consists of a base, a stand, a power unit, a measuring
unit, an indication unit and an amplification unit. In this comparator, the
movement of the measuring contact is changed into an electrical signal, and
this signal is recorded by a device calibrated in terms of plunger movement.
For this, an AC Wheatstone bridge circuit including a galvanometer is used.

This electric comparator comprises a stylus and an iron armature suspended
on a spring between two coils W and W1. If the armature is located midway
between the coils W and W1, the inductances of these coils are equal; the
Wheatstone bridge is balanced and this forms the datum line.
When the workpiece is placed under the stylus for measurement, the armature
is raised or lowered according to the difference of the component size from
the datum. This upsets the balance of the Wheatstone bridge, resulting in an
unbalanced current flow. This current corresponds directly to the difference
in size of the component and, after being amplified by an amplifier, is
indicated by the galvanometer. These comparators have a precision of
0.001 mm. The main advantages of these comparators are no moving parts,
sensitivity, and accuracy over long periods.
Linear Variable Differential Transformer:
An LVDT (linear variable differential transformer) is an electromechanical
sensor used to convert mechanical motion or vibration, specifically rectilinear
motion, into a variable electrical current, voltage or electrical signal, and the
reverse. LVDTs are used primarily in automatic control systems or as
mechanical motion sensors in measurement technologies.
In short, a linear transducer provides a voltage output related to the parameter
being measured, for example force, for simple signal conditioning.
LVDT sensors are sensitive to electromagnetic interference. Shorter
connection cables reduce electrical resistance and help eliminate significant
errors. A linear displacement transducer requires three to four connection
wires for power supply and signal delivery.

Physically, the LVDT construction is a hollow metallic cylinder in which a
shaft of smaller diameter moves freely back and forth along the cylinder’s
long axis. The shaft, or pushrod, ends in a magnetically conductive core
which must be within the cylinder, or coil assembly, when the device is
operating.
MECHANICAL COMPARATORS:
A mechanical comparator employs mechanical means for magnifying small
deviations. The magnification of the small movements of the indicator in all
mechanical comparators is effected by means of levers, gear trains or a
combination of these elements. Mechanical comparators are available with
magnifications from 300:1 to 5000:1. They are mostly used for inspection of
small parts machined to close limits.
Dial indicator:
A dial indicator or dial gauge is used as a mechanical comparator. The
essential parts of the instrument resemble a small clock with a plunger
projecting at the bottom, as shown in fig. A very slight movement of the
plunger is magnified and indicated by the dial pointer. The dial is graduated
into 100 divisions. A full revolution of the pointer about this scale corresponds
to 1 mm travel of the plunger. Thus, a turn of the pointer by one scale division
represents a plunger travel of 0.01 mm.
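The scale arithmetic above can be sketched directly:

```python
DIVISIONS_PER_REV = 100   # graduations around the dial
MM_PER_REV = 1.0          # plunger travel for one full revolution of the pointer

def plunger_travel_mm(revolutions, divisions):
    """Each division represents 1 mm / 100 = 0.01 mm of plunger travel."""
    return revolutions * MM_PER_REV + divisions * (MM_PER_REV / DIVISIONS_PER_REV)

print(round(plunger_travel_mm(2, 37), 2))  # 2 revolutions + 37 divisions = 2.37 mm
```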
Experimental setup:
The whole setup consists of a worktable, a dial indicator and a vertical post.
The dial indicator is fitted to the vertical post by an adjusting screw as shown
in fig. The vertical post is fitted on the worktable; the top surface of the
worktable is finely finished. The dial gauge can be adjusted vertically and
locked in position by a screw.
Mechanism:
The stem has rack teeth. A set of gears engages with the rack. The pointer is
connected to a small pinion. The small pinion is independently hinged, i.e. it
is not connected to the stem directly. The vertical movement of the stem is
transmitted to the pointer through the set of gears. A spring applies a constant
downward pressure to the stem.

Procedure:
Let us assume that the required height of the component is 32.5mm. Initially this
height is built up with slip gauges. The slip gauge blocks are placed under the
stem of the dial gauge. The pointer in the dial gauge is adjusted to zero. The slip
gauges are removed.

Now the component to be checked is introduced under the stem of the dial gauge.
If there is any deviation in the height of the component, it will be indicated by the
pointer.
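A numeric sketch of the procedure, assuming the dial was zeroed on the 32.5 mm slip-gauge stack (the deviations shown are hypothetical):

```python
NOMINAL_MM = 32.5   # height built up with slip gauges; dial zeroed on this stack

def component_height_mm(dial_deviation_mm):
    """The dial shows only the deviation from the slip-gauge datum."""
    return NOMINAL_MM + dial_deviation_mm

print(component_height_mm(+0.03))   # 32.53 mm: component is 0.03 mm oversize
print(component_height_mm(-0.02))   # 32.48 mm: component is 0.02 mm undersize
```

This is the essence of comparison measurement: the instrument never measures 32.5 mm directly, only the small difference from the standard.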
