
Evaluating Calibration Tolerances Can Save You Trouble

Is Losing Business Worth the Risk?


An instrument that does not meet its specifications is bad
for the owner or operator of the instrument and bad for
the manufacturer. It can be costly. The owner may need
to take special action to mitigate the problem, perhaps
even recall work already done. The manufacturer may lose
customers’ trust. Clearly, it is a situation to be avoided. Yet
without due consideration, out-of-tolerance conditions
may be much more likely to occur than they should be.

A calibration is a comparison of measuring equipment against a standard instrument of higher accuracy to detect,
correlate, adjust, rectify and document the accuracy of the instrument being compared.

Typically, calibration of an instrument is checked at several points throughout the calibration range of the instrument.
The calibration range is defined as “the region between the limits within which a quantity is measured, received or
transmitted, expressed by stating the lower and upper range values.” The limits are defined by the zero and span values.
The zero value is the lower end of the range. Span is defined as the algebraic difference between the upper and lower
range values. The calibration range may differ from the instrument range, which refers to the capability of the
instrument. For example, an electronic pressure transmitter may have a nameplate instrument range of 0–750 pounds
per square inch, gauge (psig) and output of 4-to-20 milliamps (mA). However, the engineer has determined the
instrument will be calibrated for 0-to-300 psig = 4-to-20 mA. Therefore, the calibration range would be specified as 0-to-
300 psig = 4-to-20 mA. In this example, the zero input value is 0 psig and zero output value is 4 mA. The input span is 300
psig and the output span is 16 mA. Different terms may be used at your facility. Just be careful not to confuse the range
the instrument is capable of with the range for which the instrument has been calibrated.

What are the Characteristics of a Calibration?

Calibration Tolerance: Every calibration should be performed to a specified tolerance. The terms tolerance and accuracy
are often used incorrectly. In ISA’s The Automation, Systems, and Instrumentation Dictionary, the definitions for each
are as follows:

Accuracy: The ratio of the error to the full scale output or the ratio of the error to the output, expressed in percent span
or percent reading, respectively.

Tolerance: Permissible deviation from a specified value; may be expressed in measurement units, percent of span, or
percent of reading.

As you can see from the definitions, there are subtle differences between the terms. It is recommended that the
tolerance, specified in measurement units, be used for the calibration requirements performed at your facility. By
specifying an actual value, mistakes caused by calculating percentages of span or reading are eliminated. Tolerances
should also be specified in the same units that are measured during the calibration.
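
As a small illustration of why stating the tolerance in measurement units avoids arithmetic mistakes, here is a sketch
that converts a percent-of-span figure into units once, up front. The 0.5% of span value is purely illustrative; the
16 mA figure is the output span from the earlier example.

    # Convert a percent-of-span accuracy statement into a tolerance in measurement units.
    # The 0.5%-of-span figure is illustrative only; 16 mA is the 4-to-20 mA output span.
    def tolerance_in_units(percent_of_span, span):
        """Return the permissible deviation expressed in the measured units."""
        return (percent_of_span / 100.0) * span

    output_span_ma = 16.0
    tol_ma = tolerance_in_units(0.5, output_span_ma)
    print(f"A 0.5% of span tolerance is +/- {tol_ma:.3f} mA")   # +/- 0.080 mA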

All test equipment readouts are accurate only to a certain level of uncertainty, or tolerance. The best test equipment
has LOW uncertainties, but NO test equipment can give you a 100% correct output or input reading - there is a tolerance
for all equipment.

For example, the Bird 43 series wattmeter and wattmeter elements have uncertainties of ±2% of full scale on the
meter and ±5% of FULL SCALE on the elements. As an example of how this can affect your accuracy and readings, let us
assume that you have a 15 watt (RMS) FM transmitter in front of you. You have a Bird 43 wattmeter, a 50 watt FULL SCALE
element and a 100 watt dummy load to connect to the output of the Bird 43 to measure the output power of your
transmitter. You connect the transmitter to the input of the Bird 43, drop the element in and orient it properly, and
connect the ‘dummy load’ to the output. Energizing the transmitter, you see that the needle comes to rest at 14.9
watts on the Bird wattmeter dial.

Many field technicians I know would consider this 14.9 watt reading to be ‘gospel’ and conclude that the transmitter is
obviously putting out roughly 1/10 watt below its rated 15 watts.

But is this necessarily the case?

The answer is no, this is NOT necessarily the case, if you examine the uncertainty (measurement tolerance) of the Bird
43 and the element used. For the sake of argument, let us assume the Bird 43 and the element you are using were
calibrated together, so the 2% uncertainty of the meter proper is “washed out”. This leaves us with an uncertainty of up
to ±5% of full scale on any reading taken with this wattmeter and element combination. What does this mean for your
14.9 watt reading? It means that the actual power being put out by the transmitter could be anything from 12.4
watts to 17.4 watts. (All readings are ±5% of FULL SCALE; on a 50 watt element, the uncertainty of any reading is
therefore ±2.5 watts.)
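
The bounds above follow directly from the percent-of-full-scale specification. Here is a short sketch of that
calculation, using the 50 watt element, the ±5% figure and the 14.9 watt reading from the example:

    # Worst-case bounds on a reading when uncertainty is stated as a percent of FULL SCALE.
    def reading_bounds(reading, full_scale, percent_of_full_scale):
        """Return (low, high) limits for the true value, given a full-scale uncertainty spec."""
        uncertainty = (percent_of_full_scale / 100.0) * full_scale
        return reading - uncertainty, reading + uncertainty

    low, high = reading_bounds(14.9, full_scale=50.0, percent_of_full_scale=5.0)
    print(f"True power could be anywhere from {low:.1f} to {high:.1f} watts")  # 12.4 to 17.4 watts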

One time, we had calibrated a communications test box, a unit similar to an IFR 1200 or Motorola 2600
communications test set. The customer called up, stating that he was seeing up to a 3 dB difference between the
transmit levels and the receive levels on the unit.

I reprinted the test data from the calibration so that I had it in front of me when I spoke with the customer. While on
the phone, the customer told me the frequencies he had been testing at, and the differences he was seeing. I noted to
myself that I was seeing the exact same differences he was seeing at these frequencies - but the unit was in tolerance,
and had passed calibration. I then asked the customer if he was aware that the receiver level specification for this unit
was ± 4 dB, and the generator specification for this unit was ±4 dB. Also, was he aware that the generator and the
receiver were specified independently of one another?

Silence on the other end of the line for a moment. Then, “No, I did not realize this. In other words, you are telling me
that to perform the measurements I desire to perform, I need better test equipment?” I agreed with the customer. I
asked him if he had a specification sheet for his equipment; he did not, so I faxed him the specifications for his
communications test box from the service manual we had in the technical library.

With the receiver carrying a ±4 dB uncertainty and the generator carrying a ±4 dB uncertainty, specified independently,
the readings of the generator and the receiver could differ by nearly 8 dB (each sitting at the opposite extreme of its
own tolerance) and the communications test box would still be within manufacturer’s calibration tolerances.
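
A rough sketch of that worst-case arithmetic follows. The ±4 dB figures are the specifications quoted above; treating
the worst case as the simple sum of the two independent tolerances is an assumption of this illustration, not a
statistical uncertainty analysis.

    # Worst-case apparent difference between two independently specified instruments.
    # Assumes the simple (non-statistical) case: each sits at the opposite edge of its tolerance.
    def worst_case_difference(tol_a_db, tol_b_db):
        """Return the largest level difference that can occur with both instruments in spec."""
        return tol_a_db + tol_b_db

    generator_tol = 4.0   # +/- 4 dB generator specification
    receiver_tol = 4.0    # +/- 4 dB receiver specification
    print(f"Readings can disagree by up to {worst_case_difference(generator_tol, receiver_tol)} dB "
          "and both instruments are still in tolerance")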

As you can see, it is of critical importance to know the uncertainty (the measurement accuracy) of the test equipment
you are using. Calibration verifies, or adjusts, the equipment until its inputs and outputs are at or within the
manufacturer’s stated levels of uncertainty.

Thus - know the capabilities and limitations of the equipment you are using to perform whatever task is before you.

Mark Price, Sr. RF Technician


JM Test Systems, Inc.
800.353.3411
