

Lecture 6
Instructor: Dr Alivelu M Parimi
Previously discussed
The concept of understanding the function of any instrument in
terms of its functional elements has been discussed.

Classification of instruments as null/deflection, contact/non-contact,
manual/automated, intelligent/dumb and analog/digital
has also been discussed with the help of many examples.

Identification of the various inputs affecting the output, and methods
to remove the effects of spurious inputs, have also been discussed.
Outline in this chapter
Static calibration

Static characteristics

Dynamic characteristics
Detailed specifications of the functional characteristics of an
instrument are termed its performance.
Instrument performance is divided into two sub-areas:
static characteristics and dynamic characteristics.
Both types of performance are based on the response of
an instrument to a particular input.
For static characteristics it is assumed that the instrument is not
subjected to time-varying inputs such as acceleration, vibration or
shock, and that the measurand is changing slowly.
The dynamic performance parameters specify how the output
changes with time in response to time-varying inputs.
Static calibration refers to a situation in which all inputs
(desired, interfering and modifying) except one are kept at
some constant values.

Then the input under study is varied over some range of
constant values, which causes the output to vary over some
range of constant values.

The input-output relations developed in this way comprise a
static calibration valid under the stated constant conditions of
all the other inputs.

If overall rather than individual effects were desired, the
calibration procedure would specify the variation of several
inputs simultaneously.

Static Characteristics
Accuracy and Precision,
Range and Span,
Resolution and Threshold,
Sensitivity and Linearity,
Drift and Hysteresis.
Static Characteristics: Accuracy
Accuracy is defined as the closeness of the agreement
between the result of a measurement and the true value of
the measurand.
Accuracy is measured by the absolute and relative errors.
Absolute error is the difference between a measurement and the
true value.
Relative error is the ratio of the absolute error to the
true/specified/theoretically correct value of the quantity.

Accuracy is expressed in the following ways:

Point accuracy: This is the accuracy of the instrument at only one
point on its scale. It gives no information about the accuracy at
other points on the scale, nor about the general accuracy of the
instrument.

Accuracy as a percentage of full-scale range: When an instrument
has a uniform scale, its accuracy may be expressed in terms of the
scale range. The accuracy of a thermometer having a range of
500 °C may be expressed as ±0.5% of the scale range, meaning a
reading may be in error by ±2.5 °C.

% of full-scale deflection = (Measured Value − True Value) / (Maximum Scale Value) × 100

Accuracy as a percentage of the true value: Accuracy is specified in
terms of the true value of the quantity being measured.

% of true value = (Measured Value − True Value) / (True Value) × 100
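The two error expressions above can be computed directly. A minimal sketch, not from the lecture; the function names are illustrative:

```python
def absolute_error(measured, true_value):
    """Absolute error: difference between a measurement and its true value."""
    return measured - true_value

def percent_of_full_scale(measured, true_value, max_scale):
    """Error expressed as a percentage of the maximum scale value."""
    return (measured - true_value) / max_scale * 100

def percent_of_true_value(measured, true_value):
    """Error expressed as a percentage of the true value."""
    return (measured - true_value) / true_value * 100

# Thermometer example from the slide: 0.5% of a 500 deg C scale range
print(0.5 / 100 * 500)  # 2.5 (deg C)
```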
Accuracy: Example
A voltmeter has ranges 0–2 V, 0–50 V and 0–100 V and makes
measurements with an accuracy of ±1% FSD (full-scale
deflection). What will be the range of the readings when the
voltmeter is used to measure i) 1 V on the 2 V scale, ii)
5 V on the 50 V scale, iii) 5 V on the 100 V scale?
i) 1% of 2 V = 0.02 V, so we will read 1 V ± 0.02 V
ii) 1% of 50 V = 0.5 V, so we will read 5 V ± 0.5 V
iii) 1% of 100 V = 1.0 V, so we will read 5 V ± 1 V
Note: Using a scale whose maximum value is close to the
measurand gives better accuracy.
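The worked example can be checked numerically. A sketch assuming accuracy given as a percentage of full-scale deflection; the helper name is illustrative:

```python
def reading_band(indicated, full_scale, accuracy_pct=1.0):
    """Return the (low, high) band for a reading on a scale whose
    accuracy is a percentage of full-scale deflection."""
    err = accuracy_pct / 100 * full_scale
    return indicated - err, indicated + err

# The three cases from the example above
for volts, scale in [(1, 2), (5, 50), (5, 100)]:
    low, high = reading_band(volts, scale)
    print(f"{volts} V on the {scale} V scale: {low:.2f} V to {high:.2f} V")
```

Note how the same 5 V reading carries a ±0.5 V band on the 50 V scale but a ±1 V band on the 100 V scale, which is the point of the slide's closing remark.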

Static Characteristics: Precision
Precision is a measure of the reproducibility of an instrument, i.e., given
a fixed value of a quantity, precision is a measure of how close the
readings are to each other. Two terms closely related to precision are
repeatability and reproducibility.

Repeatability describes the closeness of output readings when the same
input is applied repeatedly over a short period of time, with the same
measurement conditions, same instrument and observer, same
location and same conditions of use maintained throughout. It is also
known as the inherent precision of the measurement equipment.

Reproducibility describes the closeness of output readings for the same
input when there are changes in the method of measurement, observer,
measuring instrument, location, conditions of use and time of
measurement. It is the degree of closeness with which a given value
may be repeatedly measured. Perfect reproducibility means that the
instrument has no drift, i.e., with a given input the measured values
do not vary with time.

Precision does not guarantee accuracy but
accuracy guarantees precision.
Comparison Between
Accuracy and Precision
Example 3.3
50 V is measured with a certain voltmeter. Four readings taken
are 53, 52, 51 and 52 V. Find the accuracy and precision.
The maximum deviation from the true value of 50 V is 3 V, so the
accuracy is not better than ±6%.
The mean reading is 52 V and the maximum deviation from the mean
value is 1 V, so the precision is ±(1/52) × 100 ≈ ±2%.
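Example 3.3 can be reproduced in a few lines, using the same worst-deviation definitions of accuracy and precision as the slide:

```python
readings = [53, 52, 51, 52]   # volts
true_value = 50               # volts

# Accuracy: worst deviation from the true value, as a percentage of it
accuracy_pct = max(abs(r - true_value) for r in readings) / true_value * 100

# Precision: worst deviation from the mean, as a percentage of the mean
mean = sum(readings) / len(readings)   # 52.0
precision_pct = max(abs(r - mean) for r in readings) / mean * 100

print(round(accuracy_pct, 1))   # 6.0
print(round(precision_pct, 2))  # 1.92, i.e. about +/-2% as in the slide
```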
Static Characteristics: Range and Span

The region between the limits within which the instrument is
designed to operate is called the range of the instrument, expressed
by stating the lower and upper values.

Span represents the algebraic difference between the upper and
lower range values: span = x_max − x_min.

For a pyrometer calibrated between 0 and 1000 °C,
the range is 0 to 1000 °C and the span is 1000 °C.
For a thermometer calibrated between 200 °C and 500 °C,
the range is 200 °C to 500 °C and the span is 300 °C.

Static Characteristics: Resolution and Threshold

Threshold is the minimum value of input necessary to cause a
detectable change from zero output.
The range of values for which the instrument does not respond is
called the dead zone.
In digital systems, the threshold is the input signal necessary to cause
the least significant digit (LSD) of the output reading to change.

Resolution of an instrument is defined as the minimum resolvable
change in the value of the measurand.
It is the least count of the instrument: the smallest change in
input for which there will be a change in output.
Neither is zero, because of friction, backlash and inertia of moving
parts, and the spacing of scale graduations.

Example: potentiometer

Resolution is effectively the smallest change a meter can display. For
instance, a 4-digit voltmeter displaying a 12 volt system battery voltage
could display, as its smallest change, 0.01 volts. It isn't possible for it to
display anything smaller than this because there simply aren't enough
digits available.

All measurements contain some error. The accuracy (or error) is usually
specified as a percentage error. So a meter with a specified accuracy of
±1% displaying 12.50 volts would mean that the actual voltage is within
1% of 12.5 volts, i.e. within the range 12.375 to 12.625 volts.

On top of this, in the case of digital instruments, there is
another error: that of the last digit. A typical specification would state
something like ±1% ±1 LSD. This means the meter will read accurately
to within 1% of the actual voltage (giving the range in the above
example of 12.375 to 12.625 volts) plus or minus 1 least significant
digit. As a 4-digit meter would not show the last digit (the 5) in the
above example, it would display somewhere in the range of 12.37 to
12.62, plus or minus 1 LSD, giving a final range of 12.36 to 12.63 volts.
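The combined percentage-plus-LSD error band described above can be sketched as follows. This simplified version adds the two error terms directly and ignores the display's truncation to four digits; the helper name is illustrative:

```python
def display_band(reading, accuracy_pct=1.0, lsd=0.01):
    """Error band of a digital meter specified as x% +/- 1 LSD.
    `lsd` is the value of one least-significant digit on the display."""
    pct_err = accuracy_pct / 100 * reading
    return reading - pct_err - lsd, reading + pct_err + lsd

low, high = display_band(12.50)
print(f"{low:.3f} V to {high:.3f} V")  # 12.365 V to 12.635 V
```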
The scale of a temperature instrument has 100 uniform
divisions, the full-scale reading is 200 °C, and one tenth of a scale
division can be estimated with a fair degree of accuracy.
Determine the resolution.
Full-scale reading = 200 °C.
There are 100 divisions, therefore each division reads 2 °C. One
tenth of a scale division corresponds to the resolution, so resolution
= 0.2 °C.
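The resolution calculation above follows directly from the scale geometry:

```python
full_scale = 200           # deg C, full-scale reading
divisions = 100            # number of uniform scale divisions
estimable_fraction = 0.1   # one tenth of a division can be estimated

division_value = full_scale / divisions        # 2 deg C per division
resolution = division_value * estimable_fraction
print(resolution)  # 0.2
```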

Static Characteristics: Sensitivity

Static sensitivity (also called scale factor or gain) is the ratio of the
change in instrument output to the change in the magnitude of the
measurand:

Sensitivity = magnitude of output response / magnitude of input

The reciprocal of sensitivity is called the deflection factor or inverse
sensitivity.
Higher sensitivity implies better resolution/threshold.

Example: A Wheatstone bridge requires a change of 7 Ω in the unknown
arm of the bridge to produce a change in deflection of 3 mm of the
galvanometer. Determine the sensitivity. Also determine the
deflection factor.

Sensitivity = 3 mm / 7 Ω = 0.429 mm/Ω
Inverse sensitivity (deflection factor) = 7 Ω / 3 mm = 2.33 Ω/mm
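The bridge example reduces to two divisions; a quick sketch, with units tracked only in the comments:

```python
delta_input = 7.0    # ohms: change in the unknown arm of the bridge
delta_output = 3.0   # mm: change in galvanometer deflection

sensitivity = delta_output / delta_input        # mm per ohm
deflection_factor = delta_input / delta_output  # ohm per mm (reciprocal)

print(round(sensitivity, 3))        # 0.429
print(round(deflection_factor, 2))  # 2.33
```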
Static Characteristics: Linearity
Linearity is defined as the departure of the
calibration curve from a straight line.
This is the closeness to a straight line of
the relationship between the true
process variable and the measurement.

Lack of linearity does not necessarily
degrade sensor performance.

If the nonlinearity can be modeled and
an appropriate correction applied to the
measurement before it is used for
monitoring and control, the effect of the
non-linearity can be eliminated.
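As the slide notes, a modeled nonlinearity can be corrected in software before the measurement is used. A minimal sketch: the quadratic sensor model (raw = true + 0.002·true²) is invented for illustration, standing in for a calibration curve fitted offline, and the correction inverts it numerically by bisection:

```python
def apply_correction(raw):
    """Invert the assumed quadratic nonlinearity raw = x + 0.002 * x**2
    by bisection, recovering the true value x from a raw reading."""
    lo, hi = 0.0, raw          # the true value lies between 0 and the raw reading
    for _ in range(60):        # bisection: interval halves each step
        mid = (lo + hi) / 2
        if mid + 0.002 * mid**2 < raw:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

true = 40.0
raw = true + 0.002 * true**2       # sensor output with nonlinearity: 43.2
print(round(apply_correction(raw), 3))  # 40.0
```

In practice the forward model would come from a static calibration of the sensor, and any monotone model can be inverted the same way.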
Static Characteristics: Drift
and Hysteresis.

Static Characteristics: Drift