
CH154K PChem Lab Manual Page (i)

Chapter 3: The Error Chapter

Glossary: This will help identify terms used in this chapter. When they first appear,
they will occur in blue.
Absolute Error: similar to absolute precision, it is an expression of the level of uncertainty of
either a single measurement or group of measurements in terms of fixed units
Absolute Precision: a statement of the degree to which a value agrees with replicate
measurements, expressed in terms of a fixed unit, e.g. 5.07 ± 0.02 cm
Accuracy: how closely a measurement agrees with the real value
Average (aka mean, arithmetic mean): the expected normative value of a series of numbers
Average Deviation: the expected normative value of a series of measurement uncertainties
Bins/Cells: equal-size divisions in the data range; used in creating a histogram
Coefficient of Variation: the RSD when expressed as a percentage
Confidence Limit: the percentage of data points expected to be included in a particular
statistical treatment; alternatively, a percentage value expressing the likelihood that a statistical
operation is valid for the data set in question
Data Reduction: a technical term for data analysis including statistical treatments
Dependent Variable: usually the y-variable; a variable whose value is determined by the value
of another variable
Deviation: how far a number is from the average value of the data set to which it belongs
Distribution: the pattern of values in a data set. There are certain standard patterns such as
Gaussian, Poisson, Rayleigh, etc. Gaussian is the most typical for scientific data and resembles
the traditional bell curve.
Error: the uncertainty in any measurement or series of measurements; not necessarily
indicating a mistake by the experimenter. Error includes all uncertainties, so it is a broader
term than accuracy and precision
Gaussian Curve (Normal Error Curve): a specific shaped curve that gives the empirically
expected distribution of data values for an entire population. A good sample will approximate a
Gaussian; aka, the traditional bell curve.
Gross Error: error due to a mistake which arises in most instances from the carelessness,
ineptitude, laziness, or bad luck of the experimenter. Typical sources include transposition of
numbers in recording data, spilling of a sample, using the wrong scale on a meter, accidental
introduction of contaminants, and reversing the sign on a meter reading.

Histogram: a graph indicating the frequency with which data values occur; gives a graphical
picture of the distribution of data over its range
Independent Variable: usually the x-variable; a variable whose value is controlled by the
experimenter and which determines the value of the dependent variable
Instrumental Bias/Error: systematic offset in a device, causing a shift in the data higher or
lower than the actual values
Median: the middle value in a series of numbers
Method Bias/Error: systematic offset in an experiment due to less than perfect conditions,
causing a shift in the data higher or lower than the actual values
Midrange: the value midway between the highest and lowest values in a number set
Mode: the value in a number set that occurs with the highest frequency
Outlier: a data point that falls outside a reasonable grouping of the data
Personal Bias/Error: systematic offset in reading or interpreting a measurement due to human
weakness/tendency/habit, causing a shift in the data higher or lower than the actual values; can
vary in direction and magnitude from person to person
Population Mean (μ): the average value for an entire population; the true average
Population Standard Deviation (σ): the standard deviation for an entire population
Population Variance (σ²): the variance for an entire population
Precision: how closely replicated measurements agree
Random Error (Indeterminate Error): scatter in the values of a series of measurements; a
measure of imprecision in the experiment
Range (Spread): the data set, expressed as the minimum to maximum values
Relative Error: the most common way of reporting overall uncertainty; also known as percent
error, it is the ratio of the uncertainty to the result.
Relative Precision: often expressed as a percentage, it is the ratio of the uncertainty to its value;
the result is unitless
Relative Standard Deviation (RSD): similar to percent error, it is a ratio of the standard
deviation to the mean. It gives a good sense of how broad/narrow the Gaussian curve is.
Reproducibility: how reliably repeated measurements provide the same value
Residual: the difference between a value and what it was predicted to be (same as deviation)
Root-Mean-Square (rms): exactly what it says: a way of treating numbers where each value
is squared, all squared values are averaged, and then the square root is taken. The advantage of
rms treatments is that positive and negative values are prevented from cancelling each other
Sample Mean (x̄): the average value for the data set.
Sample Standard Deviation (s): the standard deviation for the data set.
Sample Variance (s²): the variance for the data set.
Squared Residual: the square of the residual; when this is minimized the best line fit is
obtained. This is the source of the Least Squares Method of fitting lines.
Standard Deviation: the root-mean-square of individual deviations for each datum in the set;
also, the positive square root of the variance

Standard Error of the Mean: if several replicate sets of data are each averaged, these averages
have their own uncertainty, and the standard deviation of the set of averages can be determined
Systematic Error (Determinate Error, Bias): a shift in the values in a data set due to a
specific influence that is constant in sign and magnitude.
Uncertainty: the inherent limit to our ability to properly measure a phenomenon
Variance: similar to the average deviation, it is an indication of how far values are from the
mean. The variance is the square of the standard deviation (deviations are squared because
roughly half of the values are above the mean and half below, so they would cancel otherwise).
Statisticians favor this value to express precision, whereas chemists prefer the standard
deviation because it has the same units as the data.

3.1 General
3.1.1 Opening Thoughts
There is an idea that proper statistical analysis and error propagation are about as much
fun, and as tedious, as an Army private peeling a mountain of potatoes for the mess hall.
(Of course, in today's Army that job has been subcontracted out to Halliburton, who's
hired some lucky third-world person to do it for cheap, but still at 50 times the income he
would have made at home, and he will eat much better too. This too is an example of the
contextual nature of statistics. If you were offered $2 per hour plus room and board to
peel potatoes for 12 hours a day, 6 days a week, you'd be insulted. To our third-world
employee, this offer is a gold mine, and his friends are jealous over how successful he
is. They make $2 a week, with no room and board.) Yet, performing statistical analysis
properly is just as important as fixing food for the troops. Without food, an army cant
fight. Without statistical analysis, there is no way to tell if scientific data have any
bearing on reality, much less any significance.

For example, if you measure the mass of a paper clip to be 15 kg, the only reason you
know that this is an unreasonable number is because you are familiar enough with the
measurement system that you automatically do the statistical analysis in your head,
comparing the 15 kg paper clip to the fact that your 5-pound hand weights at the gym
are 2.2 kg, and the paper clip is far lighter than the weights. If you don't have that basis
for comparison, you wouldn't know if the measurement makes sense. For instance, if
you measured the mass of a single protein molecule to be about 10⁻²² kg, would you
have any idea if that number is even in the ballpark? Would your friend the music
major?

3.1.2 Its Not That Bad, Really
While it is tedious, statistical analysis, like peeling potatoes, is not hard.
Relatively few equations are needed; they are just used over and over. The hardest part
is determining when to use which one and keeping track of all the pieces through each
step of the data analysis. A proper error analysis simply involves performing each data
analysis step on the uncertainty of each measurement, just as you performed each step
of the data analysis on the measurement itself. You may also hear statistical analysis
called data reduction.

This chapter has four sections: data, error, graphing, and presenting results. A
homework assignment in Section 9 allows you to practice using this material before
using it in your experiments. Keep in mind that while you are only required to do three
formal error analyses, all experiments must have some level of informal analysis to
give a sense of how well your results replicate accepted values. An informal analysis
includes the uncertainty in each measurement or type of measurement, and an approximate
uncertainty in final results based on proper use of significant figures.

Statistical analysis provides a basis for comparison, allowing a determination of the
significance of the data.

3.1.3 The Almighty Literature Value?
Something must also be said about the concept of the "right answer" and the literature
value. No device made by humans can truly obtain God's value ordained at the
Beginning from on High. We have neither the blueprints of the universe nor the
omniscience to know every jot and tittle to the nth place. We can only measure what we
observe. This may seem like an overly pessimistic view of our ability to accurately
observe the universe; however, it is a humility we must maintain to preserve the
integrity of our measurements. Overconfidence leads to mistakes, and this is all the
more important to remember as our measuring technology becomes increasingly
sophisticated.
Therefore, the values in the literature to which we compare our results are not the "right
answer," but values scientists agreed upon and deemed reasonable. To show the truth of
this, look up a data value in a number of different sources and see how it varies. Even
the CRC Handbook indicates this by listing the technique used to obtain every value and
giving some level of uncertainty and the source from which the tabulated value was
taken.

3.2 Data: The Meat for the Peeled Potatoes
Science is the art of taking measurements of physical phenomena and deriving
conclusions as to the nature and mechanisms of the phenomena. Data are a group of such
measurements. Datum is the singular form, the value of a single measurement, but the
term is rarely used. Using data in a singular sense is never correct. Unfortunately,
sloppiness in this area is common, but it will not be allowed in this course.

Data arise from quantitatively (usually) measuring phenomena. With infinite time and
resources, one could theoretically take all possible measurements of the phenomenon in
question. This infinite data set is known as the population. Realistically, we only have a
relatively small, finite set of the population. This is the sample, which for a properly
designed and performed experiment is assumed representative of the population.
Therefore, it makes sense that as the sample size (N) increases, the correspondence of
sample properties/behavior to that of the population should increase. In fact, for most
scientific data we will encounter, when N approaches 20-30, the correspondence is
essentially identity.

The data in Table 3.1 are from an arbitrary simple experiment in which a bunch of
medium-sized binder clips were placed one at a time on a Mettler-Toledo Classic Plus
digital analytical balance, with the balance tared between each measurement. The sample
included two new boxes of twelve clips each and a dozen used clips, most from the same
manufacturer. Most were made in the same country. One of the used clips was picked at
random to get a large number of replicate measurements of the same clip. Then each of
the thirty-six clips was measured once, in random order. Irregularities in
measurements/procedures are noted, just as should be done in any experiment. These data
will be used throughout the chapter to illustrate the principles discussed. The sample is
assumed to be representative of all medium-size binder clips currently manufactured
(the population).

Table 3.1: The Mass of Medium-Sized Binder Clips
Be sure to record uncertainty in ALL measurements in lab at the time the measurements
are taken.

Population: the infinite data set.

Sample: the finite portion of the population actually measured.
Replicate Measurements of the Same Clip Measurements of Different Clips
Trial Mass (g) Notes Trial Mass (g) Notes
1 9.709 1 9.709 Manufacturer 2, Country 2
2 9.710 2 8.475
3 9.709 3 8.532
4 9.708 4 8.482
5 9.709 5 8.490
6 9.710 Did not tare 6 8.404
7 9.707 Left balance door open 7 8.557
8 9.710 8 8.500
9 9.710 9 8.698
10 9.710 10 8.053 Manufacturer 3, Country 1
11 9.706 Leaned hard on bench 11 28.669 LARGE Binder Clip
12 9.710 12 8.437
13 9.706 Leaned hard on bench 13 8.431
14 9.708 14 8.131
15 9.709 15 8.537
16 9.710 16 8.230
17 9.709 17 8.611
18 9.708 18 8.361
19 9.709 19 8.239
20 9.709 20 8.562
21 9.709 21 8.502
22 9.709 22 8.597
23 9.709 23 8.628
24 9.710 24 8.561
25 9.708 25 7.886 Manufacturer 1, Country 1, used
26 9.710 26 8.382
27 9.709 27 8.420
28 9.709 28 8.330
29 9.709 29 8.341 Manufacturer 2, Country 1
30 9.709 30 8.531
31 9.709 31 8.422
32 9.706 32 8.128
33 9.709 33 8.286
34 9.709 34 8.464
35 9.710 35 8.425
36 9.710 36 8.462
Avg 9.70889 Avg 9.01314
Std Dev 0.00114 Std Dev 3.38074
Avg 8.45154 w/o Large Binder Clip
Std Dev 0.27846 w/o Large Binder Clip
3.2.1 Measurement: Precision and Accuracy
Since measurement is the basic act of science, it must be done right. There are two
aspects to a measurement: precision and accuracy. No measurement is better than the
instrument used to obtain the measurement. In our experiment, the balance displays
measurements to the thousandth of a gram (a milligram). For purposes of this exercise,
this degree of precision is sufficient.

Precision is how closely a group of repeated measurements agree with each other.
Accuracy is an indication of how closely the measurement agrees with accepted values
(which we hope to be close to the real value). Both of these involve a combination of
instrument quality and user skill.

All measurements have a level of uncertainty in them because 1) no measuring device is
truly perfect, and 2) our ability to read the data from a device is subject to interpretation.
Expressing this uncertainty can be done in several ways. One is by stating the absolute
precision of a measurement: ±0.3 g. The problem here is that ±0.3 g in a metric ton is a
very tiny uncertainty, but ±0.3 g in a measurement of grams is significant. Therefore,
a more helpful expression is usually that of relative precision (i.e. ±3% or similar).
This tells you the overall precision of any given measurement, and if you know the base
unit of the measurement, then it is easy enough to convert to absolute precision.
Regardless, the heart of precision rests in the reproducibility of results.

Instrument Precision
The characteristics of precision are equally true for analog and digital devices, but for
different reasons. In an analog device, it is obvious that a measurement is dependent on
our ability to discern the reading.

With a digital device, it is the instrument that assesses the reading and displays it. To do
this well, it must be calibrated, which is subject to experimental uncertainty. Secondly,
many devices give readings to more decimal places than are reasonable for the
technique. In some cases, it is obvious how precise a reading can be, but in many others
this knowledge is only gained through the experience of taking many readings and
determining how many digits are reliable. There are several reasons for this.

One is that often the digital reader is a general type device that is connected to an analog
instrument sending electric signals that the digital device translates to a digital signal.
Therefore, it converts any signal it receives, including noise, into the display. This is
partly why some displays cannot seem to settle on a value. (Of course, another reason
could be a malfunctioning digital device.)

Another major reason a digital device may display a reading to more digits than are
appropriate is due to what I call the calculator catastrophe. Many devices take some
reading of a signal and then mathematically process it as a calculator would, and display
the entire answer it calculates. Many students for some reason think that if the calculator
gives a value, the entire value is correct. When you divide two numbers, the answer
often has an infinitely repeating decimal expansion, like 1/9 or 4/7. This does not mean
that your data are accurate to an infinite degree. The same is true for any digital device that does
mathematical signal processing. It is up to the scientist (you) to determine what part of
the displayed value is usable.

There is a principle of measurement that makes both philosophical and mathematical
sense: The more times a measurement is taken, the smaller the uncertainty in the value
obtained by the measurement. This makes sense philosophically in that you understand
a phenomenon the more you study it. Statistically it makes sense because the "right
answer" should turn up more often than any other for a large sample of measurements,
so the variation in the measurement decreases. Keep in mind however, that the precision
of the instrumentation hasnt changed! There is still the same uncertainty in each
measurement due to physical limitations. However, a statistical treatment can impose a
confidence in the measurement higher than the instrument is capable of achieving by
itself. This is a powerful thing, and it is important to make sure this imposed confidence
has been obtained properly and its source acknowledged; otherwise it can be very misleading.
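The way averaging imposes a tighter confidence than any single reading can be illustrated with a short simulation (a sketch using Python's standard library; the true mass and per-reading noise level are invented for illustration, not taken from Table 3.1):

```python
import random
import statistics

# Simulate repeated mass readings with a fixed instrument noise.
# TRUE_MASS and NOISE are illustrative values, not real balance specs.
random.seed(42)
TRUE_MASS = 9.709   # g
NOISE = 0.002       # g, standard deviation of a single reading

def mean_of_n(n):
    """Average of n simulated readings."""
    return statistics.mean(random.gauss(TRUE_MASS, NOISE) for _ in range(n))

# Scatter of the averages over many repeated trials, for small and large N:
spread_5 = statistics.stdev(mean_of_n(5) for _ in range(1000))
spread_50 = statistics.stdev(mean_of_n(50) for _ in range(1000))

print(spread_5, spread_50)
# The per-reading noise never changes, but the scatter of the
# averages shrinks, roughly as 1/sqrt(N).
```

The quantity being computed here is the standard error of the mean from the glossary: each reading keeps its full uncertainty, yet the average of many readings is known more tightly.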

So how does one determine the uncertainty of a measurement? It depends. Often it is an
honest self-assessment of how accurate your eye is. For me, through good eyesight and
practice, I can often read a meter stick to a tenth of a millimeter. Other people only feel
comfortable reading to half a millimeter. The important thing is to assess it honestly:
being too conservative about your abilities is just as much a problem as being too confident.
Another way to determine uncertainty is by comparison to standard values. These values
have been defined by scientists as absolute bases for comparison. Later in this chapter is
a table that gives the uncertainty expected in glassware based on the manufacturers
comparisons to standards governed either by the US NIST (National Institute of
Standards and Technology) or by BIPM (International Bureau of Weights and
Measures, the governing body for the metric system), or by some other authoritative
body. Another general rule of thumb is ±1 in the smallest reliable digit of a digital
display.

Generally, documentation accompanying a measuring device will give some indication
of the uncertainty in its measuring ability, and how to maintain that level of reliability
over its lifetime! Any instrument can be damaged, and most lose accuracy over time, so
periodic calibration is required. When needed, you will be given instruction on how to
do this. For those who want to maximize the precision of their work (which we
encourage!), learn general calibration principles and how to calibrate rapidlyyou are
welcome to do as much as you want, as long as you ultimately finish the experiment in a
timely manner. While we may be impressed by your improving the accuracy of all our
equipment, and grateful for saving us the work, your grade depends on your completed
data collection and analysis.

Limits on Instrument Precision

Thermal Limits
As implied by the 3rd Law of Thermodynamics, the only time there is no thermal motion
in a substance is at 0 K. We know even this is not quite true, and the problem only gets worse
as things heat up, even to room temperature. This random thermal motion presents a
limit to the precision of an instrument, as the thermal motion of electrons and molecules
in electronic components creates noise in electrical signals. The smaller the size of the
signal, the worse the impact on instrumental precision.

Signal to Noise Limits
Leading from thermal issues is the idea that to try to compensate for thermal and other
noise, we can employ amplification of the signal. The limit here is that the noise is also
amplified. With the proper design of electronics, the signal can be amplified
preferentially; however, the weaker the base signal is, the more effort it requires: a
factor of x improvement in signal strength requires an amplification factor of x². This
can quickly overwhelm either technical or economic capacity for improvement, or both.

Heisenberg Would Like to Address the Issue, Maybe
Another limit on instrument precision was suggested by Heisenberg and embodied in
the Heisenberg Uncertainty Principle. He pointed out that the act of measuring a
physical system has an influence on the system. For macroscale measurements, this
impact is usually negligible, but on the chemical scale, it can be significant. To illustrate
his point, say an item fell under your bed. When reaching your arm under the bed and
searching for the item, you hit the item and it moves, requiring you to keep looking for
it until you are able to grab it without its moving away. This is a type of measurement:
determining the position of the object under the bed. Your arm is the probe searching for
the item. Upon contact with the probe, the item moved because of being probed. This
generated an error in the measurement of the items location. It was at the first location,
but due to the probing action, it moved to a new location that is unknown to the probe
until it is relocated. How hard the probe hit the object affects the magnitude of the error.

Now, measuring the length of a board with a tape measure has almost no impact on the
item or the value of the measurement, but examining a molecule with electromagnetic
energy whose wavelength is on the order of magnitude of the molecule is actually very
similar to the arm under the bed. The closer in size the probe and sample are, the higher
the impact of probing. In designing your experiment, this must be taken into account
and circumvented if possible.

Heisenberg quantified the minimum amount of impact that a measurement can have:

dp · dq ≥ ħ/2 (3.1)

In English, this means that if you are measuring both the momentum (p) and position (q)
of a moving object, the product of the uncertainties in each measurement cannot be
smaller than ħ/2, where ħ = h/2π and h is Planck's constant. The idea is this: a moving
object is in a specific location only at an instant in time. If you specify what the position
is at a given point in time, then you dont have any information about its momentum,
and vice versa. In other words, the smaller the uncertainty in one measurement, the
larger the uncertainty in the other. Think of a race car. When you take a photo of it, it is
frozen at that location, but the photo contains no information about its speed. Likewise,
if using a radar gun to clock the speed, the device can tell you very little about its
position other than it is in the range of the detector. Again, in the macroscale, this seems
a bit trite, but when making nanoscale measurements, this is vital. See a quantum
textbook for a full treatise on the subject. The point here is to introduce the idea that
since measurement affects the item observed, there is an intrinsic uncertainty in the
measurement that cannot be reduced past a certain point, so error occurs in every
experiment.

Accuracy and the Model: Model Perturbation
Somewhere in all of this, we need to ask why we are taking all of these measurements.
As discussed in the Introduction, the scientific method begins with an observation,
asking a question, developing a hypothesis, and then performing an experiment to test it.

(Note the spelling: it is not "Plank's constant" but Planck's constant, named after Max
Planck, the German physicist who noticed that this number kept appearing in his
calculations.)
Another term for hypothesis is model. Scientists create artificial models as analogies to
the natural world. They then try to break the model to see where it fails, so they can
improve it to more closely match the behavior of the universe.

Often such models can be initially expressed as a simple equation (relationship of
physical parameters and constants). As the model is refined, the equation is often
modified by adding a correction term to the original to account for the change. As our
understanding increases, we add more and more terms to the relationship. Usually each
new term is smaller in magnitude than the previous by a power or exponential factor.
The generic name for this approach is perturbation theory, meaning the model has
been perturbed or adjusted by a series of successively smaller correction factors. These
are often expressed in the model equation in the form of an infinite Taylor series,
implying mathematically the idea that we will never have the model exactly conform to
the entirety of the real universe.

In your work therefore, you must determine how many of these terms are needed to give
the desired precision to your results. Most work is reasonably limited to the main term
or up to one or two perturbation terms. Only certain areas really need to include
every term we can throw at them. However, there is often a bit of geeky competition among
scientists: "I measured the molecular weight of water to two more significant digits than
you!" This is largely harmless, though not completely insignificant, as it is important to
develop a habit of precision and to improve our measuring abilities for those areas for
which it does matter. An example of what we can do versus what we need to do is the
piston and cylinder in a car engine. We know the value of pi to thousands of decimal
places. It turns out that in order for a piston and cylinder to be round enough that they
do not seize and destroy the engine, we only need to calculate the circumferences using
seven decimals of pi. If we use six, they will seize up. If we use eight, there is not
enough improvement in efficiency to justify the extra work required for an order of
magnitude increase in roundness accuracy.

3.2.2 Basic Data Analysis

The Histogram
Once data are collected, they need to be organized and analyzed. A common first step is
to determine the range of the data set.

R = x_max - x_min (3.2)
Then divide the range into a number of equally spaced portions called bins. Count the
number of data points in each bin and create a bar graph (called a histogram) comparing
frequency versus bin. Notice the shape of the curve implied by the data from our sample
in Figure 3.1. If the entire population were to be measured, then the curve would smooth
out and become explicit as the bin size shrank, as shown by the line graph superimposed
on the histogram.

How is the bin size chosen? Generally, the larger N is, the narrower the bins are. The
objective is to balance details of the profile versus manageabilityit is not helpful to
have bins so narrow that there are gaps in the data with empty bins.
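The range-and-bin procedure above can be sketched in a few lines (a sketch using Python's standard library; the masses are a subset of Table 3.1's single-measurement column, and the choice of four bins is arbitrary):

```python
from collections import Counter

# A few binder-clip masses in grams (subset of Table 3.1, different clips).
masses = [8.475, 8.532, 8.482, 8.490, 8.404, 8.557, 8.500, 8.698,
          8.437, 8.431, 8.131, 8.537, 8.230, 8.611, 8.361, 8.239]

# Eqn 3.2: the range of the data set.
R = max(masses) - min(masses)

# Divide the range into equal-width bins and count the points in each.
n_bins = 4
width = R / n_bins

def bin_index(x):
    i = int((x - min(masses)) / width)
    return min(i, n_bins - 1)   # the maximum value goes in the last bin

counts = Counter(bin_index(x) for x in masses)
for i in range(n_bins):
    lo = min(masses) + i * width
    print(f"{lo:.3f}-{lo + width:.3f} g: {'#' * counts[i]}")
```

Each printed row is one bar of the histogram; with more data and narrower bins, the bar heights trace out the smooth curve of Figure 3.1.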

Note: When describing a graph, the convention is to say y versus x.

The curve shown is special, called a Gaussian curve or a Normal Error curve. Notice
the bell shape of the line; this gives it the common name with which you are probably
more familiar, the bell curve. It shows how frequently each datum occurs in the set and
shows the distribution of the data over the range. Note that the peak of the curve is at or
near the middle of the distribution. This peak value is the average or arithmetic mean
(often just mean) of the datait is the value that any given measurement of the
phenomenon is normally expected to reveal. The more measurements that are taken, the
more likely the mean actually corresponds to the true value.

In a perfect world, every measurement would give this value exactly, and the curve
would collapse to a single line. In case you didn't yet know it, we are not in a perfect
world, so measurements deviate from the mean. However, it is still true that the smaller
the overall deviations, the narrower the peak and vice versa, while still maintaining the
same shape. Furthermore, it is expected that on average, most deviations have smaller
values rather than larger ones, hence the high population of measurements around the
mean.

It is often helpful to understand which fraction of the sample or population (dN/N) is in
a certain range of values noted as x+dx. The normal error law conveniently gives this
fraction based on the number of values (N), the standard deviation (σ), the mean (μ),
and the value of interest (x). We get:

dN/N = [1 / (σ√(2π))] e^(-(x-μ)²/(2σ²)) dx (3.3)
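The normal error law can be evaluated numerically for a narrow slice of values (a sketch; it treats the Table 3.1 sample statistics without the large clip as estimates of μ and σ, and `gauss_fraction` is my own name, not a standard function):

```python
import math

def gauss_fraction(x, dx, mu, sigma):
    """Fraction dN/N of a Gaussian population in the narrow slice [x, x + dx]:
    the density at x times the slice width (valid for small dx)."""
    dens = math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    return dens * dx

# Table 3.1 sample statistics (without the large clip) as estimates of mu, sigma.
mu, sigma = 8.45154, 0.27846

print(gauss_fraction(mu, 0.1, mu, sigma))              # slice at the mean
print(gauss_fraction(mu + 2 * sigma, 0.1, mu, sigma))  # slice 2 sigma away
```

As expected, a 0.1 g slice centered on the mean holds a much larger fraction of the clips than an equal slice two standard deviations out.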

Because the amount of deviation of each point from the mean is indicative of the
precision of the experiment, analyzing means and deviations is very important. A key
term to introduce here is standard deviation. It is, in a sense, the average deviation of all
data points. A helpful way to look at a range of values is by determining how many
standard deviations they are from the mean. Thus, we look at, say, one standard deviation
above and below the mean, or two above and below, and so on. Statistically, the area
under the curve between one standard deviation above and below the mean is 68.3% of
the entire curve. In other words, any random data point has a 68.3% chance of being
within one standard deviation of the mean. Two standard deviations encompass 95.5%,
three encompass 99.7%, and so on. This is known as the "three-sigma rule," since sigma
(σ) is the symbol associated with standard deviations, as we'll see next. In Figure 3.1, 3σ
is off the graph. The value of 2σ has its own special notation.

These values are important because they define certain common confidence limits in our
data. To operate at the 95% confidence level, one is asserting that 95% of the time a
given measurement will fall within about 2σ of the mean (1.96σ, to be exact). A bit more
will be said about this later.
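The 68.3%, 95.5%, and 99.7% areas quoted above can be reproduced from the error function (a sketch using Python's math module):

```python
import math

def coverage(k):
    """Fraction of a Gaussian population within k standard deviations of the
    mean: the area under the normal curve from mu - k*sigma to mu + k*sigma."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} sigma: {100 * coverage(k):.1f}%")
# -> 68.3%, 95.5%, 99.7%: the "three-sigma rule"

print(f"within 1.96 sigma: {100 * coverage(1.96):.1f}%")  # the 95% confidence level
```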

Figure 3.1 Histogram of Binder Clip Masses (excluding outlier) with Normal Error Curve Superimposed


Part of this analysis involves distinguishing between the means and deviations for the
population and those of the sample. A whole branch of statistics deals with the
differences between the population and the sample, and treatment depends on how big
or small the sample is and whether or not it is truly representative of the population.
Thus, we have the population mean (μ), the population standard deviation (σ), the
sample mean (x̄), and the sample standard deviation (s). There is also a quantity called
variance (including the population variance (σ²) and the sample variance (s²)), which
statisticians prefer over the standard deviation, so it is included here for completeness.

Mean and Deviations
To facilitate this discussion, let's first acknowledge that graphically estimating the mean
and standard deviation every time is not the ideal way to do this. It would be much
easier if we could just take a formula, plug in data values, and get a numerical
answer. There are such formulae, and here they are to make our lives easier. The sample
and population averages are

x̄ = (Σ xᵢ) / N        μ = (Σ xᵢ) / N (taken over the entire population, N → ∞) (3.5)

whereas the sample and population standard deviations are

s = √[ Σ(xᵢ - x̄)² / (N - 1) ] = √[ (Σ xᵢ² - (Σ xᵢ)² / N) / (N - 1) ] (3.6)

σ = √[ Σ(xᵢ - μ)² / N ] (3.7)
There are several things to note about these equations, especially Eqn 3.6. The
population values are somewhat theoretical since we do not usually have a way to
measure an entire population. Note that the (xᵢ - x̄) term in Eqn 3.6 is the mathematical
definition of deviation; therefore, the standard deviation is essentially the average
deviation over the range. For the sample standard deviation, we divide by N-1 because
calculating the sample average uses one of the degrees of freedom. (This makes more
sense if you view a data set as an array or matrix of data and you use up a dimension of
flexibility when you perform a calculation on the set and use that result in another
calculation on the set.) Note that the second form of Eqn 3.6 is more computationally
involved, but easier to program into a spreadsheet (though most spreadsheets calculate it
as a standalone function).
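As a quick sanity check, the two forms of Eqn 3.6 can be verified against each other in a few lines of Python. The masses below are made-up stand-ins for illustration, not the actual Table 3.1 data:

```python
import math

def mean(xs):
    """Sample mean, Eqn 3.5: x-bar = (1/N) * sum(x_i)."""
    return sum(xs) / len(xs)

def sample_std(xs):
    """Sample standard deviation, Eqn 3.6, definitional form (N - 1 denominator)."""
    xbar = mean(xs)
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1))

def sample_std_computational(xs):
    """Second form of Eqn 3.6: sqrt((sum(x^2) - (sum x)^2 / N) / (N - 1))."""
    n = len(xs)
    return math.sqrt((sum(x * x for x in xs) - sum(xs) ** 2 / n) / (n - 1))

# Hypothetical clip masses in grams (illustrative only)
masses = [8.45, 8.12, 8.61, 8.30, 8.52]
```

Both forms agree to within floating-point roundoff, which is exactly why the computational form is safe to use in a spreadsheet.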

As you can see from the symbols for variance, the variances are just the squares of the
standard deviations and can be obtained easily from the above formulas. The Relative
Standard Deviation (RSD) is analogous to the percent error calculations done many
times in your past, and should be learned and retained, as it will likely show up again in
your life because it is quick to calculate and easy to use:

RSD = (s / x̄) × 10^z      (3.8)

where z is an integer, usually 2 or 3. If z = 2, then the RSD is expressed as a percentage
and has the special moniker coefficient of variation. If z = 3, then the RSD is
expressed as parts per thousand, and so on.
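Using, for example, the bulk clip statistics cited in Section 3.2.3 (mean 8.45 g, s = 0.28 g), a minimal sketch of the RSD calculation:

```python
def rsd(s, xbar, z=2):
    """Eqn 3.8: RSD = (s / x-bar) * 10**z.
    z = 2 gives a percentage (the coefficient of variation); z = 3 gives ppt."""
    return (s / xbar) * 10**z

cv  = rsd(0.28, 8.45)        # coefficient of variation, percent (~3.3%)
ppt = rsd(0.28, 8.45, z=3)   # the same spread in parts per thousand
```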

Note that while the average is probably the most common way of expressing a middle-
type value for a dataset, it is not the only one. There are other types of midpoint
values that are used, but they are typically much less important in physical chemistry. As
mentioned earlier, synonyms for average are mean and arithmetic mean; all three
describe the same thing in common usage. There is also a geometric mean, which will
not be discussed.

Another value is the median, which is literally the middle value (i.e. the third value in
a list of five, etc). There is no calculation other than counting which value in a series is
the middle one. If there is an even number of data points, the median is the average of
the middle two. Typically, it is assumed that the series is ordered low to high.

Next in our mediocre rogues' gallery is the midrange, which very simply is the
average of just the highest and lowest values:

midrange = (x_max + x_min) / 2      (3.9)

The last member in this lineup is the mode, or value in a series that occurs with the
greatest frequency. This value may truly be far from the average. For example, given the
dataset x:{1,5,3,8,2,2,9,10}, the average is 5, the median is 4, the midrange is 5.5, and
the mode is 2.
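The same rogues' gallery can be checked with Python's standard statistics module, using the dataset from the example above:

```python
from statistics import mean, median, mode

x = [1, 5, 3, 8, 2, 2, 9, 10]

avg = mean(x)                 # the average
med = median(x)               # even count, so the average of the middle pair (3, 5)
mid = (max(x) + min(x)) / 2   # the midrange, Eqn 3.9
most_common = mode(x)         # the mode: the most frequent value
```

Note that median() sorts internally, so the list does not need to be ordered first.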

3.2.3 Rejection of Bad Data
Once the range, mean, and deviation of a data set are known, it is helpful to know if any
points in the set are so far off that they can be eliminated from further consideration. Such
outliers, or discordant data, must usually be tested to be sure they are beyond
usefulness. It is good to be cautious about throwing out data, as the outliers can
sometimes lead to the most interesting discoveries. Because of this, while there are a
variety of tests, it is poor practice to rely on just one, or to make either too general or too
strict a rule about discarding or keeping outliers.

Common Sense
This is probably the most dangerous of the techniques, as what may be "obviously"
errant data may actually be good data. If there are one or more points that appear to be
grossly out of line (or even ones among the good data) and for which there is evidence in
the lab notebook of irregularities in the collection of those values, then it is worth
considering whether to keep or discard the data. Be very judicious about this. For
example, in the test data, there was a measurement about 3.5 times the average clip
mass. In the lab notebook was the notation that this value was the mass of a large binder
clip instead of a medium binder clip. Since the experiment was to measure the mass of
medium clips, this outlier can be thrown out because it does not belong to the studied
population. That is pretty obvious.

But what about the first measurement of 9.709 g? It is heavier by over a gram than the
rest of the data. Similarly, point 25, with a mass of 7.886 g, is well over half a gram from
the mean and most values. Can we just drop these values? They are from a different
manufacturer than any of the other clips, but the stated population of the experiment was
all medium clips. As these are medium clips, their manufacturer or country of origin is
irrelevant. This is all the more demonstrated by the presence of other clips by the same
manufacturers with masses within normal expected values. Other tests must be used.

Standard Deviation Test
Earlier it was shown that the area under the Gaussian curve bounded by ±3σ includes
99.7% of the data; in other words, there is only a 0.3% chance of valid data being
outside the ±3σ area. So one possible criterion is to throw out any data further than 3σ
from the mean. The challenge here is that the data set has to be large enough that the
standard deviation is fairly well known. In our case, the mean is 8.45 g and the standard
deviation is 0.28 g, so 3σ gives a range of 7.61 g to 9.29 g. Clearly, we cannot throw
out point 25, as it is within the 3σ range. Point 1 is almost half a gram outside of the
range, so it is still a candidate. What next?

The Q-Test
The final common tests are statistical tests, which are more rigorous than the previous
one. Of these, the most common is the Q-test. It calculates how far the potential outlier
(x_q) is from its nearest neighbor (x_n) and divides this by the range of the entire data
set to get a ratio called Q_exp. Then, based on the number of measurements (N) and the
confidence level you are operating under (typically 90%, 95%, or 99%), a table is
consulted, and if Q_exp > Q_crit (the critical value), then the point can be excluded
with a confidence equal to the level selected. A version of the table is given in Table 3.2.

Q_exp = |x_q − x_n| / (x_max − x_min)      (3.10)
With 35 data points, we obtain:

Q_exp = (9.709 − 8.698) / (9.709 − 7.886) = 1.011 / 1.823 = 0.554      (3.11)

Q_crit (N = 30): 0.260 (90%), 0.298 (95%), 0.341 (98%), 0.372 (99%)      (3.12)

Since 0.554 is greater than 0.372, we can reject the first data point with greater than
99% confidence.
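The Q_exp calculation of Eqn 3.10 is easy to script. The sketch below uses only the three values quoted in the text (the suspect point, its nearest neighbor, and the data minimum), which is all the formula actually needs; the full 35-point set is not reproduced here:

```python
def q_statistic(data, suspect):
    """Eqn 3.10: Q_exp = |suspect - nearest neighbor| / (range of the data).
    Assumes the suspect value appears only once in the data set."""
    neighbors = [x for x in data if x != suspect]
    nearest = min(neighbors, key=lambda x: abs(x - suspect))
    return abs(suspect - nearest) / (max(data) - min(data))

# suspect 9.709 g, nearest neighbor 8.698 g, data minimum 7.886 g
q = q_statistic([7.886, 8.698, 9.709], 9.709)   # ~0.554
```

Comparing q against the tabulated Q_crit for the chosen confidence level (0.372 at 99% in the example above) then decides whether the point may be rejected.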

Note that the Q-test table used only goes up to N = 30, not 35. However, as N increases,
the Q value decreases, so we are safe in using the smaller N for our data. In fact, it is
important to note that most Q-tables referenced in similar textbooks only go up to N = 10
at most. This gives you a certain amount of confidence that the Q-test is reasonably
valid for fairly small sample sizes. Note that the Q-test is allowed but not recommended
for sample sizes N < 4, as there are too few points to have any confidence that one is an
outlier apart from a priori evidence written at the time in the lab notebook (per the
Common Sense section above).

While there are other statistical tests for outliers, this one will serve most needs in this
course.
3.2.4 Validation and Calibration
Note that the fact that Clip 1 can be discarded may call into question the entire first data
set, which is an equal number of measurements of just that clip. If we can throw out its
value, is the data taken just from that clip worth anything? Again, it depends. The thirty-
six replicate measurements of Clip 1 do show several useful things. They prove the relative
precision of the balance, which shows that variations in mass beyond a milligram
are truly significant. In other words, σ for Clip 1's replicate measures is 0.001 g, while σ for
the larger clip sample is 0.278 g, almost 300 times greater. Therefore, the variation in the
masses is real, and not subject to a limitation of the balance. Indeed, it shows that having
an even more precise balance would not give any more real information about the clip
masses. Even though Clip 1 is excluded from the bulk data, the measurements taken just
with that clip are useful because they answer different but related questions. In other
words, the Clip 1 experiment shows more about the balance than about binder clips
and, in fact, serves to validate the use of the balance in the larger experiment. Another
way to validate the balance would be to take triplicate (or more) measurements of each
clip, average those, and then use that average mass in the larger data set. This can be
much more tedious, as many more calculations are involved. By validating the balance
ahead of time, subsequent measurements can be taken with confidence in the precision
of the balance.

This validation only serves to verify the precision of the balance; it says nothing about
its accuracy. To verify accuracy requires calibration: using the balance to measure an
object whose mass is very well known or defined. If we were to put a certified 10.000 g
mass on the balance, it would reveal how accurate the balance is and whether it needs
maintenance. By repeatedly measuring the mass, both accuracy and precision could be
verified. This calibration should be done on a regular basis, according to the schedule in
the instrument manual. Verification may be done more frequently, and is best done with
an object similar in mass to the objects being investigated. Typically, only three to ten
replicates are needed for a well-maintained balance; thirty-six is a bit of overkill, but I
trust I have made my point.
NB: You may only use a Q-test to eliminate one errant data point. It is not valid for
multiple points.

Table 3.2 Q-Test Critical Values (Rorabacher, D. B., Anal. Chem. 63 (1991) 139). (Note: highlighted row
used in example calculation.)
Confidence Level
N 90% 95% 98% 99%
3 .941 .970 .988 .994
4 .765 .829 .889 .926
5 .642 .710 .780 .821
6 .560 .625 .698 .740
7 .507 .568 .637 .680
8 .554 .615 .683 .725
9 .512 .570 .635 .677
10 .477 .534 .597 .639
11 .576 .625 .679 .713
12 .546 .592 .642 .675
13 .521 .565 .615 .649
14 .546 .590 .641 .674
15 .525 .568 .616 .647
16 .507 .548 .595 .624
17 .490 .531 .577 .605
18 .475 .516 .561 .589
19 .462 .503 .547 .575
20 .450 .491 .535 .562
21 .440 .480 .524 .551
22 .430 .470 .514 .541
23 .421 .461 .505 .532
24 .413 .452 .497 .524
25 .406 .445 .489 .516
26 .399 .438 .482 .508
27 .393 .432 .475 .501
28 .387 .426 .469 .495
29 .381 .419 .463 .489
30 .376 .414 .457 .483

3.2.5 Standard Deviation of the Mean
If instead of measuring one clip many times, we took the other path and weighed each
clip multiple times, then the mass of each clip could be given as a mean ± a standard
deviation. Then, when data from all thirty-plus clips are combined, we obtain a mean of
all the means and a standard deviation based on the spread of the means. This is called
the standard deviation of the mean, or standard error of the mean, given as σ_m (or s_m).
It is related simply to σ (or s) by the following:

σ_m = σ/√N  or  s_m = s/√N      (3.13)
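Eqn 3.13 is a one-liner in any language. A small sketch, using the bulk-sample spread quoted earlier (s = 0.28 g, N = 35) purely as an illustration:

```python
import math

def std_error_of_mean(s, n):
    """Eqn 3.13: s_m = s / sqrt(N)."""
    return s / math.sqrt(n)

sem = std_error_of_mean(0.28, 35)   # ~0.047 g
```

Note how quickly the standard error shrinks: quadrupling N only halves s_m, which is why averaging more replicates has diminishing returns.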
3.3 Error: Not Really Wrong

Error is the technical term for uncertainty. It is not a moral judgment or indication of a
mistake. However, one source of error can be mistakes in performing the experiment.
Basically, everything said previously about uncertainty applies to error.

The challenge for any scientist is to accurately assess sources of error and their
magnitudes. Three mistakes students make are (1) listing every conceivable source of
error in this universe or any other, regardless of likely impact; (2) misassigning the
impact of real errors, that is, giving too much influence to a minor source and not enough
to a major source; and (3) assigning all error to the experimenter's technique. This section
will try to help establish some perspective.

Typically, error comes from inherent limitations in experimental technique. Therefore,
the ability to evaluate each part of an experiment for its weaknesses and limitations, and
how to quantify them, is important. Equally important is the ability to determine how
limitations from different parts of the experiment interactdo they reinforce and
maximize error or do they offset each other, at least partially? This is where the idea of
error propagation comes in. Statisticians have developed equations to calculate the
maximum probable error and also the maximum error. The first allows for some
offsetting of error, while the second assumes all error is additive. Understanding both is
helpful in refining procedures as the goal is to minimize error and to design experiments
where they cancel each other as much as possible.

3.3.1 Types of Error

It is now appropriate to discuss the three types of error: gross, systematic, and random.
Gross error is simply user error: actual mistakes. These need to be avoided and/or eliminated.

Systematic Error
Systematic error is uncertainty in a measurement due to a constant effect. In other
words, a speedometer that reads 5 miles per hour fast exhibits a systematic error of +5
mph. Systematic errors are caused by poor calibration, influence of a constant external
contaminant, or by a consistent mistake by the experimenter. A good experiment design
eliminates known systematic errors and looks for methods to reveal those that are
unknown. Thus, the best way to uncover and remove these determinate errors is by
using two different methods to obtain the data. Such orthogonal methods lead to the
same information from paths completely independent of each other. This acts as a
check. If the results are basically the same, then determinate errors are probably
minimized. If they are significantly different, then both methods must be examined
carefully, and every assumption reassessed.

Systematic errors have the advantage of being invisible when taking difference
measurements or similar. If all measurements are off by the same amount, then when
subtracting one measurement from another, the systematic error is subtracted out. There
are also systematic errors that are not constant but occur in a constant proportion. For
example, a speedometer that has a 10% systematic error is always off by 10%: at
10 mph it reads 11, but at 100 mph it reads 110. Regardless, the error is always in the
same direction (i.e., it doesn't read fast sometimes and slow other times) and the effect
is constant in some way. This is known as instrumental bias or error. Thus, synonyms
for systematic error are determinate error and bias.

Systematic errors tend to reflect errors in experimental accuracy, because they take what
would otherwise be accurate data and shift the results by the magnitude of the error,
Table 3.3 Tolerances for Volumetric Glassware (ASTM E288-06, Standard Specification for
Laboratory Glass Volumetric Flasks, and ASTM E969-02(2007), Standard Specification for Glass
Volumetric (Transfer) Pipets). Class A items are marked and made to higher tolerances; Class B may or
may not be marked.

Capacity   Volumetric Flasks (±mL)    Volumetric Pipettes (±mL)
(mL)       Class A     Class B        Class A     Class B
2          --          --             0.006       0.012
5          0.02        0.04           0.01        0.02
10         0.02        0.04           0.02        0.04
25         0.03        0.06           0.03        0.06
50         0.05        0.10           0.05        0.10
100        0.08        0.16           0.08        0.16
200        0.10        0.20           --          --
500        0.20        0.40           --          --
1000       0.30        0.60           --          --

reducing the accuracy but without impacting the precision. Consequently, as
contradictory as it seems, it is better for random error to be larger in magnitude than
systematic error. This is because the statistical methods discussed later in the chapter
address random error; if systematic error is larger, then those methods are far less
effective. Therefore, while we want to eliminate as much error as possible of any kind,
systematic error needs to be the smaller of the two main types.

Random Error
Random error is just what it sounds like. The reading can be off by some amount that is
either higher or lower than the actual value, and the amount may vary, so it is also
known as indeterminate error. This is the effect of noise in the signal or random
inconsistencies in measurement, which is why it is important when taking measurements
to take them exactly the same way always, so that any errors in reading (personal
bias/error) are performed systematically and have the exact same impact on the data.

Since random error is a reflection of the fuzziness in our ability to take and read
measurements, it tends to be associated with the precision of the measurement. The
more precise the measure, the smaller the random error should be.

An in-lab example of increased random error would be using a graduated cylinder for a
series of dilutions instead of volumetric flasks. As shown in Table 3.3, different types of
glassware are designed to different tolerances. Keep in mind that the glassware must be
absolutely clean for these tolerances to hold.

In an ideal world, the only error is random error, and it is small enough to be Heisenberg-
limited. In the real world, not even all systematic error can always be eliminated.
Therefore, your job is to determine (1) what error is present, (2) what type it is, and (3)
how important that source is.

For instance, a student once said in a report for an experiment in a temperature-
regulated oil bath that she had a very large error in her results because the room
temperature changed by 1/10 of 1 °F over the course of 3 hours. Is this a possible
source of experimental error? Yes. Is it a systematic or random error? As presented in
her paper, it is likely a systematic error, unless it fluctuated back and forth by that tenth
of a degree. Is it a reasonable source, especially a reasonably large source? No! An oil
bath, especially one that is temperature-regulated, insulates the sample from the air
temperature. Unless doing highly precise work, which this course does not do, a change
of 1/10 of a degree is not even worth mentioning, especially over the course of 3 hours.
The procedure for the oil bath experiment calls for the temperature of the bath to be
changed by 30 °C or more during that same 3 hours. How could that small a room-
temperature change have any kind of impact?

Four questions to ask about error in an experiment:
(1) What sources of error are present?
(2) Are they gross, systematic, or random?
(3) Based on the procedure performed, is this source likely to have a significant impact
on the results?
(4) If so, how big is that impact, quantitatively?

As you consider your experiment, certain types of error should be obvious from the
specific procedure and equipment (method bias/error). Beyond that, brainstorming is a
great way to make sure nothing has been missed. In either case, ask the four questions
above for each to determine if it is a significant source of experimental uncertainty.

It is also important when reporting your work to discuss not only the significant sources
of error, but also sources that one would expect to be significant but weren't. When
reading a lab report or journal article, a scientist is internally asking about different
sources, and if your paper does not even mention an expected source, then your results
will be questionable. Therefore, a good error analysis discusses important sources
whether or not they are significant, and why or why not. Perhaps you designed the
experiment to eliminate a specific source. Say so. Perhaps some serendipitous condition
negated the effect of a common source. Explain that also. The ability to competently
analyze error is what allows major discoveries to arise from noticing small
irregularities and not ignoring them.

To sum up this section, Table 3.1 contained several notes made about irregularities.
Table 3.4 lists them and gives the type of error and significance. Note that in your
reports, this discussion should take place in the text, rather than in a table.

3.3.2 Absolute and Relative Error
Just as with precision, error is expressed either in terms of absolute error (uncertainty)
or relative error (uncertainty). For example, the uncertainty in any given mass
measurement in the binder clip experiment is given by the error in the scale reading.
Clip 33 has a mass measured to be 8.286 g. The uncertainty in any measurement on that
scale is ±0.001 g, so in absolute terms the mass of clip 33 is 8.286 ± 0.001 g. To find the
relative error, divide the uncertainty by the mass: 0.001/8.286 = 0.00012, or 0.01%,
which is expressed as 8.286 g ± 0.01%. The relative form is often more informative: a
grain of wheat may have a mass of a milligram, and while the absolute error is the same
0.001 g, the relative error is now 100%!
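The conversion is trivial but worth seeing once, using the clip 33 and grain-of-wheat numbers from the paragraph above:

```python
mass, u = 8.286, 0.001      # clip 33: 8.286 g, +/-0.001 g scale uncertainty
rel = u / mass * 100        # relative error in percent (~0.012%)

# a one-milligram grain of wheat weighed on the same scale
grain_mass = 0.001          # g
rel_grain = u / grain_mass * 100   # same absolute error, 100% relative error
```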

3.3.3 Error Propagation
In section 3.3, the idea of propagating the effects of measurement error through all
stages of a data analysis was mentioned. Here we will go into more detail. It should
make sense that just recording uncertainty in measurements is merely a first step. As we
perform calculations on the data, the uncertainty does not go away, but has an impact on
the validity of each step of the calculation. To add insult to injury, since each variable
and each constant has some level of uncertainty, all of them must combine in some way
to affect the final results.
Table 3.4 Sources of Binder Clip Experiment Error and Significance

Error | Type | Significance
Did not tare | Gross | In this case, the deviation was
Left balance door open | Gross | Same
Leaned hard on bench top (2x) | Gross/Systematic | The deviation was measurable and constant, worth quantifying
Manufacturer 2, Country 2 | Random | Normal data variation given our population; however, should be Q-tested to see if an outlier
LARGE binder clip | Gross | If testing medium clips, then using a large one is a clear mistake and its data should be rejected outright
Manufacturer 1, Country 1, used | Random | Normal data variation given our population; however, should be Q-tested to see if an outlier
Manufacturer 3, Country 1 | Random | Normal data variation
Manufacturer 2, Country 1 | Random | Normal data variation
Not wearing gloves to prevent fingerprints on clips/cleaning clips before putting on balance | Method error | Not significant. The clips are sufficiently massive that minor finger oils and dust are negligible. For smaller objects, this can be an issue

Partial Derivatives
Before going farther, it will be helpful to review basic partial differentiation. If
you have a simple equation such as y = mx + b, then taking the derivative is pretty easy:
dy/dx = m. If, on the other hand, the equation is multivariate, say,
f(x, y) = 3x² + 42xy³ + e^y, we might have a bit more of a problem. However, it turns out
that, as in regular algebra, we can ignore the parts we don't like by redefining them
temporarily. In this case, we take partial derivatives: take the derivative of the entire
equation with respect to one variable, treating the other(s) as constants, then take the
derivative with respect to each of the remaining variables, treating the previous variables
as constants. To wit:

(∂f/∂x)_y = 6x + 42y³  and  (∂f/∂y)_x = 126xy² + e^y
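The partials above are easy to check numerically with a central difference, which is also a handy trick whenever you doubt your own calculus. A minimal sketch:

```python
import math

def f(x, y):
    """The example function above: f(x, y) = 3x^2 + 42x*y^3 + e^y."""
    return 3 * x**2 + 42 * x * y**3 + math.exp(y)

def df_dx(x, y):
    """Partial derivative with respect to x, holding y constant."""
    return 6 * x + 42 * y**3

def df_dy(x, y):
    """Partial derivative with respect to y, holding x constant."""
    return 126 * x * y**2 + math.exp(y)

def central_diff(g, x, y, wrt, h=1e-6):
    """Numerical partial derivative by central difference, as a sanity check."""
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)
```

At, say, (x, y) = (1, 2), the analytic and numerical values agree to many digits.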
A partial derivative is denoted by the character ∂ rather than the normal d. Also, the
variable(s) held constant are denoted by a subscript outside the parentheses. This
practice is a fundamental part of data and error analysis. It will likely be very helpful to
have a table of derivatives and integrals handy; the CRC Handbook has a fairly
extensive list of the ones most likely to be found in chemistry.

Maximum Propagated Error

At the very minimum, it is reasonable to expect that a result cannot be any more certain
than the variable with the largest error. To be cynical, we could expect all errors to
combine in the worst possible way, giving what is known as the maximum propagated
error. For an arbitrary physical quantity h that depends on variables x, y, and z, we have

h = f(x, y, z)  and  dh = (∂h/∂x)_yz dx + (∂h/∂y)_xz dy + (∂h/∂z)_xy dz      (3.14, 3.15)
c c c
For example, the Coulombic interaction between an electron and a helium nucleus
separated by 1.00 ± 0.01 pm is found by using Coulomb's Law:

F = q₁q₂ / (4πε₀r²)      (3.16)

leading to

dF = (∂F/∂π)dπ + (∂F/∂ε₀)dε₀ + (∂F/∂q₁)dq₁ + (∂F/∂q₂)dq₂ + (∂F/∂r)dr      (3.17)

dF = |q₁q₂/(4π²ε₀r²)| dπ + |q₁q₂/(4πε₀²r²)| dε₀ + |q₂/(4πε₀r²)| dq₁
     + |q₁/(4πε₀r²)| dq₂ + |2q₁q₂/(4πε₀r³)| dr      (3.18)

Note that, according to Table 3.5, the electric constant ε₀ is 8.854187817×10⁻¹² F/m and
is defined to be exact. Since it is an exact number, its uncertainty is 0 (zero), so
the second term in Eqn 3.18 is zero and can be dropped. For pi, the uncertainty is
taken to be ±1 in whatever the last decimal place we use (i.e., 3.14 ± 0.01). In this case, q₁
is equal to the charge on an electron, the elementary charge, given as 1.60217653×10⁻¹⁹
C with an uncertainty of 8.5×10⁻²⁷ C, so it would probably
be better to state pi to ten to twelve places. Note also that relative uncertainties must be

Table 3.5 Fundamental Physical Constants with Associated Relative Uncertainties
(accessed June 16, 2008)*

* Note that the Gas Constant value and uncertainty is R = 8.314471 ± 0.000014 J/(mol·K). Converting
to other units will require propagation of the uncertainty through the conversion. (Accessed June 22, 2008.)
converted back to absolute uncertainties. The helium nucleus contains two protons, so
q₂ = 2e. The distance and its uncertainty are given above. Putting everything together
(using pi to ten places and dropping its uncertainty term), we get:

F = (1.60217653×10⁻¹⁹ C)(2 × 1.60217653×10⁻¹⁹ C)
    / [4(3.1415926535898)(8.854187817×10⁻¹² F/m)(1.00×10⁻¹² m)²]

  = (5.13393927×10⁻³⁸ C²) / [(1.11265006×10⁻¹⁰ F/m)(1.00×10⁻²⁴ m²)]
  = 4.61415450×10⁻⁴ N      (3.19)
I leave the resolving of the units for the reader. Now for the uncertainty:
dF = 0 + 0
     + [2(1.60217653×10⁻¹⁹ C) / ((1.11265006×10⁻¹⁰ F/m)(1.00×10⁻²⁴ m²))](8.5×10⁻²⁷ C)
     + [(1.60217653×10⁻¹⁹ C) / ((1.11265006×10⁻¹⁰ F/m)(1.00×10⁻²⁴ m²))](8.5×10⁻²⁷ C)
     + [2(1.60217653×10⁻¹⁹ C)(2 × 1.60217653×10⁻¹⁹ C)
        / ((1.11265006×10⁻¹⁰ F/m)(1.00×10⁻¹² m)³)](1.00×10⁻¹⁴ m)      (3.20-3.22)

dF = 0 + 2.44793956×10⁻¹¹ + 1.22396978×10⁻¹¹ + 9.228309007×10⁻⁶ N

dF = 9.228345726×10⁻⁶ N      (3.23)

This gives a final answer of

F = 4.61415450×10⁻⁴ ± 9.228345726×10⁻⁶ N
F = 4.614×10⁻⁴ ± 9.2×10⁻⁶ N
F = (4.614 ± 0.092)×10⁻⁴ N      (3.24-3.26)

Relative Error
While the above is a typical way of expressing the error, another way is the relative
error, which is

dF/F = (9.228345726×10⁻⁶ N / 4.61415450×10⁻⁴ N) × 100% ≈ 2%      (3.27)
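The whole worked example above can be reproduced in a few lines; the sketch below follows the text's arithmetic exactly, including its choices to drop the pi term, set the ε₀ uncertainty to zero, and carry the same 8.5×10⁻²⁷ C uncertainty into both charges:

```python
import math

e = 1.60217653e-19      # elementary charge, C
de = 8.5e-27            # charge uncertainty as used in this example, C
eps0 = 8.854187817e-12  # electric constant, F/m (exact, so its term is 0)
r, dr = 1.00e-12, 0.01e-12  # separation and its uncertainty, m

q1, q2 = e, 2 * e       # electron and helium nucleus (two protons)
denom = 4 * math.pi * eps0 * r**2

F = q1 * q2 / denom     # Coulomb's Law, Eqn 3.16

# maximum propagated error, Eqn 3.18 (pi and eps0 terms zero):
dF = (q2 / denom) * de + (q1 / denom) * de + (2 * q1 * q2 / (denom * r)) * dr
```

Running this recovers F ≈ 4.614×10⁻⁴ N and dF ≈ 9.23×10⁻⁶ N, matching Eqns 3.19-3.23.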
This amount is reasonable, but it is still helpful to evaluate the experimental procedure
to see if increased precision can be obtained to lower the uncertainty.

Probable Propagated Error

The maximum propagated error assumes that all of the errors are additive, but in reality
it can be expected that some errors will offset, so that the actual uncertainty is
somewhere between the maximum propagated error and the minimum error (assumed to
be the largest single uncertainty among the variables). The probable propagated error
depends on the mathematical operations of the function, and for complex functions
requires an iterative process. The key requirement is that all variables MUST be
independent and the uncertainties random.


Note that in y = abc, the variables a, b, and c are all independent. In y = ax², a and x are
independent of each other, but when the function is written as y = a·x·x, we can see that x
is not independent of itself; that case is handled in the section on combined functions below.

Additive/Subtractive Functions
If f(a, b, c) = 2a − b + 3c, then the uncertainty in the function is

εf = sqrt[ (2 da)² + (db)² + (3 dc)² ]      (3.28)

assuming that da, db, and dc are the absolute uncertainties of a, b, and c (each constant
coefficient multiplies its term's uncertainty). This method is known as the quadrature rule.

Multiplicative/Divisive Functions
For a function where the variables combine to form products and/or quotients, with
known absolute uncertainties in each variable, we use a similar quadrature formula on
the relative uncertainties:

If f(a, b, c) = a/(bc), then df/f = sqrt[ (da/a)² + (db/b)² + (dc/c)² ]      (3.29)

Exponential Functions
For functions where a variable is raised to a power, such as f(a) = aⁿ, the
uncertainty is given by

df/f = n (da/a)      (3.30)
= (3.30) Log/Antilog Functions
Next consider functions such as f(x) = log x. The uncertainty in these is simple:

df = 0.434 (dx/x)      (3.31)

where the dx/x term is simply the relative error in the variable. For natural logs, the
formula is the same, but without the 0.434 factor.
The formulas for antilogs are as follows:

f(x) = eˣ:  df/f = dx      (3.32)
f(x) = 10ˣ:  df/f = 2.303 dx      (3.33)
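The 0.434 and 2.303 factors in Eqns 3.31 and 3.33 are just rounded values of 1/ln 10 and ln 10, which a two-line numerical check confirms (the x and dx values below are arbitrary illustrations):

```python
import math

x, dx = 100.0, 1.0
df_log = 0.434 * (dx / x)         # Eqn 3.31 for f(x) = log10(x)
exact = dx / (x * math.log(10))   # from the exact derivative, 1/(x ln 10)

y_unc = 0.01                      # uncertainty dx in the exponent of f = 10**x
rel_antilog = 2.303 * y_unc       # Eqn 3.33: df/f
exact_rel = math.log(10) * y_unc  # exact factor is ln 10 = 2.302585...
```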
Trigonometric Functions
There are many trig functions, and as the error formulas are basically their derivatives,
only one is presented here; the rest can be obtained from a table of trig derivatives.
Keep in mind that these formulas work only for values in radians, not degrees.

f(x) = sin x:  df = cos x · dx      (3.34)

Normal Functions Often Combine One or More of These Functions
When a needed equation uses several of the above operations, the complexity
increases, but it is still manageable. The trick is to handle each type of calculation
individually, then use the result in each subsequent calculation. To illustrate this, use our
Coulombic attraction example, which has an exponential and several products and
quotients:

F = q₁q₂ / (4πε₀r²)      (3.35)

To get the uncertainty in F, let's first redefine r⁻² to be A:

F = (1/(4πε₀)) q₁q₂ A      (3.36)
This leads to

dA/A = 2 (dr/r)      (3.37)

(dF/F)² = (dε₀/ε₀)² + (dq₁/q₁)² + (dq₂/q₂)² + (dA/A)²      (3.38)

dF/F = sqrt[ (dq₁/q₁)² + (dq₂/q₂)² + (dε₀/ε₀)² + (2 dr/r)² ]      (3.39)
dF/F = sqrt[ (8.5×10⁻²⁷/1.60217653×10⁻¹⁹)² + (8.5×10⁻²⁷/(2 × 1.60217653×10⁻¹⁹))²
             + (0/8.854187817×10⁻¹²)² + 4(1.00×10⁻¹⁴/1.00×10⁻¹²)² ]      (3.40)

     = sqrt[ (5.305283058×10⁻⁸)² + (2.652641529×10⁻⁸)² + 0 + 4(1×10⁻²)² ]      (3.41)

     = sqrt[ 2.81460283×10⁻¹⁵ + 7.03650708×10⁻¹⁶ + 0 + 4×10⁻⁴ ]      (3.42)

     ≈ sqrt[ 4×10⁻⁴ ] = 2×10⁻² = 2% relative error      (3.43)
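The same quadrature chain, scripted. As above, ε₀ is exact (relative uncertainty 0), and r enters as r⁻², so the exponent rule (Eqn 3.37) doubles its relative uncertainty:

```python
import math

def quadrature_rel(*rel_uncs):
    """Probable propagated relative error for products/quotients (Eqns 3.38-3.39):
    the square root of the sum of the squared relative uncertainties."""
    return math.sqrt(sum(u * u for u in rel_uncs))

e, de = 1.60217653e-19, 8.5e-27   # elementary charge and its uncertainty, C
r, dr = 1.00e-12, 0.01e-12        # separation and its uncertainty, m

rel_dF = quadrature_rel(de / e,          # q1 = e
                        de / (2 * e),    # q2 = 2e
                        0.0,             # eps0 is exact
                        2 * dr / r)      # r**-2 doubles dr/r
```

The distance term dominates completely, so rel_dF comes out at 2%, the same as the maximum propagated error here.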

Note that even though ε₀ is exact, we do not ignore it; we include it with an
uncertainty of 0.

The probable propagated error should never be larger than the maximum and, in general,
the more independent variables in the calculation, the greater the difference between the
two methods. In this case, r is not independent and it had the largest error, resulting in
the negligible difference between the maximum and probable errors.

Combining Systematic and Random Error
If a scientist is successful in eliminating all systematic error, then the only remaining
factor is the random error. This is highly unusual, especially in the undergraduate lab.
The final bit of the propagation puzzle is to propagate the systematic errors (which
must keep the sign of each error, since a systematic error does not randomly fluctuate
about the value), and then use the quadrature equation to combine the systematic and
random errors to give the total error:

ε_tot = sqrt[ (de_random)² + (de_syst)² ]      (3.44)
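Eqn 3.44 in code, with purely illustrative numbers (not from the clip experiment):

```python
import math

def total_error(random_err, syst_err):
    """Eqn 3.44: combine propagated random and systematic errors in quadrature."""
    return math.sqrt(random_err**2 + syst_err**2)

tot = total_error(0.03, 0.04)   # illustrative 3-4-5 pair, giving 0.05
```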


3.4 Graphing: A Picture Is Worth 1000 Words, IF It Is Clear
A graph is a pictorial display of your data. It is designed to display clearly the
relationships of the investigated variables (linearity or not, maxima/minima, inflection
points). Just as the histogram provided the graphical basis for means and uncertainties
for raw data, an xy graph (often called a scatter plot) can do the same thing for both raw
and calculated data, even showing how closely the trends in the data are supported by
the data. In some cases, graphical analyses are easier to perform than rigid numerical
analyses, and even though both should lead to the same results, the graphical methods
tend to be less accurate. This section will therefore discuss proper formatting of graphs
and then using them to show the significance of your data analysis via a method known
as least squares analysis.

3.4.1 Graph Format
First, ALL graphs must be produced on the computer. The computer can make a
graph faster and more accurately from data than is possible by hand. It is also able to
update the graph in real time as the parameters are manipulated.

Key formatting features:
- Descriptive but concise title (in several reports you will have to make numerous
similar graphs; make sure that each is easily identified in English rather than
jargon or code).
- Proper labels on axes ("x" or "y" alone is a no-no; use a descriptive title with
words and, in parentheses, give any symbolic notation and units).
- DO display the trendline. Include the equation and R² value. Position it in a
blank area of the graph rather than in the margins, so the graph can be as large
as possible.
- Change the variables in the equation from x and y to the actual variables for
which they stand.
- Unless specifically needed, do NOT draw a line that connects the dots.
- Unless multiple data sets are shown on the same graph, do NOT display the
legend. If only one thing is being graphed, the legend is redundant.
- Do NOT force the graph's origin to (0,0). Zoom in on the data of interest so that
it fills the entire useful area of the graph. The goal is to highlight the data, so
make it as large as practical.
- Do NOT use a shaded background for the graph. This is the Excel default
setting. Change it.
- Do NOT show gridlines. This is also a default and is unacceptable.
- Make all font sizes legible, and match the font to the report text.
- See Figure 3.2 to compare a graph using Excel defaults and poor labeling to an
acceptable graph.


3.4.2 Least Squares Analysis

The range has limited usefulness because it says nothing about how the data are
arranged about the average. Therefore, we often look at the deviation or residual of
each value from the average:

r_i = x_i - \bar{x} \qquad (3.45)

The average residual, however, is

\bar{r} \approx 0 \qquad (3.46)

because the residuals above the average should cancel out the residuals below the
average. Since that is not terribly helpful, there are two ways of getting around the
problem: taking the absolute value of the residual,

r_i = \left|x_i - \bar{x}\right| \qquad (3.47)

or, more commonly, using the squared residual:

r_i^2 = \left(y_i - \hat{y}_i\right)^2 = \left(y_i - \left(m x_i + b\right)\right)^2 \qquad (3.48)

The last equality shows that if the data can be expressed in the form of a linear equation,
then the actual y-values (dependent variable) are compared to the values obtained by
plugging the corresponding x-values (independent variable) into the equation. This is
analogous to subtracting the real value from the average in that it gives the distance of
the value from the line.

Now, we can do something useful by differentiating the sum of the squared residuals in
equation 3.48 twice, once with respect to m and once with respect to b, and setting each
derivative to zero to locate the minimum.

\frac{\partial}{\partial m}\sum_{i=1}^{N}\left(y_i - m x_i - b\right)^2 = -2\sum_{i=1}^{N} x_i \left(y_i - m x_i - b\right) = 0 \qquad (3.49)

\frac{\partial}{\partial b}\sum_{i=1}^{N}\left(y_i - m x_i - b\right)^2 = -2\sum_{i=1}^{N}\left(y_i - m x_i - b\right)(1) = 0 \qquad (3.50)
Next, several convenient summary equations will be defined and used to rearrange the
above expressions.
[Figure 3.2: Examples of a) unacceptable and b) acceptable graphs. The unacceptable
version uses Excel defaults: generic title "Viscosity vs. Concentration," a redundant
legend "Linear (Viscosity)," and the raw trendline text "y = -0.3844x + 0.7685,
R² = 0.4996." The acceptable version is titled "Intrinsic Viscosity of the Cleaved
Polymer," labels the x-axis "Concentration (PVOH g/100mL)," and displays the fitted
equation in its real variables: Nsp/c = -0.384(Conc) + 0.768, R² = 0.499.]

S_x = \sum_{i=1}^{N} x_i \qquad S_{xx} = \sum_{i=1}^{N} x_i^2 \qquad S_y = \sum_{i=1}^{N} y_i \qquad S_{yy} = \sum_{i=1}^{N} y_i^2

S_{xy} = \sum_{i=1}^{N} x_i y_i \qquad D = N S_{xx} - S_x^2 \qquad (3.51\text{-}3.56)

This gives us:

m = \frac{N S_{xy} - S_x S_y}{D} \qquad b = \frac{S_{xx} S_y - S_x S_{xy}}{D} \qquad (3.57\text{-}3.58)
While these equations look a little intimidating, they are extraordinarily easy to program
into a spreadsheet. These calculations do something very special for usthey give us
the slope and intercept of a straight line that passes through our data in such a way that
all of the residuals are minimized. In other words, the slope and y-intercept in the above
equations give us the line of best fit.
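The summary sums above translate almost line for line into code. Here is a minimal sketch in Python rather than a spreadsheet (the function name and data are made up):

```python
def linear_fit(xs, ys):
    # Least-squares slope and intercept from the summary sums
    # (the S_x, S_xx, S_y, S_xy, and D quantities in the text).
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx ** 2
    m = (n * sxy - sx * sy) / d
    b = (sxx * sy - sx * sxy) / d
    return m, b

# Made-up data lying exactly on y = 2x + 1:
print(linear_fit([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```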

Combinations of the above formulas offer other useful information, including other
equations for m and b.

standard deviations in x and y:

s_x = \sqrt{\frac{S_{xx} - S_x^2/N}{N-1}} \qquad s_y = \sqrt{\frac{S_{yy} - S_y^2/N}{N-1}}

standard deviation of the residuals:

s_r = \sqrt{\frac{\sum_{i=1}^{N}\left(y_i - m x_i - b\right)^2}{N-2}}

correlation coefficient:

R = \frac{N S_{xy} - S_x S_y}{\sqrt{\left(N S_{xx} - S_x^2\right)\left(N S_{yy} - S_y^2\right)}}

uncertainty in the slope m:

s_m = s_r\sqrt{\frac{N}{D}}

y-intercept b and uncertainty in b:

b = \bar{y} - m\bar{x} \qquad s_b = s_r\sqrt{\frac{S_{xx}}{D}}
The correlation coefficient is a value that ranges from -1 to +1 and gives an indication
of how closely the points fit the line. R = 0 indicates no correlation, whereas R = ±1
indicates perfect correlation. Often a value very close to 1 will be obtained (e.g.,
0.99954); such values should be reported to the second digit that is not a 9. Note that
these values can be deceptively high. In general, the correlation coefficient is most
informative when it is far from one; if it is close to one, that is nice, but do not attach
too much significance to it.
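A Python sketch of these fit statistics (the function name is made up; the residuals use N − 2 degrees of freedom, since two parameters were fit):

```python
import math

def fit_statistics(xs, ys, m, b):
    # Correlation coefficient and slope/intercept uncertainties for a
    # least-squares line, from the same summary sums as the fit itself.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx ** 2
    r = (n * sxy - sx * sy) / math.sqrt(d * (n * syy - sy ** 2))
    # Standard deviation of the residuals (N - 2 degrees of freedom):
    sr = math.sqrt(sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys)) / (n - 2))
    sm = sr * math.sqrt(n / d)    # uncertainty in the slope
    sb = sr * math.sqrt(sxx / d)  # uncertainty in the intercept
    return r, sm, sb
```

For made-up data lying exactly on a line, R comes out 1 and both uncertainties are zero.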

One of the main advantages of a least squares line is that it allows interpolation: a y
value can be calculated for any given x (or vice versa). This is a main function of
calibration plots.

In your reports, you do not need to show a sample calculation for least squares fitting.
Just use the results. For the homework assignment, you will need to show your work.

(Sidebar: To program a spreadsheet to do these calculations, all that is needed is an
x column, a y column, a column that squares x (=x*x or =x^2), a column that multiplies
x and y (=x*y), and a column that squares y (=y*y or =y^2). The next step is to have a
cell at the bottom of each column with the formula =SUM(x1:xn). Finally, have cells
with formulas for N, D, m, and b (for N, =COUNT(x1:xn) works).)

Some points to consider during graphical analysis:
- You will almost never have an R = 1, which would mean every data point falls
exactly on the trendline. (I did get an R = 1 once.)
- Choose the appropriate equation to graph based on your data, and arrange it to
get a linear equation (if possible) using the variables assigned to your data,
assigning the X variable to the data stream that you varied (the independent
variable: time, temperature, pressure) and the Y variable to the other stream (the
results you obtained from varying the X stream). Keep in mind that the variables
may be inverses, natural logs, and so on.
- If you cannot get a linear equation, that is okay, but you will need to read your
textbook regarding how to do a nonlinear regression.
- Plot your data (with X on the horizontal axis and Y on the vertical), and get your
trendline and equation.
- Remember that slope and intercept have units. The intercept is in the same units
as any other Y-value, and the slope is Y-units/X-units.

As an example of transforming a nonlinear equation to a linear one, look at the
relationship between viscosity and temperature.

\eta = A e^{E_a / RT}

In the experiment, viscosity (η) is measured as temperature (T) changes, so those are our
y and x variables, respectively. As it is an exponential equation, we cannot simply
rearrange it into slope-intercept form. We must first take the natural log of both sides:

\ln\!\left(\frac{1}{\eta}\right) = \ln\!\left(\frac{1}{A}\right) + \left(-\frac{E_a}{R}\right)\left(\frac{1}{T}\right)
\qquad y = b + m\,x

So x = (1/T) and y = ln(1/η), with an intercept equal to ln(1/A) and a slope m = (-E_a/R).
Similarly, sometimes it is possible to linearize an equation by making x some power of
T instead of T itself, and so on.
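A sketch of this linearization in Python, using hypothetical viscosity data (the values are made up, chosen to be roughly water-like):

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Hypothetical data set: viscosity (cP) at several temperatures (K).
temps = [293.15, 303.15, 313.15, 323.15]
etas = [1.002, 0.797, 0.653, 0.547]

# Linearize: x = 1/T, y = ln(1/eta); the slope is then -Ea/R.
xs = [1.0 / t for t in temps]
ys = [math.log(1.0 / e) for e in etas]

# Ordinary least-squares slope from the summary sums:
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)

Ea = -m * R  # J/mol; roughly 16 kJ/mol for these made-up values
```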

3.4.3 Error Bars
Error bars are lines extending from individual data points that indicate the uncertainty in
that data point, in both the x- and y-directions. This gives a graphic representation of the
precision of each data point. Ideally, the trendline will pass through the area defined by
error bars for each and every point, which serves as a further indication of the fit quality.
They can either be entered manually or Excel can calculate and display them
automatically. Kaleidagraph, another program that focuses on graphing, displays error
bars much more easily than Excel does.

Sometimes error bars will be so tiny that they are not distinguishable from the data point
icon. If so, change the icon to something smaller. If they cannot be made visible, then
simply note it in the caption and in the text. This will ensure you receive credit for doing
them even though they are not visible.

3.5 Presenting ResultsSharing Numbers Meaningfully
When presenting results, it is important to communicate the numbers clearly. This
includes expressing them to the proper level of precision, rounding appropriately, giving
the error, and giving the right units. Now that you have put so much work into obtaining
these results, take the extra bit of time to show them off properly. Current best practices
dictate the reporting of both confidence limits and sample size when reporting data:

\bar{x} = 12.38 \pm 0.02 \quad (95\%\ \mathrm{CL},\ N = 9) \qquad (3.70)
However, other methods are still allowed and will be discussed later. Ideally, all of these
are properly applied together, but use what is available.

3.5.1 Significant Figures
While merely defining the accuracy of results through significant figures is minimally
acceptable, it is important to always keep track of them and express them properly.
When reporting a number, all but the last digit are considered certain, and the last one is
uncertain. However, rules define what is or isnt a significant digit.
- Each of the digits 1, 2, 3, 4, 5, 6, 7, 8, and 9 is always significant.
- Any number used as an exact count is infinitely significant (e.g., "there are 18
students registered for lab" gives an exact counting number; saying there are
"about twenty" students is not exact, and there is uncertainty in the number).
- Zero may or may not be significant.
o If the zero is between two significant figures, it is significant, regardless
of the decimal point's location (100.01 has 5 sig figs).
o A leading zero is never significant (0.001 has only one sig fig).
o A trailing zero to the left of the decimal point is a placeholder and is not
significant unless a decimal point is present (100 has 1 sig fig, 100. has 3
sig figs, 100.0 has 4, etc.). To be clear, it is better to express such
numbers in scientific notation, because any zero written in scientific
notation is significant (1.00×10² has 3 sig figs and is the preferred way
to express 100; likewise, 1.000×10² is better than 100.0 and still has 4
sig figs).
o Any trailing zero to the right of the decimal point is significant (0.300 has
3 sig figs; scientific notation is still the preferred way to express this
value.)
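The zero rules above can be sketched as a small Python helper. The function is hypothetical and operates on the printed string form of a number, treating trailing zeros in a number without a decimal point (like 100) as not significant:

```python
def sig_figs(s):
    # Count significant figures in a plain decimal string like '100.0'.
    s = s.lstrip('+-')
    digits = s.replace('.', '').lstrip('0')  # leading zeros never count
    if '.' not in s:
        digits = digits.rstrip('0')          # trailing zeros, no decimal point
    return len(digits)

print(sig_figs('100'))      # 1
print(sig_figs('100.'))     # 3
print(sig_figs('100.0'))    # 4
print(sig_figs('0.00300'))  # 3
```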
In any case, the last significant digit expresses the order of magnitude of the uncertainty
(due to its location) and the expected value of that uncertain digit (due to its value).
There is still considerable uncertainty about the magnitude of the error, however: if a
number is expressed as 54.67, the uncertainty can be anywhere from ±0.005 to ±0.05.
Therefore, in addition to having the proper significant figures, an explicit value for the
uncertainty should be given.

Significant Figures in Calculations
The general principle in carrying sig figs through a calculation is that the final answer
can be no more certain than the least certain component. In other words, the number in
the calculation with the fewest sig figs limits the number of significant figures in the
final result. The rules for determining that limit depend on the operations being
performed.

Addition and Subtraction
When taking the sum or difference of a group of numbers, the result is limited by the
number with the fewest digits after the decimal point (to the right of the decimal).


529.14 + 5.782 + 7.09 = 542.012 \approx 542.01 \qquad (3.71)

Even though the first number has five significant figures, it only goes to the hundredths
place, so the final answer must end in the hundredths place, in spite of the fact that
5.782 has fewer sig figs.

Multiplication and Division
Products and quotients on the surface seem more straightforward: the final answer has
the same number of significant figures as the factor with the fewest sig figs.

\frac{12.25 \times 4.0182}{52} = 0.946595192 \approx 0.95 \qquad (3.72)
BUT, this is statistics and PChem, so of course it isn't that simple. We have to consider
relative uncertainties. According to the definition of significant figures, the last digit is
uncertain by approximately 1, so the relative uncertainties are 1/1225, 1/40182, and
1/52. To get the actual uncertainty in the answer, we multiply the result by the largest
relative uncertainty:

0.946595192 \times \frac{1}{52} = 0.0182 \approx 0.02 \qquad (3.73)
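This relative-uncertainty bookkeeping can be sketched in Python (the helper name is made up; values are taken from the example, written as strings so the sig figs are unambiguous):

```python
def relative_uncertainty(s):
    # Relative uncertainty of a measured value written as a string,
    # taking its last digit to be uncertain by about 1.
    digits = s.replace('.', '').lstrip('+-0')
    return 1.0 / int(digits)

result = 12.25 * 4.0182 / 52
unc = result * max(relative_uncertainty(s) for s in ('12.25', '4.0182', '52'))
print(round(result, 4), round(unc, 4))  # 0.9466 0.0182
```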
This indicates the answer should actually be rounded to 0.95, to the second decimal
place, as expected. If we change the equation slightly

\frac{12.25 \times 4.7384}{52} = 1.116257692 \approx 1.1 \qquad (3.74)
the relative uncertainty calculation gives us:

1.116257692 0.0215 0.02
= = (3.75)
which leaves us with an answer that is also uncertain in the second decimal place, but
that gives us three sig figs: 1.12.

Logs and Antilogs
As one would expect, propagating sig figs through logs is the least obvious (unless you
understand how logs work), but it isn't hard. Generally, follow these two rules:
1. When performing a logarithm, the answer should have as many sig figs to the
right of the decimal as the original number had sig figs.
2. When taking the antilog, do the reversethe answer should have the same sig
figs as the original number had to the decimals right.

3.5.2 Rounding Numbers
Rule number one of rounding: Never round until the entire calculation is complete!!!
You will lose significant digits along the way and it will give you a different answer.

There are several conventions for rounding. You may use any of them as long as you are
consistent. Some of the common ones include the following:
- If the digit to be dropped is odd, round down; if even, round up (or vice versa).
- If the digit to be dropped is 0-4, round down; if 5-9, round up.
- If the digit to be dropped is 0-4, round down; if 6-9, round up; if it is 5, round so
that the retained digit is even.
I tend to use the second one.
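For what it is worth, Python's built-in round() happens to follow the third convention (round half to even), which avoids a systematic upward bias over many roundings:

```python
# round() in Python rounds halves to the nearest even digit:
print(round(0.5))       # 0
print(round(1.5))       # 2
print(round(2.5))       # 2
print(round(0.125, 2))  # 0.12 (0.125 is exact in binary, so the
                        # half-to-even rule applies cleanly here)
```

Note that decimal values that are not exactly representable in binary can round in surprising directions, which is one more reason never to round intermediate results.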
Table 3.6 Common Methods of Reporting Error

Result            Error           Report as:
Using Standard Deviations:
5.28 cP           s = 0.06 cP     5.28(6) cP
0.912 g           s = 0.023 g     0.912(23) g
Using 95% Confidence Limits:
92.57 atm         ±0.18 atm       92.57 ± 0.18 atm
8.25×10³ kJ/mol   1×10¹ kJ/mol    (8.25 ± 0.01)×10³ kJ/mol

3.5.3 Reporting Error
Again, there are several conventions to reporting the error in a result. The most
traditional is 42.8734 g, although it is most common in contemporary literature to see
42.87(34) g which is easier to type than the former. In either case, the units are only
typed once, as both the number and the error have the same units. In Equation 3.70, it
was stated that best practices also include the confidence level and sample size.

Notice that the uncertainty presented in the previous paragraph is given in the last two
digits rather than just the last one. The question arises whether it should instead be
reported as 42.9(3). Overstating the error by one place is common practice, as it gives a
feel for how robust the error is (i.e., is it ±0.25 or ±0.34? Both round to ±0.3, but one is
almost 0.1 higher than the other).
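The compact parenthetical notation can be generated with a small helper (a sketch; the function name is made up, and it assumes the error is quoted to the same decimal place as the value):

```python
def compact_error(value, err, places):
    # Format "value(err-digits)" in the compact parenthetical style,
    # e.g. 42.87 +/- 0.34 g  ->  42.87(34) g.
    digits = int(round(err * 10 ** places))
    return f"{value:.{places}f}({digits})"

print(compact_error(42.87, 0.34, 2))  # 42.87(34)
print(compact_error(5.28, 0.06, 2))   # 5.28(6)
```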

Regardless, consistency is important. In a table of data, all of the values may have the
same uncertainty, in which case it is given with the first value in the column and the
assumption is that it is the same for the whole column. Sometimes, too, the uncertainty
(along with the units) will be given in the column heading to avoid repetition. It is also
important to state which kind of error is reported: absolute, relative, maximum,
probable, confidence limit, standard deviation, and so on. Table 3.6 gives a summary of
different ways to present error.

In a journal article, the calculations of the error are rarely shown, but in undergraduate
lab courses, example calculations should be shown so the grader can help find areas
where improvement is needed. It is much better for a grader to find mistakes than a peer
reviewer.

3.5.4 Unit Analysis
In chemistry, only a few numbers stand completely alone. The overwhelming majority
have two parts, a number and a unit. Most of these also have a tag identifying the
chemical to which the number and unit refer, that is, 18.02 g H2O. All three parts
must be present whenever appropriate so that the number has some context and
meaning. This is the heart of dimensional analysis as learned in high school and
freshman chemistry. Properly keeping track of the chemical tag and units is critical to
avoiding mistakes in data analysis.

The standard set of units in science is the metric system, also known as the International
System of Units (SI, Système International d'Unités), which has seven base units
(meter, kilogram, second, ampere, kelvin, mole, and candela), no end of derived units
(meters/second, etc.), and a large number of named derived units (pascal, newton,
farad, etc.) which can be expressed in terms of the base units. Being able to manipulate
these is important, as there are many hidden relationships that are revealed when you
start playing with the units.

Table 3.7 SI Prefixes

Prefix  Abbrev.  Multiplier        Prefix  Abbrev.  Multiplier
yotta   Y        10^24             deci    d        10^-1
zetta   Z        10^21             centi   c        10^-2
exa     E        10^18             milli   m        10^-3
peta    P        10^15             micro   µ        10^-6
tera    T        10^12             nano    n        10^-9
giga    G        10^9              pico    p        10^-12
mega    M        10^6              femto   f        10^-15
kilo    k        10^3              atto    a        10^-18
hecto   h        10^2              zepto   z        10^-21
deca    da       10^1              yocto   y        10^-24

Each of the base units and many of the others can be modified with the addition of a
prefix that indicates the size of the number involved. The prefixes are explained in
Table 3.7. Conversions among these units and between SI and non-SI units are found in
Table 3.8.
Manipulation of the units is typically done by multiplying or dividing them by other
units. In most equations you will be using, variables and constants having these units are
combined in various ways, and being able to cancel out units and rearrange them will
help to accurately guide you through the data analysis.

Here are some examples of how the named units break down into combinations of base
units:

\mathrm{J} = \frac{\mathrm{kg\,m^2}}{\mathrm{s^2}}\ (\text{joule}) \qquad \mathrm{N} = \frac{\mathrm{kg\,m}}{\mathrm{s^2}}\ (\text{newton}) \qquad \mathrm{Pa} = \frac{\mathrm{kg}}{\mathrm{m\,s^2}}\ (\text{pascal}) \qquad \mathrm{V} = \frac{\mathrm{kg\,m^2}}{\mathrm{s^3\,A}}\ (\text{volt})
An examination of the equations that produce values with these units will reveal the
logical arrangement of the base units.
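One way to play with these relationships is to track base-unit exponents explicitly. A minimal Python sketch (the unit table is abbreviated and the names are made up):

```python
# Base-unit bookkeeping: each unit is a tuple of exponents (kg, m, s, A).
UNITS = {
    'J':  (1, 2, -2, 0),   # kg m^2 s^-2
    'N':  (1, 1, -2, 0),   # kg m s^-2
    'Pa': (1, -1, -2, 0),  # kg m^-1 s^-2
    'V':  (1, 2, -3, -1),  # kg m^2 s^-3 A^-1
    'm':  (0, 1, 0, 0),
}

def product(a, b):
    # Multiplying two units adds their base-unit exponents.
    return tuple(i + j for i, j in zip(a, b))

# A hidden relationship made visible: N * m = J
print(product(UNITS['N'], UNITS['m']) == UNITS['J'])  # True
```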

Keep in mind that many fields, especially engineering, use units that are non-standard in
science, so there will be many opportunities to do conversion. It is expected that all
units will be reported in standard SI format in this course.
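A conversion like those in Table 3.8 below amounts to a table lookup and a multiplication. A Python sketch with only a few of the factors (an illustration, not a general unit engine):

```python
# A few conversion factors, keyed by (from-unit, to-unit);
# the atm->Pa and in->cm factors are exact by definition.
TO_FACTOR = {
    ('atm', 'Pa'): 1.01325e5,
    ('torr', 'Pa'): 1.333224e2,
    ('cal', 'J'): 4.1868,
    ('in', 'cm'): 2.54,
}

def convert(value, frm, to):
    # Multiply by the tabulated factor; raises KeyError if the
    # unit pair is not in the table.
    return value * TO_FACTOR[(frm, to)]

print(convert(1.0, 'atm', 'Pa'))  # 101325.0
```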


Table 3.8 Unit Conversion Factors
To convert from to Multiply by
atmosphere, standard (atm) pascal (Pa) 1.013 25 E+05
atmosphere, standard (atm) kilopascal (kPa) 1.013 25 E+02
bar (bar) pascal (Pa) 1.0 E+05
calorie (cal) joule (J) 4.1868 E+00
calorie per second (cal /s) watt (W) 4.184 E+00
centimeter of mercury (0 °C) pascal (Pa) 1.333 22 E+03
centimeter of water, conventional (cmH2O) pascal (Pa) 9.806 65 E+01
centipoise (cP) pascal-second (Pa·s) 1.0 E-03
cubic foot (ft³) cubic meter (m³) 2.831 685 E-02
cubic foot per second (ft³/s) cubic meter per second (m³/s) 2.831 685 E-02
cubic inch (in³) cubic meter (m³) 1.638 706 E-05
cubic yard (yd³) cubic meter (m³) 7.645 549 E-01
cup (U.S.) cubic meter (m³) 2.365 882 E-04
cup (U.S.) liter (L) 2.365 882 E-01
cup (U.S.) milliliter (mL) 2.365 882 E+02
curie (Ci) becquerel (Bq) 3.7 E+10
day (d) second (s) 8.64 E+04
debye (D) coulomb meter (C·m) 3.335 641 E-30
degree (angle) (°) radian (rad) 1.745 329 E-02
degree Celsius (temperature) (°C) kelvin (K) T/K = t/°C + 273.15
degree Fahrenheit (temperature) (°F) degree Celsius (°C) t/°C = (t/°F - 32)/1.8
degree Fahrenheit (temperature) (°F) kelvin (K) T/K = (t/°F + 459.67)/1.8
degree Rankine (°R) kelvin (K) T/K = (T/°R)/1.8
degree Rankine (temp. interval) (°R) kelvin (K) 5.555 556 E-01
dyne (dyn) newton (N) 1.0 E-05
erg (erg) joule (J) 1.0 E-07
erg per second (erg/s) watt (W) 1.0 E-07
faraday (based on carbon 12) coulomb (C) 9.648 531 E+04
fluid ounce (U.S.) (fl oz) milliliter (mL) 2.957 353 E+01
foot (ft) meter (m) 3.048 E-01
foot (U.S. survey ft) meter (m) 3.048 006 E-01
footcandle lux (lx) 1.076 391 E+01
foot of mercury, conventional (ft Hg) pascal (Pa) 4.063 666 E+04
foot of mercury, conventional (ft Hg) kilopascal (kPa) 4.063 666 E+01
foot of water, conventional (ft H2O) pascal (Pa) 2.989 067 E+03
foot of water, conventional (ft H2O) kilopascal (kPa) 2.989 067 E+00
gallon (U.S.) (gal) liter (L) 3.785 412 E+00
gamma (γ) tesla (T) 1.0 E-09
gauss (Gs, G) tesla (T) 1.0 E-04
hour (h) second (s) 3.6 E+03
inch (in) centimeter (cm) 2.54 E+00
inch of mercury, conventional (inHg) pascal (Pa) 3.386 389 E+03
inch of mercury, conventional (inHg) kilopascal (kPa) 3.386 389 E+00
inch of water, conventional (inH2O) pascal (Pa) 2.490 889 E+02
kelvin (K) degree Celsius (°C) t/°C = T/K - 273.15
kilocalorie (kcal) joule (J) 4.184 E+03
kilocalorie per second (kcal /s) watt (W) 4.184 E+03
kilowatt hour (kWh) joule (J) 3.6 E+06
light year (l.y.) meter (m) 9.460 73 E+15
liter (L) cubic meter (m³) 1.0 E-03
mho siemens (S) 1.0 E+00
mile (mi) meter (m) 1.609 344 E+03
mile (mi) kilometer (km) 1.609 344 E+00
mile, nautical meter (m) 1.852 E+03
millimeter of mercury, conventional (mmHg) pascal (Pa) 1.333 224 E+02
millimeter of water, conventional (mmH2O) pascal (Pa) 9.806 65 E+00
minute (angle) (') radian (rad) 2.908 882 E-04
minute (min) second (s) 6.0 E+01
ounce (avoirdupois) (oz) gram (g) 2.834 952 E+01
ounce (troy or apothecary) (oz) gram (g) 3.110 348 E+01

ounce (U.S. fluid) (fl oz) milliliter (mL) 2.957 353 E+01
parsec (pc) meter (m) 3.085 678 E+16
pint (U.S. dry) (dry pt) liter (L) 5.506 105 E-01
pint (U.S. liquid) (liq pt) liter (L) 4.731 765 E-01
pound (avoirdupois) (lb) kilogram (kg) 4.535 924 E-01
pound (troy or apothecary) (lb) kilogram (kg) 3.732 417 E-01
psi (pound-force per square inch) (lbf/in²) pascal (Pa) 6.894 757 E+03
psi (pound-force per square inch) (lbf/in²) kilopascal (kPa) 6.894 757 E+00
rad (absorbed dose) (rad) gray (Gy) 1.0 E-02
roentgen (R) coulomb per kilogram (C/kg) 2.58 E-04
tablespoon milliliter (mL) 1.478 676 E+01
teaspoon milliliter (mL) 4.928 922 E+00
torr (Torr) pascal (Pa) 1.333 224 E+02
watt hour (Wh) joule (J) 3.6 E+03
watt per square centimeter (W/cm²) watt per square meter (W/m²) 1.0 E+04
watt per square inch (W/in²) watt per square meter (W/m²) 1.550 003 E+03
watt second (Ws) joule (J) 1.0 E+00
yard (yd) meter (m) 9.144 E-01
year (365 days) second (s) 3.1536 E+07

(Caption notes: 1. Table culled from the online version of the CRC Handbook of Chemistry and
Physics, 88th Ed., Lide, D. R., Ed.-in-Chief, 2007, pp. 1-23 to 1-33. 2. Boldface factors are
exact (no uncertainty); where fewer digits are given, extra precision is not necessary.
3. Italicized entries indicate non-standard units that are acceptable for use by NIST (National
Institute of Standards and Technology). 4. NIST Special Publication 811 is highly
recommended for learning proper usage of units. It is available online, in the Lab Library, and
on reserve in the Mallet library.)

Skoog, D.A., Leary, J.J., Principles of Instrumental Analysis, 4th Ed., Saunders College,
Ft. Worth, 1992.
Campbell, S., Laboratory Manual for Chem 303L and 304L, Louisiana State University in
Shreveport, 1998. (This is the primary reference for this chapter, though all
references were heavily used in conglomerate.)
Skoog, D.A., West, D.M., Holler, F.J., Fundamentals of Analytical Chemistry, 5th Ed.,
Saunders College Publishing, New York, 1988.
Garland, C.W., Nibler, J.W., Shoemaker, D.P., Experiments in Physical Chemistry, 7th Ed.,
McGraw-Hill, Boston, 2003.
Sime, R.J., Physical Chemistry: Methods, Techniques, Experiments, Saunders College
Publishing, Philadelphia, 1990.
Halpern, A.M., Experimental Physical Chemistry: A Laboratory Textbook, 2nd Ed., Prentice
Hall, Upper Saddle River, NJ, 1997.
Unknown; heard or read someplace. Treat as an anecdotal story.