$y = \dfrac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/2\sigma^2}$   (3.3)
Because the amount of deviation of each point from the mean is indicative of the
precision of the experiment, analyzing means and deviations is very important. A key
term to introduce here is standard deviation. It is, in a sense, the average deviation of all
data points. A helpful way to look at a range of values is by determining how many
standard deviations they are from the mean. Thus, we look at say one standard deviation
above and below the mean, or two above and below, and so on. Statistically, the area
under the curve between one standard deviation above and below the mean is 68.3% of
the entire curve. In other words, any random data point has a 68.3% chance of being
within one standard deviation of the mean. Two standard deviations encompass 95.5%,
three encompass 99.7%, and so on. This is known as the "three-sigma rule," since sigma (σ) is the symbol associated with standard deviation, as we'll see next. In Figure 3.1, ±3σ is
off the graph. The value of 2σ has a special notation of its own.
These values are important because they define certain common confidence limits in our data. To operate at the 95% confidence level, one is asserting that 95% of the time a given measurement will fall within about 2σ of the mean (1.96σ, to be exact). A bit more will be said about this later.
Figure 3.1 Histogram of Binder Clip Masses (excluding outlier) with Normal Error Curve Superimposed
CH154K PChem Lab Manual Page (xi)
Part of this analysis involves distinguishing between the means and deviations for the
population and those of the sample. A whole branch of statistics deals with the
differences between the population and the sample, and treatment depends on how big
or small the sample is and whether or not it is truly representative of the population.
Thus, we have the population mean (μ), the population standard deviation (σ), the sample mean (x̄), and the sample standard deviation (s). There is also a quantity called variance (including population variance (σ²) and sample variance (s²)), which statisticians prefer over standard deviation, so it is included here for completeness.
3.2.2.2 Mean and Deviations
To facilitate this discussion, let's first acknowledge that graphically calculating the mean and standard deviation every time is not the ideal way to do this. It would be much easier if we could just take a formula, plug in data values, and get a numerical answer. There are such formulae, and here they are to make our lives easier. The sample and population averages are

$\bar{x} = \dfrac{1}{N}\sum_{i=1}^{N} x_i$   (3.4)

and

$\mu = \lim_{N \to \infty} \bar{x}$   (3.5)
whereas the sample and population standard deviations are

$s = \sqrt{\dfrac{\sum_i (x_i - \bar{x})^2}{N-1}} = \sqrt{\dfrac{1}{N-1}\left(\sum_i x_i^2 - \dfrac{1}{N}\Bigl(\sum_i x_i\Bigr)^2\right)}$   (3.6)

and

$\sigma = \sqrt{\dfrac{\sum_i (x_i - \mu)^2}{N}}$   (3.7)
There are several things to note about these equations, especially Eqn 3.6. The
population values are somewhat theoretical since we do not usually have a way to
measure an entire population. Note that the $x_i - \bar{x}$ term in Eqn 3.6 is the mathematical definition of deviation; therefore, the standard deviation is essentially the average deviation over the range. For the sample standard deviation, we divide by N−1 because calculating the sample average uses one of the degrees of freedom. (This makes more sense if you view a data set as an array or matrix of data: you use up a dimension of flexibility when you perform a calculation on the set and use that result in another calculation on the set.) Note that the second form of Eqn 3.6 is more computationally involved, but easier to program into a spreadsheet (though most spreadsheets calculate it as a standalone function).
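Both forms of Eqn 3.6 can be checked against each other in a few lines of code. The sketch below (with a hypothetical set of clip masses, not data from the manual) computes the sample mean per Eqn 3.4 and the standard deviation both ways:

```python
import math

def sample_stats(data):
    """Sample mean (Eqn 3.4) and standard deviation both ways (Eqn 3.6)."""
    n = len(data)
    mean = sum(data) / n                                            # Eqn 3.4
    # Definitional form: squared deviations from the mean, N-1 degrees of freedom
    s_def = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    # Computational form: running sums of x and x^2 (spreadsheet-friendly)
    s_comp = math.sqrt((sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1))
    return mean, s_def, s_comp

masses = [8.45, 8.12, 8.69, 8.31, 8.55, 8.40]   # hypothetical clip masses, g
mean, s_def, s_comp = sample_stats(masses)
```

The two forms agree to floating-point precision, which is a useful sanity check when programming either into a spreadsheet.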
As you can see from the symbols for variance, they are just the squares of the standard deviations and can be obtained easily from the above formulas. The Relative Standard Deviation (RSD) is analogous to the percent error calculations done many times in your past, and should be learned and retained, as it will likely show up again in your life because it is quick to calculate and easy to use:

$\mathrm{RSD} = 10^z\left(\dfrac{s}{\bar{x}}\right) \approx 10^z\left(\dfrac{\sigma}{\mu}\right)$   (3.8)
where z is an integer, usually 2 or 3. If z=2, then the RSD is expressed as a percentage,
and has the special moniker, the coefficient of variation. If it is 3, then the RSD is
expressed as parts per thousand, and so on.
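Eqn 3.8 is a one-liner in code. A minimal sketch, using the clip statistics quoted later in the section (s = 0.28 g, x̄ = 8.45 g):

```python
def rsd(s, mean, z=2):
    """Relative standard deviation (Eqn 3.8): 10**z * s / mean.
    z=2 gives a percentage (the coefficient of variation);
    z=3 gives parts per thousand."""
    return 10 ** z * s / mean

cv  = rsd(0.28, 8.45)        # coefficient of variation, percent
ppt = rsd(0.28, 8.45, z=3)   # same spread expressed in parts per thousand
```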
Note that while the average is probably the most common way of expressing a "middle"-type value for a dataset, it is not the only one. There are other types of "midpoint" values that are used, but they are typically much less important in physical chemistry. As
mentioned earlier, synonyms for average are mean or arithmetic mean. All three
describe the same thing in common usage. There is also a geometric mean, which will
not be discussed.
Another value is the median, which is literally the "middle" value (i.e., the third value in a list of five, etc.). There is no calculation other than counting which value in a series is
the middle one. If there is an even number of data points, the median is the average of
the middle two. Typically, it is assumed that the series is ordered low to high.
Next in our mediocre rogues' gallery is the "midrange," which very simply is the average of just the highest and lowest values:

$\bar{x}_{MR} = \dfrac{x_{max} + x_{min}}{2}$   (3.9)
The last member in this lineup is the "mode," or the value in a series that occurs with the
greatest frequency. This value may truly be far from the average. For example, given the
dataset x:{1,5,3,8,2,2,9,10}, the average is 5, the median is 4, the midrange is 5.5, and
the mode is 2.
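These four central values can be computed directly for the example dataset, reproducing the numbers in the text:

```python
from statistics import mean, median, mode

data = [1, 5, 3, 8, 2, 2, 9, 10]      # the example dataset from the text

avg = mean(data)                      # arithmetic mean
med = median(data)                    # middle two of the sorted list, averaged
mr  = (max(data) + min(data)) / 2     # midrange (Eqn 3.9)
mo  = mode(data)                      # most frequent value
```

Running this gives the values quoted above: average 5, median 4, midrange 5.5, mode 2.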
3.2.3 Rejection of Bad Data
Once the range, mean, and deviation of a data set are known, it is helpful to know if any points in the set are so far off that they can be eliminated from further consideration. Such "outliers," or discordant data, must usually be tested to be sure they are beyond usefulness. It is good to be cautious about throwing out data, as the outliers can
sometimes lead to the most interesting discoveries. Because of this, while there are a
variety of tests, it is poor practice to rely on just one, or make either too general or too
strict of a rule about discarding or keeping outliers.
3.2.3.1 Common Sense
This is probably the most dangerous of the techniques, as what may be "obviously" errant data may actually be good data. If there are one or more points that appear to be grossly out of line (or even among the "good" data) and for which there is evidence in
the lab notebook of irregularities in the collection of those values, then it is worth
considering whether to keep or discard the data. Be very judicious about this. For
example, in the test data, there was a measurement about 3.5 times the average clip
mass. In the lab notebook was the notation that this value was the mass of a large binder
clip instead of a medium binder clip. Since the experiment was to measure the mass of
medium clips, this outlier can be thrown out because it does not belong to the studied
population. That is pretty obvious.
But what about the first measurement of 9.709 g? It is heavier by over a gram than the rest of the data. Similarly, point 25, with a mass of 7.886 g, is well over half a gram from the mean and most values. Can we just drop these values? They are from a different manufacturer than any of the other clips, but the stated population of the experiment was all medium clips. As these are medium clips, their manufacturer or country of origin is irrelevant. This is all the more demonstrated by the presence of other clips by the same manufacturers with masses within normal expected values. Other tests must be used.
3.2.3.2 Standard Deviation Test
Earlier it was shown that the area under the Gaussian curve bounded by ±3σ includes 99.7% of the data; in other words, there is only a 0.3% chance of valid data being outside the ±3σ area. So one possible criterion is to throw out any data farther than 3σ from the mean. The challenge here is that the data set has to be large enough that the standard deviation is fairly well known. In our case, the mean is 8.45 g and the standard deviation is 0.28 g, so ±3σ gives a range of 7.61 g to 9.29 g. Clearly, we cannot throw out point 25, as it is within the ±3σ range. Point 1 is almost half a gram outside of the range, so it is still a candidate. What next?
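The 3σ screen is straightforward to code. A minimal sketch, using the mean and standard deviation quoted above (8.45 g and 0.28 g) and the two suspect points from the text:

```python
def three_sigma_outliers(data, mean, s):
    """Flag points farther than 3 standard deviations from the mean."""
    lo, hi = mean - 3 * s, mean + 3 * s
    return [x for x in data if x < lo or x > hi]

# Acceptance range is 8.45 +/- 0.84 g, i.e., 7.61 g to 9.29 g
flagged = three_sigma_outliers([9.709, 7.886, 8.45], 8.45, 0.28)
```

As in the text, only the 9.709 g point falls outside the range; 7.886 g survives this test.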
3.2.3.3 The Q-Test
The final common tests are statistical tests, more rigorous than the previous one. Of these, the most common is the Q-test. It calculates how far the potential outlier (x_q) is from its nearest neighbor (x_n) and divides this by the range of the entire data set to get a ratio called Q_exp. Then, based on the number of measurements (N) and the confidence level you are operating under (typically 90%, 95%, or 99%), a table is consulted, and if Q_exp > Q_crit (the critical value), then the point can be excluded with a confidence equal to the level selected. A version of the table is given in Table 3.2.

$Q_{exp} = \dfrac{|x_q - x_n|}{R}$, compared against $Q_{crit}$   (3.10)
With 35 data points, we obtain:

$Q_{exp} = \dfrac{9.709 - 8.698}{9.709 - 7.886} = \dfrac{1.011}{1.823} = 0.554$   (3.11)

$Q_{crit}^{90\%} = 0.260, \quad Q_{crit}^{95\%} = 0.298, \quad Q_{crit}^{98\%} = 0.341, \quad Q_{crit}^{99\%} = 0.372, \quad \text{all} < Q_{exp} = 0.554$   (3.12)

Since 0.554 is greater than 0.372, we can reject the first data point with greater than 99% confidence.
Note that the Q-test table used only goes up to N = 30, not 35. However, as N increases, the Q value decreases, so we are safe in using the smaller N for our data. In fact, it is important to note that most Q-tables referenced in similar textbooks only go up to N = 10 at most. This gives you a certain amount of confidence that the Q-test is reasonably valid for fairly small sample sizes. Note that the Q-test is allowed but not recommended for sample sizes N < 4, as there are too few points to have any confidence that one is an outlier apart from a priori evidence written at the time in the lab notebook (per Section 3.2.3.1).
While there are other statistical tests for outliers, this one will serve most needs in this
course.
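The Q statistic of Eqn 3.10 is easy to compute once the suspect point, its nearest neighbor, and the extremes of the data set are identified. A sketch using the numbers from the worked example:

```python
def q_statistic(suspect, nearest, xmax, xmin):
    """Q_exp (Eqn 3.10): gap to the nearest neighbor over the full range."""
    return abs(suspect - nearest) / (xmax - xmin)

# Worked example: suspect 9.709 g, nearest neighbor 8.698 g,
# data range from 7.886 g to 9.709 g
q = q_statistic(9.709, 8.698, 9.709, 7.886)
reject_99 = q > 0.372     # Q_crit at 99% confidence from the example (Eqn 3.12)
```

Since `q` exceeds the 99% critical value, the point is rejected, matching the conclusion in the text.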
3.2.4 Validation and Calibration
Note that the fact that Clip 1 can be discarded may call into question the entire first data
set, which is an equal number of measurements of just that clip. If we can throw out its
value, is the data taken just from that clip worth anything? Again, it depends. The thirty-six replicate measurements of Clip 1 do show several useful things. They prove the relative precision of the balance, which shows that variations in mass beyond that of a milligram are truly significant. In other words, σ for Clip 1's replicate measures is 0.001 g, while σ for the larger clip sample is 0.278 g, almost 300 times greater. Therefore, the variation in the
masses is real, and not subject to a limitation in the balance. Indeed, it shows that having
an even more precise balance would not give any more real information about the clip
masses. Even though Clip 1 is excluded from the bulk data, the measurements taken just
with that clip are useful because they answer different but related questions. In other
words, the Clip 1 experiment shows more about the balance than about binder clips,
and, in fact, serves to validate the use of the balance in the larger experiment. Another
way to validate the balance would be to take triplicate (or more) measurements of each
clip, average those and then use that average mass in the larger data set. This can be
much more tedious as many more calculations are involved. By validating the balance
ahead of time, subsequent measurements can be taken with confidence in the precision
of the balance.
This validation only serves to verify the precision of the balance, but says nothing about its accuracy. To verify accuracy requires calibration: using the balance to measure an object whose mass is very well known or defined. If we were to put a certified 10.000 g mass on the balance, it would reveal how accurate the balance is and whether it needs maintenance. By repeatedly measuring the mass, both accuracy and precision could be verified. This calibration should be done on a regular basis, according to the schedule in the instrument manual. Verification may be done more frequently, and is best done with an object similar in mass to the objects being investigated. Typically, only three to ten replicates are needed for a well-maintained balance; thirty-six is a bit of overkill, but I trust I have made my point.
NB: You may only use a Q-test to eliminate one errant data point. It is not valid for multiple outliers.
Table 3.2 Q-Test Critical Values (Rorabacher, D. B., Anal. Chem. 63 (1991), 139.) (Note: highlighted row used in example calculation)
Confidence Level
N 90% 95% 98% 99%
3 .941 .970 .988 .994
4 .765 .829 .889 .926
5 .642 .710 .780 .821
6 .560 .625 .698 .740
7 .507 .568 .637 .680
8 .554 .615 .683 .725
9 .512 .570 .635 .677
10 .477 .534 .597 .639
11 .576 .625 .679 .713
12 .546 .592 .642 .675
13 .521 .565 .615 .649
14 .546 .590 .641 .674
15 .525 .568 .616 .647
16 .507 .548 .595 .624
17 .490 .531 .577 .605
18 .475 .516 .561 .589
19 .462 .503 .547 .575
20 .450 .491 .535 .562
21 .440 .480 .524 .551
22 .430 .470 .514 .541
23 .421 .461 .505 .532
24 .413 .452 .497 .524
25 .406 .445 .489 .516
26 .399 .438 .482 .508
27 .393 .432 .475 .501
28 .387 .426 .469 .495
29 .381 .419 .463 .489
30 .376 .414 .457 .483
3.2.5 Standard Deviation of the Mean
If, instead of measuring one clip many times, we took the other path and weighed each clip multiple times, then the mass of each clip could be given as a mean ± a standard deviation. Then, when data from all thirty-plus clips are combined, we obtain a mean of all the means and a standard deviation based on the spread of the means. This is called the standard deviation of the mean or standard error of the mean, given as σ_m. It is related simply to σ by the following:

$\sigma_m = \dfrac{\sigma}{\sqrt{N}} \quad \text{or} \quad s_m = \dfrac{s}{\sqrt{N}}$   (3.13)
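Eqn 3.13 is a one-line calculation. A sketch using the bulk clip statistics from earlier in the section (s = 0.28 g, N = 35):

```python
import math

def std_error_of_mean(s, n):
    """Standard deviation of the mean (Eqn 3.13): s / sqrt(N)."""
    return s / math.sqrt(n)

sem = std_error_of_mean(0.28, 35)   # roughly 0.047 g for the clip data
```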
3.3 Error: Not Really Wrong
Error is the technical term for uncertainty. It is not a moral judgment or an indication of a mistake. However, one source of error can be mistakes in performing the experiment. Basically, everything said previously about uncertainty applies to error.
The challenge for any scientist is to accurately assess sources of error and their magnitude. Three mistakes students make are (1) to list every conceivable source of error in this universe or any other, regardless of likely impact; (2) to misassign the impact of real errors, that is, too much influence on a minor source and not enough on a major source; and (3) to assign all error to the experimenter's technique. This section will try to help establish some perspective.
Typically, error comes from inherent limitations in experimental technique. Therefore,
the ability to evaluate each part of an experiment for its weaknesses and limitations, and
how to quantify them, is important. Equally important is the ability to determine how
limitations from different parts of the experiment interact: do they reinforce and maximize error, or do they offset each other, at least partially? This is where the idea of
error propagation comes in. Statisticians have developed equations to calculate the
maximum probable error and also the maximum error. The first allows for some
offsetting of error, while the second assumes all error is additive. Understanding both is
helpful in refining procedures as the goal is to minimize error and to design experiments
where they cancel each other as much as possible.
3.3.1 Types of Error
It is now appropriate to discuss the types of error: gross, systematic, and random. Gross error is simply user error: actual mistakes. These need to be avoided and/or eliminated.
3.3.1.1 Systematic Error
Systematic error is uncertainty in a measurement due to a constant effect. In other
words, a speedometer that reads 5 miles per hour fast exhibits a systematic error of +5
mph. Systematic errors are caused by poor calibration, influence of a constant external
contaminant, or by a consistent mistake by the experimenter. A good experiment design
eliminates known systematic errors and looks for methods to reveal those that are
unknown. Thus, the best way to uncover and remove these determinate errors is by
using two different methods to obtain the data. Such orthogonal methods lead to the
same information from paths completely independent of each other. This acts as a
check. If the results are basically the same, then determinate errors are probably
minimized. If they are significantly different, then both methods must be examined
carefully, and every assumption reassessed.
Systematic errors have the advantage of being invisible when taking difference measurements or similar. If all measurements are off by the same amount, then when subtracting one measurement from another, the systematic error is subtracted out. There are also systematic errors that are not constant but occur in a constant proportion. For example, a speedometer that has a 10% systematic error is always off by 10%: at 10 mph it reads 11, but at 100 mph it reads 110. Regardless, the error is always in the same direction (i.e., it doesn't read fast sometimes and slow other times) and the effect is constant in some way. This is known as instrumental bias or error. Thus, synonyms for systematic error are determinate error or bias.
Systematic errors tend to reflect errors in experimental accuracy, because they take what
would otherwise be accurate data and shift the results by the magnitude of the error,
Table 3.3 Tolerances for Volumetric Glassware (ASTM E288-06, "Standard Specification for Laboratory Glass Volumetric Flasks," and ASTM E969-02(2007), "Standard Specification for Glass Volumetric (Transfer) Pipets") (Class A items are marked and made to higher tolerances. Class B may or may not be marked.)
Capacity (mL)   Flask Class A (±mL)   Flask Class B (±mL)   Pipette Class A (±mL)   Pipette Class B (±mL)
2               -                     -                     0.006                   0.012
5               0.02                  0.04                  0.01                    0.02
10              0.02                  0.04                  0.02                    0.04
25              0.03                  0.06                  0.03                    0.06
50              0.05                  0.10                  0.05                    0.10
100             0.08                  0.16                  0.08                    0.16
200             0.10                  0.20                  -                       -
500             0.20                  0.40                  -                       -
1000            0.30                  0.60                  -                       -
reducing the accuracy, but without impacting the precision. Consequently, as
contradictory as it seems, it is better for random error to be larger in magnitude than
systematic error. This is because the statistical methods discussed later in the chapter
address random error. If systematic error is larger, then the methods are far less
effective. Therefore, while we want to eliminate as much error as possible of any kind,
systematic needs to be the smaller of the two main types.
3.3.1.2 Random Error
Random error is just what it sounds like. The reading can be off by some amount that is
either higher or lower than the actual value, and the amount may vary, so it is also
known as indeterminate error. This is the effect of noise in the signal or random
inconsistencies in measurement, which is why it is important when taking measurements
to take them exactly the same way always, so that any errors in reading (personal
bias/error) are performed systematically and have the exact same impact on the data.
Since random error is a reflection of the fuzziness in our ability to take and read
measurements, it tends to be associated with the precision of the measurement. The
more precise the measure, the smaller the random error should be.
An in-lab example of increased random error would be using a graduated cylinder for a
series of dilutions instead of volumetric flasks. As shown in Table 3.3, different types of
glassware are designed to have different amounts of accuracy. Keep in mind that the
glassware must be absolutely clean for these tolerances to be accurate.
In an ideal world, the only error is random error, and it is small enough to be Heisenberg
limited. In the real world, not even all systematic error can always be eliminated.
Therefore, your job is to determine (1) what error is present, (2) what type it is, and (3)
how important that source is.
For instance, a student once said in a report for an experiment in a temperature-regulated oil bath that she had a very large error in her results because the room temperature changed by 1/10th of 1 °F over the course of 3 hours. Is this a possible source of experimental error? Yes. Is it a systematic or random error? As presented in her paper, it is likely a systematic error, unless it fluctuated back and forth by that tenth of a degree. Is it a reasonable source, especially a reasonably large source? No! An oil
Four questions to ask about error in an experiment:
(1) What sources of error are present?
(2) Are they gross, systematic, or random?
(3) Based on the procedure performed, is this source likely to have a significant impact on data?
(4) If so, how big is that impact, quantitatively?
bath, especially one that is temperature-regulated, insulates the sample from the air temperature. Unless doing highly precise work, which this course does not do, a 1/10th of a degree change is not even worth mentioning, especially over the course of 3 hours. The procedure for the oil bath experiment calls for the temperature of the bath to be changed by 30 °C or more during that same 3 hours. How could that small of a room temperature change have any kind of impact?
As you consider your experiment, certain types of error should be obvious to the
specific procedure and equipment (method bias/error). Beyond that, brainstorming is a
great way to make sure nothing has been missed. In either case, ask the four questions
above for each to determine if it is a significant source of experimental uncertainty.
It is also important when reporting your work to discuss not only the significant sources
of error, but also sources that one would expect to be significant but weren't. When
reading a lab report or journal article, a scientist is internally asking about different
sources, and if your paper does not even mention an expected source, then your results
will be questionable. Therefore, a good error analysis discusses important sources
whether or not they are significant and why/why not. Perhaps you designed the
experiment to eliminate a specific source. Say so. Perhaps some serendipitous condition
negated the effect of a common source. Explain that also. The ability to competently
analyze error is what allows major discoveries to take place from noticing small
irregularities and not ignoring them.
To sum up this section, Table 3.1 contained several notes made about irregularities.
Table 3.4 lists them and gives the type of error and significance. Note that in your
reports, this discussion should take place in the text, rather than in a table.
3.3.2 Absolute and Relative Error
Just as with precision, error is expressed either in terms of absolute error (uncertainty) or relative error (uncertainty). For example, the uncertainty in any given mass measurement in the binder clip experiment is given by the error in the scale reading. Clip 33 has a mass measured to be 8.286 g. The uncertainty in any measurement on that scale is ±0.001 g. So, in absolute terms, the mass of clip 33 is 8.286 ± 0.001 g. To find the relative error, divide the uncertainty by the mass: 0.001/8.286 = 0.00012, or 0.01%, which is expressed as 8.286 g ± 0.01%. Again, this way is more informative, because a grain of wheat may have a mass of a milligram: there the absolute error is the same, but the relative error is now 100%!
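The contrast between absolute and relative error is easy to see numerically, using the clip 33 values from the text:

```python
mass = 8.286          # g, clip 33
abs_err = 0.001       # g, balance reading uncertainty

rel_err_pct = abs_err / mass * 100      # relative error as a percentage (~0.01%)

# The same absolute error applied to a 1 mg (0.001 g) grain of wheat:
rel_err_grain = abs_err / 0.001 * 100   # relative error as a percentage
```

The same ±0.001 g reading uncertainty is negligible for the clip but is 100% of the grain's mass.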
3.3.3 Error Propagation
In section 3.3, the idea of propagating the effects of measurement error through all
stages of a data analysis was mentioned. Here we will go into more detail. It should
make sense that just recording uncertainty in measurements is merely a first step. As we
perform calculations on the data, the uncertainty does not go away, but has an impact on
the validity of each step of the calculation. To add insult to injury, since each variable
and each constant has some level of uncertainty, all of them must combine in some way
to affect the final results.
Table 3.4 Sources of Binder Clip Experiment Error and Significance

Error | Type | Significance
Did not tare | Gross | In this case, the deviation was negligible
Left balance door open | Gross | Same
Leaned hard on bench top (2x) | Gross/Systematic | The deviation was measurable and constant; worth quantifying
Manufacturer 2, Country 2 | Random | Normal data variation given our population; however, should be Q-tested to see if an outlier
LARGE Binder Clip | Gross | If testing medium clips, then using a large one is a clear mistake and its data should be rejected outright
Manufacturer 1, Country 1, used | Random | Normal data variation given our population; however, should be Q-tested to see if an outlier
Manufacturer 3, Country 1 | Random | Normal data variation
Manufacturer 2, Country 1 | Random | Normal data variation
Not wearing gloves to prevent fingerprints on clips / cleaning clips before putting on balance | Method error, probably systematic | Not significant. The clips are sufficiently massive that minor finger oils and dust are negligible. For smaller objects, this can be an issue
3.3.3.1 Partial Derivatives
Before going farther, it will be helpful to review basic partial differentiation. If you have a simple equation such as y = mx + b, then taking the derivative is pretty easy: dy/dx = m. If, on the other hand, the equation is multivariate, say, $f(x,y) = 3x^2 + 42xy^3 + e^y$, we might have a bit more of a problem. However, it turns out that, like in regular algebra, we can ignore the parts we don't like by redefining them temporarily. In this case, we take partial derivatives: take the derivative of the entire equation with respect to one variable, treating the other(s) as constants, then take the derivative with respect to each of the remaining variables, treating the previous variables as constants. To wit:

$\left(\dfrac{\partial f(x,y)}{\partial x}\right)_y = 6x + 42y^3 \quad \text{and} \quad \left(\dfrac{\partial f(x,y)}{\partial y}\right)_x = 126xy^2 + e^y$   (3.14)

A partial derivative is denoted by the ∂ character rather than the normal d. Also, the variable(s) kept constant are denoted by the subscript outside of the parentheses. This practice is a fundamental part of data and error analysis. It will likely be very helpful to have a table of derivatives and integrals handy. The CRC Handbook has a fairly extensive list of the ones most likely to be found in chemistry.
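The analytic partials in Eqn 3.14 can be verified numerically with central differences. A sketch, evaluated at an arbitrarily chosen point (x = 1.5, y = 2.0):

```python
import math

def f(x, y):
    """The example function: f(x, y) = 3x^2 + 42xy^3 + e^y."""
    return 3 * x**2 + 42 * x * y**3 + math.exp(y)

def partial(func, point, var, h=1e-6):
    """Central-difference estimate of the partial derivative with respect
    to variable index `var`, holding the other variables constant."""
    lo, hi = list(point), list(point)
    lo[var] -= h
    hi[var] += h
    return (func(*hi) - func(*lo)) / (2 * h)

x, y = 1.5, 2.0
df_dx = partial(f, (x, y), 0)    # analytic: 6x + 42y^3  -> 345
df_dy = partial(f, (x, y), 1)    # analytic: 126xy^2 + e^y
```

Both numerical estimates match the analytic forms in Eqn 3.14 to several decimal places.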
3.3.3.2 Maximum Propagated Error
At the very minimum, it is reasonable to expect that a result cannot be any more certain than the variable with the largest error. To be cynical, we could expect all errors to combine in the worst possible way, known as the maximum propagated error. For an arbitrary physical phenomenon h that depends on variables x, y, and z, we have

$h = f(x,y,z) \quad \text{and} \quad dh = \left(\dfrac{\partial h}{\partial x}\right)_{yz} dx + \left(\dfrac{\partial h}{\partial y}\right)_{xz} dy + \left(\dfrac{\partial h}{\partial z}\right)_{xy} dz$   (3.15)
For example, the Coulombic interaction between an electron and a helium nucleus separated by 1.00 ± 0.01 pm is found by using Coulomb's Law:

$F = \dfrac{1}{4\pi\varepsilon_o}\dfrac{q_1 q_2}{r^2}$   (3.16)

leading to

$dF = \dfrac{\partial F}{\partial \pi}\,d\pi + \dfrac{\partial F}{\partial \varepsilon_o}\,d\varepsilon_o + \dfrac{\partial F}{\partial q_1}\,dq_1 + \dfrac{\partial F}{\partial q_2}\,dq_2 + \dfrac{\partial F}{\partial r}\,dr$   (3.17)
or

$dF = \left(-\dfrac{q_1 q_2}{4\pi^2\varepsilon_o r^2}\right) d\pi + \left(-\dfrac{q_1 q_2}{4\pi\varepsilon_o^2 r^2}\right) d\varepsilon_o + \left(\dfrac{q_2}{4\pi\varepsilon_o r^2}\right) dq_1 + \left(\dfrac{q_1}{4\pi\varepsilon_o r^2}\right) dq_2 + \left(-\dfrac{q_1 q_2}{2\pi\varepsilon_o r^3}\right) dr$   (3.18)
Note that, according to Table 3.5, the electric constant is 8.854187817×10⁻¹² F/m and that it is defined to be exact. Since it is an exact number, its uncertainty is 0 (zero), so the second term in Equation 3.18 is zero and can be dropped. For pi, the uncertainty is taken to be ±1 in whatever last decimal place we use (i.e., 3.14 ± 0.01). In this case, q₁ is equal to the charge on an electron, which is the elementary charge, given as 1.60217653×10⁻¹⁹ C with a relative uncertainty of 8.5×10⁻⁸, so it would probably be better to state pi to ten to twelve places. Note also that relative uncertainties must be
Table 3.5 Fundamental Physical Constants with Associated Relative Uncertainties
(Caption note: pasted from http://physics.nist.gov/cgi-bin/cuu/Category?view=gif&Universal.x=87&Universal.y=12, accessed June 16, 2008)*
* Note that the Gas Constant value and uncertainty is R = 8.314471 ± 0.000014 J/(mol·K). To convert to other units will require propagation of the uncertainty through the conversion. (Reference: http://properties.nist.gov/fluidsci/metrlgy.html, accessed June 22, 2008)
converted back to absolute uncertainties. The helium nucleus contains two protons, so q₂ = 2e. The distance and its uncertainty are given above. Putting everything together, we get (using pi to ten places and dropping its uncertainty term):
$F = \dfrac{(1.60217653\times10^{-19}\,\text{C})(2\times1.60217653\times10^{-19}\,\text{C})}{4(3.1415926535898)(8.854187817\times10^{-12}\,\text{F/m})(1.00\times10^{-12}\,\text{m})^2}$   (3.19)

$F = \dfrac{5.13393927\times10^{-38}\,\text{C}^2}{(1.11265006\times10^{-10}\,\text{F/m})(1.00\times10^{-24}\,\text{m}^2)}$   (3.20)

$F = 4.61415450\times10^{-4}\,\text{N}$   (3.21)

I leave the resolving of the units for the reader. Now for the uncertainty:
$dF = 0 + \left(\dfrac{2(1.60217653\times10^{-19}\,\text{C})}{(1.11265006\times10^{-10}\,\text{F/m})(1.00\times10^{-12}\,\text{m})^2}\right)(8.5\times10^{-27}\,\text{C}) + \left(\dfrac{1.60217653\times10^{-19}\,\text{C}}{(1.11265006\times10^{-10}\,\text{F/m})(1.00\times10^{-12}\,\text{m})^2}\right)(8.5\times10^{-27}\,\text{C}) + \left(\dfrac{(1.60217653\times10^{-19}\,\text{C})(2\times1.60217653\times10^{-19}\,\text{C})}{(5.56325028\times10^{-11}\,\text{F/m})(1.00\times10^{-12}\,\text{m})^3}\right)(1.00\times10^{-14}\,\text{m})$   (3.22)

$dF = 0 + 2.44793956\times10^{-11} + 1.22396978\times10^{-11} + 9.228309007\times10^{-6} = 9.228345726\times10^{-6}\,\text{N}$   (3.23)
This gives a final answer of

$F = 4.61415450\times10^{-4} \pm 9.228345726\times10^{-6}\,\text{N}$   (3.24)

$F = 4.614\times10^{-4} \pm 9.2\times10^{-6}\,\text{N}$   (3.25)

$F = (4.614 \pm 0.092)\times10^{-4}\,\text{N}$   (3.26)
3.3.3.3 Relative Error
While the above is a typical way of expressing the error, another way is the relative error, which is

$\dfrac{dF}{F} = \dfrac{9.228345726\times10^{-6}}{4.61415450\times10^{-4}} \times 100\% = 2\%$   (3.27)
This amount is reasonable, but it is still helpful to evaluate the experimental procedure
to see if increased precision can be obtained to lower the uncertainty.
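The maximum propagated error calculation above (Eqns 3.19–3.27) can be reproduced in a few lines, using the exact partial derivatives of Coulomb's Law and the uncertainties from the text:

```python
import math

# Values from the worked example
q1  = 1.60217653e-19      # C, elementary charge (electron, magnitude)
q2  = 2 * q1              # C, helium nucleus (two protons)
eps = 8.854187817e-12     # F/m, electric constant (exact, so its d-term is 0)
r   = 1.00e-12            # m (1.00 pm)
dq  = 8.5e-27             # C, absolute charge uncertainty used in the text
dr  = 1.00e-14            # m (0.01 pm)

F = q1 * q2 / (4 * math.pi * eps * r**2)

# Maximum propagated error: all partial-derivative terms taken as additive
dF = ((q2 / (4 * math.pi * eps * r**2)) * dq
      + (q1 / (4 * math.pi * eps * r**2)) * dq
      + (2 * q1 * q2 / (4 * math.pi * eps * r**3)) * dr)
```

This reproduces F ≈ 4.614×10⁻⁴ N and dF ≈ 9.23×10⁻⁶ N, i.e., about 2% relative error, as found above.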
3.3.3.4 Probable Propagated Error
The maximum propagated error assumes that all of the errors are additive, but in reality it can be expected that some errors will offset, so that the actual uncertainty is somewhere between the maximum propagated error and the minimum error (assumed to be the largest single uncertainty in the variables). The probable propagated error actually depends on the mathematical operations of the function, and for complex functions requires an iterative process. The key requirement is that all variables MUST be independent and the uncertainties random.
Note that in y = abc, a, b, and c are all independent. In y = ax², a and x are independent from each other, but when the function is expressed as y = a·x·x, we can see that x is not independent of itself; that is where Section 3.3.3.4.3 will come in.
3.3.3.4.1 Additive/Subtractive Functions
If f(a,b,c) = 2a − b + 3c, then the uncertainty in the function is

$\partial f = \sqrt{(2\,da)^2 + (db)^2 + (3\,dc)^2}$   (3.28)

assuming that da, db, and dc are the absolute uncertainties of a, b, and c. This method is known as the quadrature rule.
3.3.3.4.2 Multiplicative/Divisive Functions
For a function where the variables combine to form products and/or quotients, with known absolute uncertainties in each variable, we can use a similar quadrature formula. For

$f(a,b,c) = \dfrac{a}{bc}, \quad \text{then} \quad \dfrac{df}{f} = \sqrt{\left(\dfrac{da}{a}\right)^2 + \left(\dfrac{db}{b}\right)^2 + \left(\dfrac{dc}{c}\right)^2}$   (3.29)
3.3.3.4.3 Exponential Functions
For functions where a variable is raised to a power, such as f(a) = aᶻ, the uncertainty is given by

$\dfrac{df}{f} = z\,\dfrac{da}{a}$   (3.30)
3.3.3.4.4 Log/Antilog Functions
Next, consider functions such as f(x) = log x. The uncertainty in these is simple:

$df = 0.434\,\dfrac{dx}{x}$   (3.31)
where the dx/x term is simply the relative error in the variable. For natural logs, the
formula is the same, but without the 0.434 factor.
The formulas for antilogs are as follows:

$f(x) = 10^x: \quad \dfrac{df}{f} = 2.303\,\partial x$   (3.32)

$f(x) = e^x: \quad \dfrac{df}{f} = \partial x$   (3.33)
3.3.3.4.5 Trigonometric Functions
There are many trig functions, and as the error functions are basically derivatives of
them, only one will be presented here, with the rest obtained from a table of trig
derivatives. Keep in mind that these formulas work explicitly only for values in radians,
not degrees.
f(x) = sin x:   df = cos(x)·dx   (3.34)
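All of the single-variable rules above (power, log, antilog, trig) are just df = |f′(x)|·dx in disguise. A short numerical check, with the function, test point, and step size chosen by me for illustration, confirms each rule:

```python
import math

def propagated(f, x, dx, h=1e-6):
    """Numerical |f'(x)|*dx via a central difference, for checking the rules."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return abs(deriv) * dx

x, dx = 2.0, 0.01
# power rule (3.30): df/f = z*(dx/x) for f = x^3
assert math.isclose(propagated(lambda t: t**3, x, dx) / x**3, 3 * dx / x, rel_tol=1e-6)
# log rule (3.31): df = 0.434*(dx/x) for f = log10(x); 0.434 is 1/ln(10) rounded
assert math.isclose(propagated(math.log10, x, dx), 0.434 * dx / x, rel_tol=1e-3)
# trig rule (3.34): df = cos(x)*dx for f = sin(x)
assert math.isclose(propagated(math.sin, x, dx), abs(math.cos(x)) * dx, rel_tol=1e-6)
```

The looser tolerance on the log check reflects the rounded 0.434 factor in the rule.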
3.3.3.4.6 Normal Functions Often Combine One or More of These Functions
When a needed equation uses several of the above operations in it, the complexity
increases, but it is still manageable. The trick is to handle each type of calculation
individually, then use the result in each subsequent calculation. To illustrate this, use our
Coulombic attraction example, which has an exponent and several products and
quotients.
F = (1/(4πε₀))·(q₁q₂/r²)   (3.35)
To get the uncertainty in F, let's first redefine 1/r² to be A:

F = (1/(4πε₀))·q₁q₂·(1/r²) = (1/(4πε₀))·q₁q₂·A   (3.36)
This leads to

dF/F = √[(dq₁/q₁)² + (dq₂/q₂)² + (dε₀/ε₀)² + (dA/A)²]   (3.37)

For A = 1/r², dA = −(2/r³)·dr, so dA/A = −2·(dr/r)   (3.38, 3.39)

which gives

dF/F = √[(dq₁/q₁)² + (dq₂/q₂)² + (dε₀/ε₀)² + (2·dr/r)²]   (3.40)

Substituting q₁ = 1.60217653×10⁻¹⁹ C and q₂ = 2×1.60217653×10⁻¹⁹ C (each with an
uncertainty of 8.5×10⁻²⁷ C), ε₀ = 8.854187817×10⁻¹² (exact, so dε₀ = 0), r = 1.00×10⁻¹² m,
and dr = 1×10⁻¹⁴ m:

dF/F = √[(8.5×10⁻²⁷/1.60217653×10⁻¹⁹)² + (8.5×10⁻²⁷/(2×1.60217653×10⁻¹⁹))² + 0²
        + (2×(1×10⁻¹⁴)/(1.00×10⁻¹²))²]   (3.41)

dF/F = √[(5.305283058×10⁻⁸)² + (2.652641529×10⁻⁸)² + 0 + 4×(1×10⁻²)²]   (3.42)

dF/F = √[2.81460283×10⁻¹⁵ + 7.03650708×10⁻¹⁶ + 0 + 4×10⁻⁴] = 2×10⁻² = 2% relative error   (3.43)

Note that even though ε₀ is exact, we do not ignore it, but include it, giving it an
uncertainty of 0.
The probable propagated error should never be larger than the maximum, and the more
independent variables in the calculation, the greater the difference between the two
methods, in general. In this case, r is not independent, and it had the largest error,
resulting in the negligible difference between maximum and probable errors.
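The numerical substitution above can be verified in a few lines (the variable names are mine; the values are those used in the worked example):

```python
import math

# values used in the worked Coulomb example above
q1, dq1 = 1.60217653e-19, 8.5e-27      # C
q2, dq2 = 2 * 1.60217653e-19, 8.5e-27  # C
eps0, deps0 = 8.854187817e-12, 0.0     # exact constant, so uncertainty 0
r, dr = 1.00e-12, 1e-14                # m

# equation 3.40: the r term carries a factor of 2 because F depends on r^2
rel_F = math.sqrt((dq1/q1)**2 + (dq2/q2)**2 + (deps0/eps0)**2 + (2*dr/r)**2)
print(f"{rel_F:.0%}")  # 2%
```

The charge terms (of order 10⁻¹⁵ when squared) are completely swamped by the 4×10⁻⁴ contribution from r, exactly as the text notes.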
3.3.3.5 Combining Systematic and Random Error
If a scientist is successful in eliminating all systematic error, then the only error that is a
factor is the random error. This is highly unusual, especially in the undergraduate lab.
The final bit of the propagation puzzle is to then propagate the systematic errors (which
must keep the sign of each error since it does not randomly fluctuate about the value),
and use the quadrature equation to combine the systematic and random errors to give the
total error:
de_tot = √[(de_random)² + (de_syst)²]   (3.44)
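A one-line helper for equation 3.44 (the function name and example numbers are my own):

```python
import math

def total_error(de_random, de_syst):
    """Equation 3.44: combine random and systematic error in quadrature."""
    return math.sqrt(de_random**2 + de_syst**2)

# e.g. a random error of 0.03 and a systematic error of 0.04 combine to about 0.05
```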
3.4 Graphing: A Picture Is Worth 1000 Words, IF It Is Clear
A graph is a pictorial display of your data. It is designed to display clearly the
relationships of the investigated variables (linearity or not, maxima/minima, inflection
points). Just as the histogram provided the graphical basis for means and uncertainties
for raw data, an x-y graph (often called a scatter plot) can do the same thing for both raw
and calculated data, even showing how well the data support the apparent trends. In
some cases, graphical analyses are easier to perform than rigorous numerical
analyses, and even though both should lead to the same results, the graphical methods
tend to be less accurate. This section will therefore discuss proper formatting of graphs
and then using them to show the significance of your data analysis via a method known
as least squares analysis.
3.4.1 Graph Format
First, ALL graphs must be produced on the computer. The computer can make a
graph faster and more accurately from data than is possible by hand. It is also able to
update the graph in real time as the parameters are manipulated.
Key formatting features:
 Descriptive but concise title (in several reports you will have to make numerous
similar graphs, so make sure that each is easily identified in English rather than
jargon or code)
 Proper labels on axes ("x" or "y" alone is a no-no; use a descriptive title with words,
and in parentheses give any symbolic notation and units)
 DO display the trendline. Include the equation and R² value. Position it in a
blank area of the graph rather than in the margins, so the graph can be as large as
possible.
 Change the variables in the equation from x and y to the actual variables for
which they stand.
 Unless specifically needed, do NOT draw a line that connects the dots.
 Unless multiple data sets are shown on the same graph, do NOT display the
legend. If only one thing is being graphed, the legend is redundant.
 Do NOT force the graph's origin to (0,0). Zoom in on the data of interest so that
it fills the entire useful area of the graph. The goal is to highlight data, so make it
big.
 Do NOT have a shaded background for the graph. This is the Excel default
setting. Change it.
 Do NOT show gridlines. This is also a default and is unacceptable.
 Make all font sizes legible and match the font to the report text.
 See Figure 3.2 to compare a graph using Excel defaults and poor labeling to an
acceptable graph.
3.4.2 Least Squares Analysis
The range has limited usefulness because it says nothing about how the data are
arranged about the average. Therefore, we often look at the deviation or residual of
each value from the average:
r_i = x_i − x̄   (3.45)

However,

Σ r_i ≈ 0   (3.46)

because the residuals above the average should cancel out the residuals below the
average. Since that is not terribly helpful, there are two ways of getting around the
problem: take the absolute value of each residual,

r_i = |x_i − x̄|   (3.47)

or, more commonly, use the squared residual:

r_i² = (y_i − y)² = (y_i − (mx_i + b))²   (3.48)
The last equality shows that if the data can be expressed in the form of a linear equation,
then the actual y-values (dependent variable) are compared to the value obtained by
plugging the corresponding x-value (independent variable) into the equation. This is
analogous to subtracting the real value from the average in that it gives the distance of
the value from the line.
Now, we can do something useful by differentiating the sum of equation 3.48 over all
points, once with respect to m and once with respect to b, and setting each derivative to zero.
(∂Σr_i²/∂m)_b = −2·Σᵢ₌₁ᴺ (y_i − mx_i − b)·x_i = 0

(∂Σr_i²/∂b)_m = −2·Σᵢ₌₁ᴺ (y_i − mx_i − b) = 0
(3.49, 3.50)
Next, several convenient summary equations will be defined and used to rearrange the
above expressions.
Figure 3.2: Examples of a) unacceptable and b) acceptable graphs. Panel (a) uses Excel
defaults and poor labeling: the title "Viscosity vs. Concentration," unlabeled units, a
redundant legend, and the raw trendline equation y = 0.3844x + 0.7685 with R² = 0.4996.
Panel (b), "Intrinsic Viscosity of the Cleaved Polymer," plots Specific Viscosity/Conc.
(10² cm³/g) against Concentration (PVOH g/100 mL) and reports the equation in the
actual variables: Nsp/c = 0.384(Conc) + 0.768, R² = 0.499.
S_x = Σᵢ₌₁ᴺ x_i    S_x² = Σᵢ₌₁ᴺ x_i²    S_y = Σᵢ₌₁ᴺ y_i    S_y² = Σᵢ₌₁ᴺ y_i²

S_xy = Σᵢ₌₁ᴺ x_i·y_i    D = N·S_x² − (S_x)²
(3.51–3.56)

This gives us:

m = (N·S_xy − S_x·S_y)/D    b = (S_x²·S_y − S_x·S_xy)/D   (3.57, 3.58)
While these equations look a little intimidating, they are extraordinarily easy to program
into a spreadsheet. These calculations do something very special for us: they give the
slope and intercept of the straight line that passes through our data in such a way that
the sum of the squared residuals is minimized. In other words, the slope and y-intercept
in the above equations give us the line of best fit.
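A minimal sketch of these summary-sum formulas in code (the function name and sample data are mine):

```python
def least_squares(xs, ys):
    """Slope and intercept from the summary sums (equations 3.51-3.58)."""
    N = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sx2 = sum(x * x for x in xs)           # S_x-squared: sum of x_i^2
    Sxy = sum(x * y for x, y in zip(xs, ys))
    D = N * Sx2 - Sx**2
    m = (N * Sxy - Sx * Sy) / D
    b = (Sx2 * Sy - Sx * Sxy) / D
    return m, b

m, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # 2.0 1.0 for the exact line y = 2x + 1
```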
Combinations of the above formulas offer other useful information, including other
equations for m and b.
Using x̄ = S_x/N and ȳ = S_y/N, define the centered sums S_xx = S_x² − (S_x)²/N,
S_yy = S_y² − (S_y)²/N, and S_XY = S_xy − S_x·S_y/N. Then:

standard deviations in x, y:   s_x = √[S_xx/(N − 1)]    s_y = √[S_yy/(N − 1)]   (3.59, 3.60)
std. dev. of the residuals:    s_r = √[(S_yy − m²·S_xx)/(N − 2)]   (3.61)
correlation coefficient:       R = S_XY/√(S_xx·S_yy)   (3.62)
slope:                         m = S_XY/S_xx   (3.63)
uncertainty in m:              s_m = s_r/√S_xx   (3.64)
y-intercept:                   b = ȳ − m·x̄   (3.65)
uncertainty in b:              s_b = s_r·√(1/N + x̄²/S_xx)   (3.66)
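These statistics are equally easy to compute directly. The sketch below follows my centered-sum reading of equations 3.59–3.66; the function name and test data are assumptions:

```python
import math

def fit_stats(xs, ys, m):
    """Residual std dev, correlation R, and slope/intercept uncertainties."""
    N = len(xs)
    xbar, ybar = sum(xs) / N, sum(ys) / N
    Sxx = sum((x - xbar)**2 for x in xs)
    Syy = sum((y - ybar)**2 for y in ys)
    Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sr = math.sqrt((Syy - m**2 * Sxx) / (N - 2))   # std. dev. of residuals
    R = Sxy / math.sqrt(Sxx * Syy)                 # correlation coefficient
    sm = sr / math.sqrt(Sxx)                       # uncertainty in slope
    sb = sr * math.sqrt(1 / N + xbar**2 / Sxx)     # uncertainty in intercept
    return sr, R, sm, sb

# for data lying exactly on a line, sr = 0 and R = 1
sr, R, sm, sb = fit_stats([0, 1, 2, 3], [1, 3, 5, 7], 2.0)
```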
The correlation coefficient is a value that ranges from −1 to +1 and gives an indication
of how closely the points fit the line. R = 0 indicates no correlation, whereas R = ±1
shows perfect correlation. Often a value very close to 1 will be obtained (e.g.,
0.99954); these should be reported to the second digit that is not a 9. Note that these
values can be deceptively high. In general, the correlation coefficient is most informative
when it is far from one. If it is close to one, that is nice, but do not
attach too much significance to it.
One of the main advantages of a least squares line is that it allows interpolation: a
y value can be calculated for any given x (or vice versa). This is a main function
of calibration plots.
In your reports, you do not need to show a sample calculation for least squares fitting.
Just use the results. For the homework assignment, you will need to show your work.
Some points to consider during graphical analysis:
To program a spreadsheet to do these calculations, all that is needed is: an x column;
a y column; a column that squares x (=x*x); a column that multiplies x and y (=x*y);
and a column that squares y (=y*y or =y^2). The next step is to have a cell at the
bottom of each column with the formula =SUM(x1:xn). Finally, have cells with
formulas for N, D, m, and b (for N, =COUNT(x1:xn)).
 You will almost never have an R = 1, which would mean every data point fit exactly on
the trendline. (I did get an R = 1 once.)
 Choose the appropriate equation to graph based on your data, and arrange it into
a linear form (if possible) using the variables assigned to your data. Assign the X
variable to the data stream that you varied (the independent variable: time,
temperature, pressure) and the Y variable to the other stream (the results you
obtained from varying the X stream). Keep in mind that the variables may be
inverses, natural logs, and so on.
 If you cannot get a linear equation, that is okay, but you will need to read your
textbook regarding how to do a nonlinear regression.
 Plot your data (with X on the horizontal axis and Y on the vertical), and get your
trendline and equation.
 Remember that slope and intercept have units. The intercept is in the same units
as any other y-value, and the slope is in y-units/x-units.
As an example of transforming a nonlinear equation to a linear one, look at the
relationship between viscosity and temperature.

1/η = A·exp(−E_a/RT)   (3.67)

In the experiment, viscosity (η) is measured as temperature (T) changes, so those are our
Y and X variables, respectively. As this is an exponential equation, we cannot simply
rearrange it into slope-intercept form. We must first take the natural log of both sides.

ln(1/η) = ln A − (E_a/R)·(1/T)
   y    =  b  +  m·x
(3.68, 3.69)

So x = 1/T and y = ln(1/η), with an intercept b equal to ln A and a slope m = −(E_a/R).
Similarly, sometimes it is possible to linearize an equation by making x = T² instead of
T, and so on.
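A sketch of the whole workflow, from linearization to fitted activation energy (the viscosity data below are invented for illustration):

```python
import math

# hypothetical viscosity data: eta (cP) measured at several temperatures (K)
T   = [278.0, 288.0, 298.0, 308.0, 318.0]
eta = [1.79, 1.31, 1.00, 0.78, 0.62]

# linearize equation 3.67: x = 1/T, y = ln(1/eta); slope m = -Ea/R
xs = [1.0 / t for t in T]
ys = [math.log(1.0 / e) for e in eta]

# least squares slope from the summary sums (equations 3.51-3.58)
N = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sx2 = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
D = N * Sx2 - Sx**2
m = (N * Sxy - Sx * Sy) / D

Ea = -m * 8.314  # activation energy in J/mol; positive for these data
```

Since viscosity falls as temperature rises, the slope of ln(1/η) versus 1/T is negative, and E_a comes out positive.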
3.4.3 Error Bars
Error bars are lines extending from individual data points that indicate the uncertainty in
that data point, in both the x- and y-directions. This gives a graphic representation of the
precision of each data point. Ideally, the trendline will pass through the area defined by
error bars for each and every point, which serves as a further indication of the fit quality.
Error bars can either be entered manually or Excel can calculate and display them
automatically. Kaleidagraph, another program that focuses on graphing, displays error
bars much more easily than Excel does.
Sometimes error bars will be so tiny that they are not distinguishable from the data point
icon. If so, change the icon to something smaller. If they cannot be made visible, then
simply note it in the caption and in the text. This will ensure you receive credit for doing
them even though they are not visible.
3.5 Presenting Results: Sharing Numbers Meaningfully
When presenting results, it is important to communicate the numbers clearly. This
includes expressing them to the proper level of precision, appropriate rounding, giving
the error and giving the right units. Now that you have put so much work into obtaining
these results, take the extra bit of time to show them off properly. Current best practices
dictate the reporting of both confidence limits and sample size when reporting data:
(12.38 ± 0.02) g/mL  (95% confidence, N = 9)   (3.70)
However, other methods are still allowed and will be discussed later. Ideally, all of these
are properly applied together, but use what is available.
3.5.1 Significant Figures
While merely defining the accuracy of results through significant figures is minimally
acceptable, it is important to always keep track of them and express them properly.
When reporting a number, all but the last digit are considered certain, and the last one is
uncertain. However, rules define what is or isn't a significant digit.
 Each nonzero digit (1, 2, 3, 4, 5, 6, 7, 8, and 9) is always significant.
 Any number used as an exact count is infinitely significant (i.e., "there are 18
students registered for lab" gives an exact counting number; saying there are
"about twenty" students is not exact, and there is uncertainty in the number).
 Zero may or may not be significant.
o If the zero is between two significant figures, it is significant, regardless
of the decimal point location (100.01 has 5 sig figs).
o A leading zero is never significant (0.001 has only one sig fig).
o A trailing zero to the left of the decimal point is a placeholder and is not
significant unless a decimal point is present (100 has 1 sig fig, 100. has 3
sig figs, 100.0 has 4, etc.). To be clear, it is better to express such
numbers in scientific notation, because:
o Any zero expressed in scientific notation is significant (1.00×10² has 3
sig figs and is the preferred way to express 100. Likewise, 1.000×10²
is better than 100.0 and still has 4 sig figs).
o Any trailing zero to the right of the decimal is significant (0.300 has 3 sig
figs; scientific notation is still the preferred way to express this value).
In any case, the last significant digit expresses the order of magnitude of the uncertainty
(due to its location) and the expected value of that uncertain digit (due to its
value). There is still high uncertainty about the magnitude of the error, however: if a
number is expressed as 54.67, the uncertainty could be anywhere from ±0.005 to ±0.05.
Therefore, in addition to having the proper significant figures, an explicit value for the
uncertainty in the value should be given.
3.5.1.1 Significant Figures in Calculations
The general principle in carrying sig figs through a calculation is that the final answer
can be no more certain than the least certain component. In other words, the number in
the calculation with the fewest sig figs limits the number of significant figures in the
final result. The rules for determining what that limit is depend on the operations being
performed.
3.5.1.1.1 Addition and Subtraction
When taking the sum or difference of a group of numbers, the precision of the result is
determined by the number that has the fewest digits after the decimal (to
the right of the decimal).
  102.98
    5.782
  199.8934
   29.14
+   7.09
= 344.8854 → 344.89
(3.71)
Even though the first number has five significant figures, it only goes to the hundredths
place, so the final answer must end in the hundredths place, in spite of the fact that
5.782 has fewer sig figs.
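Python's decimal module can mimic this decimal-place bookkeeping exactly (a column of numbers like those in equation 3.71; ROUND_HALF_UP matches the common "5–9 round up" convention of Section 3.5.2):

```python
from decimal import Decimal, ROUND_HALF_UP

# the terms with the fewest decimal places (two) set the reported precision
terms = ["102.98", "5.782", "199.8934", "29.14", "7.09"]
places = min(len(t.split(".")[1]) for t in terms)  # 2
total = sum(Decimal(t) for t in terms)
reported = total.quantize(Decimal(10) ** -places, rounding=ROUND_HALF_UP)
print(total, reported)  # 344.8854 344.89
```

Decimal arithmetic is exact here, so the rounding step is the only place precision is deliberately discarded.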
3.5.1.1.2 Multiplication and Division
Products and quotients on the surface seem more straightforward: the final answer has
the same number of significant figures as the factor with the fewest sig figs.

(12.25 × 4.0182)/52 = 0.946595192 → 0.95   (3.72)

BUT, this is statistics and PChem, so of course it isn't that simple. We have to consider
relative uncertainties. According to the definition of significant figures, the last digit is
uncertain by approximately ±1, so the relative uncertainties are 1/1225,
1/40182, and 1/52. To get the actual uncertainty in the answer, we must multiply the
result by the largest relative uncertainty:

0.946595192 × (1/52) = 0.0182 → 0.02   (3.73)

This indicates the answer should be rounded to 0.95, to the second decimal
place, as expected. If we change the equation slightly,

(12.25 × 4.7384)/52 = 1.116257692 → 1.1   (3.74)

the relative uncertainty calculation gives us:

1.116257692 × (1/52) = 0.0215 → 0.02   (3.75)

which leaves us with an answer also uncertain in the second decimal place, but that
gives us three sig figs: 1.12.
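The relative-uncertainty bookkeeping above takes only a few lines to verify (numbers taken from the first example in this subsection):

```python
# the last digit of each factor is uncertain by ~1, so the result's uncertainty
# is the result times the largest relative uncertainty among the factors
result = 12.25 * 4.0182 / 52
rel = max(1 / 1225, 1 / 40182, 1 / 52)  # 1/52 dominates
uncertainty = result * rel
print(round(result, 2), round(uncertainty, 2))  # 0.95 0.02
```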
3.5.1.1.3 Logs and Antilogs
As one would expect, propagating sig figs through logs is the least obvious (unless you
understand how logs work), but it isnt hard. Generally, follow these two rules:
1. When performing a logarithm, the answer should have as many sig figs to the
right of the decimal as the original number had sig figs.
2. When taking the antilog, do the reversethe answer should have the same sig
figs as the original number had to the decimals right.
3.5.2 Rounding Numbers
Rule number one of rounding: never round until the entire calculation is complete!
Rounding along the way loses significant digits and will give you a different answer.
There are several conventions for rounding. You may use any of them as long as you are
consistent. Some of the common ones include the following:
 If the last digit is odd, round down; if even, round up (or vice versa).
 If the last digit is 0–4, round down; if 5–9, round up.
 If the last digit is 0–4, round down; if 6–9, round up; if it is 5, round to the even number.
I tend to use the second one.
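Python's decimal module implements two of these conventions directly (the example values are mine; ROUND_HALF_UP corresponds to the second convention and ROUND_HALF_EVEN to the third):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# compare "5-9 round up" against "ties go to the even digit"
for s in ["2.45", "2.35", "2.44", "2.46"]:
    d = Decimal(s)
    half_up = d.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    half_even = d.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(s, half_up, half_even)  # e.g. 2.45 -> 2.5 (half-up) vs 2.4 (half-even)
```

The two conventions differ only on exact ties; half-even avoids a systematic upward bias over many roundings.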
Table 3.6 Common Methods of Reporting Error
Result            Error             Report as:
Using Standard Deviations
5.28 cP           s = 0.06 cP       5.28(6) cP
0.912 g           s = 0.023 g       0.912(23) g
Using 95% Confidence Limits
92.57 atm         ±0.18 atm         92.57 ± 0.18 atm
8.25×10⁻⁵ kJ/mol  ±1×10⁻⁷ kJ/mol    (8.25 ± 0.01)×10⁻⁵ kJ/mol
3.5.3 Reporting Error
Again, there are several conventions for reporting the error in a result. The most
traditional is 42.87 ± 0.34 g, although it is most common in contemporary literature to see
42.87(34) g, which is easier to type than the former. In either case, the units are only
typed once, as both the number and the error have the same units. In Equation 3.70, it
was stated that best practices also include the confidence level and sample size.
Notice that the uncertainty presented in the previous paragraph is given in the last
two digits rather than just the last one. The question arises whether it should be reported
as 42.9(3). Overstating the error by one place is common practice, as it gives a feel for
how robust the error is (i.e., is it 0.25 or 0.34? Both round to 0.3, but one is almost 0.1
higher than the other).
Regardless, consistency is important. In a table of data, all of the values may have the
same uncertainty, in which case it is given with the first value in the column and the
assumption is that it is the same for the whole column. Sometimes the uncertainty (along
with the units) will instead be given in the column heading to avoid repetition. It is also
important to state which kind of error is reported: absolute, relative, maximum,
probable, confidence limit, standard deviation, and so on. Table 3.6 gives a summary of
different ways to present error.
In a journal article, the calculations of the error are rarely shown, but in undergraduate
lab courses, example calculations should be shown, so the grader can help find areas
where improvement is needed. It is much better for a grader to find mistakes than a peer
reviewer.
3.5.4 Unit Analysis
In chemistry, only a few numbers stand completely alone. The overwhelming
majority have two parts, a number and a unit. Most of these also have a tag identifying
the chemical to which the number and unit refer, that is, 18.02 g H₂O. All three parts
must be present whenever appropriate so that the number has some context and
meaning. This is the heart of dimensional analysis as learned in high school and
freshman chemistry. Properly keeping track of the chemical tag and units is critical to
avoiding mistakes in data analysis.
The standard set of units in science is the metric system, also known as the International
System of Units (SI, Système International d'Unités), which has seven base units
(meter, kilogram, second, ampere, kelvin, mole, and candela), no end of derived units
(meters/second, etc.), and a large number of named derived units (pascal, newton,
Table 3.7 SI Prefixes
Multiplier  Prefix  Abbrev.  Numerical
10²⁴        yotta   Y        1 000 000 000 000 000 000 000 000
10²¹        zetta   Z        1 000 000 000 000 000 000 000
10¹⁸        exa     E        1 000 000 000 000 000 000
10¹⁵        peta    P        1 000 000 000 000 000
10¹²        tera    T        1 000 000 000 000
10⁹         giga    G        1 000 000 000
10⁶         mega    M        1 000 000
10³         kilo    k        1000
10²         hecto   h        100
10¹         deca    da       10
10⁻¹        deci    d        0.1
10⁻²        centi   c        0.01
10⁻³        milli   m        0.001
10⁻⁶        micro   µ        0.000001
10⁻⁹        nano    n        0.000000001
10⁻¹²       pico    p        0.000000000001
10⁻¹⁵       femto   f        0.000000000000001
10⁻¹⁸       atto    a        0.000000000000000001
10⁻²¹       zepto   z        0.000000000000000000001
10⁻²⁴       yocto   y        0.000000000000000000000001
farad, etc.), all of which can be expressed in terms of the base units. Being able to
manipulate these is important, as there are many hidden relationships that are revealed
when you start playing with the units.
Each of the base units and many of the others can be modified with the addition of a
prefix that indicates the size of the number involved. The prefixes are explained in
Table 3.7. Conversions among these units and to and from non-SI units are found in
Table 3.8.
Manipulation of the units is typically done by multiplying or dividing them by other
units. In most equations you will be using, variables and constants having these units are
combined in various ways, and being able to cancel out units and rearrange them will
help to accurately guide you through the data analysis.
Here are some examples of how the named units break down into combinations of base
units:
Joule:   J = kg·m²/s²
Newton:  N = kg·m/s²
Pascal:  Pa = kg/(m·s²)
Volt:    V = kg·m²/(s³·A)
(3.76–3.79)
An examination of the equations that produce values with these units will reveal the
logical arrangement of the base units.
Keep in mind that many fields, especially engineering, use units that are nonstandard in
science, so there will be many opportunities to do conversions. It is expected that all
units will be reported in standard SI format in this course.
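A couple of conversions coded from the Table 3.8 factors (the helper names are mine; the atmosphere factor is one of the exact, boldface-style values per the table notes):

```python
ATM_TO_PA = 1.01325e5  # exact conversion factor, atm -> Pa (Table 3.8)

def atm_to_pa(p_atm):
    """Pressure conversion: multiply by the tabulated factor."""
    return p_atm * ATM_TO_PA

def fahrenheit_to_celsius(t_f):
    """Temperature conversion from Table 3.8: t/degC = (t/degF - 32)/1.8."""
    return (t_f - 32) / 1.8

print(atm_to_pa(1.0))                # 101325.0
print(fahrenheit_to_celsius(212.0))  # boiling point of water, in Celsius
```

Note that temperature conversions are affine (an offset plus a scale), so they cannot be done by a single multiplicative factor the way pressure or length conversions can.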
Table 3.8 Unit Conversion Factors
To convert from to Multiply by
atmosphere, standard (atm) pascal (Pa) 1.013 25 E+05
atmosphere, standard (atm) kilopascal (kPa) 1.013 25 E+02
bar (bar) pascal (Pa) 1.0 E+05
calorie (cal) joule (J) 4.1868 E+00
calorie per second (cal /s) watt (W) 4.184 E+00
centimeter of mercury (0 °C) pascal (Pa) 1.333 22 E+03
centimeter of water, conventional (cmH₂O) pascal (Pa) 9.806 65 E+01
centipoise (cP) pascal second (Pa·s) 1.0 E−03
cubic foot (ft³) cubic meter (m³) 2.831 685 E−02
cubic foot per second (ft³/s) cubic meter per second (m³/s) 2.831 685 E−02
cubic inch (in³) cubic meter (m³) 1.638 706 E−05
cubic yard (yd³) cubic meter (m³) 7.645 549 E−01
cup (U.S.) cubic meter (m³) 2.365 882 E−04
cup (U.S.) liter (L) 2.365 882 E−01
cup (U.S.) milliliter (mL) 2.365 882 E+02
curie (Ci) becquerel (Bq) 3.7 E+10
day (d) second (s) 8.64 E+04
debye (D) coulomb meter (C·m) 3.335 641 E−30
degree (angle) (°) radian (rad) 1.745 329 E−02
degree Celsius (temperature) (°C) kelvin (K) T/K = t/°C + 273.15
degree Fahrenheit (temperature) (°F) degree Celsius (°C) t/°C = (t/°F − 32)/1.8
degree Fahrenheit (temperature) (°F) kelvin (K) T/K = (t/°F + 459.67)/1.8
degree Rankine (°R) kelvin (K) T/K = (T/°R)/1.8
degree Rankine (temp. interval) (°R) kelvin (K) 5.555 556 E−01
dyne (dyn) newton (N) 1.0 E−05
erg (erg) joule (J) 1.0 E−07
erg per second (erg/s) watt (W) 1.0 E−07
faraday (based on carbon 12) coulomb (C) 9.648 531 E+04
fluid ounce (U.S.) (fl oz) milliliter (mL) 2.957 353 E+01
foot (ft) meter (m) 3.048 E−01
foot (U.S. survey ft) meter (m) 3.048 006 E−01
footcandle lux (lx) 1.076 391 E+01
foot of mercury, conventional (ft Hg) pascal (Pa) 4.063 666 E+04
foot of mercury, conventional (ft Hg) kilopascal (kPa) 4.063 666 E+01
foot of water, conventional (ft H2O) pascal (Pa) 2.989 067 E+03
foot of water, conventional (ft H2O) kilopascal (kPa) 2.989 067 E+00
gallon (U.S.) (gal) liter (L) 3.785 412 E+00
gamma (γ) tesla (T) 1.0 E−09
gauss (Gs, G) tesla (T) 1.0 E−04
hour (h) second (s) 3.6 E+03
inch (in) centimeter (cm) 2.54 E+00
inch of mercury, conventional (inHg) pascal (Pa) 3.386 389 E+03
inch of mercury, conventional (inHg) kilopascal (kPa) 3.386 389 E+00
inch of water, conventional (inH2O) pascal (Pa) 2.490 889 E+02
kelvin (K) degree Celsius (°C) t/°C = T/K − 273.15
kilocalorie (kcal) joule (J) 4.184 E+03
kilocalorie per second (kcal /s) watt (W) 4.184 E+03
kilowatt hour (kWh) joule (J) 3.6 E+06
light year (l.y.) meter (m) 9.460 73 E+15
liter (L) cubic meter (m³) 1.0 E−03
mho siemens (S) 1.0 E+00
mile (mi) meter (m) 1.609 344 E+03
mile (mi) kilometer (km) 1.609 344 E+00
mile, nautical meter (m) 1.852 E+03
millimeter of mercury, conventional (mmHg) pascal (Pa) 1.333 224 E+02
millimeter of water, conventional (mmH2O) pascal (Pa) 9.806 65 E+00
minute (angle) (′) radian (rad) 2.908 882 E−04
minute (min) second (s) 6.0 E+01
ounce (avoirdupois) (oz) gram (g) 2.834 952 E+01
ounce (troy or apothecary) (oz) gram (g) 3.110 348 E+01
ounce (U.S fluid) (fl oz) milliliter (mL) 2.957 353 E+01
parsec (pc) meter (m) 3.085 678 E+16
pint (U.S. dry) (dry pt) liter (L) 5.506 105 E−01
pint (U.S. liquid) (liq pt) liter (L) 4.731 765 E−01
pound (avoirdupois) (lb) kilogram (kg) 4.535 924 E−01
pound (troy or apothecary) (lb) kilogram (kg) 3.732 417 E−01
psi (pound-force per square inch) (lbf/in²) pascal (Pa) 6.894 757 E+03
psi (pound-force per square inch) (lbf/in²) kilopascal (kPa) 6.894 757 E+00
rad (absorbed dose) (rad) gray (Gy) 1.0 E−02
roentgen (R) coulomb per kilogram (C/kg) 2.58 E−04
tablespoon milliliter (mL) 1.478 676 E+01
teaspoon milliliter (mL) 4.928 922 E+00
torr (Torr) pascal (Pa) 1.333 224 E+02
watt hour (Wh) joule (J) 3.6 E+03
watt per square centimeter (W/cm²) watt per square meter (W/m²) 1.0 E+04
watt per square inch (W/in²) watt per square meter (W/m²) 1.550 003 E+03
watt second (W·s) joule (J) 1.0 E+00
yard (yd) meter (m) 9.144 E−01
year (365 days) second (s) 3.1536 E+07
(Caption notes: 1. Table culled from the online version of the CRC Handbook of Chemistry and Physics, 88th Ed.,
Lide, D. R., Ed.-in-Chief, 2007, pp 1-23 to 1-33. 2. Boldface factors are exact, with no uncertainty [where
fewer digits are given, extra precision is not necessary]. 3. Italicized entries indicate nonstandard units
that are acceptable for use by NIST [National Institute of Standards and Technology]. 4. NIST Special
Publication 811 is highly recommended for learning proper usage of units. It is available online, in the
Lab Library, and on reserve in the Mallet library.)
1 Skoog, D.A., Leary, J.J., Principles of Instrumental Analysis, 4th Ed., Saunders
College, Ft. Worth, 1992.
Campbell, S., Laboratory Manual for Chem 303L and 304L, Louisiana State University in
Shreveport, 1998. (This is the primary reference for this chapter, though all
references were heavily used in conglomerate.)
Skoog, D.A., West, D.M., Holler, F.J., Fundamentals of Analytical Chemistry, 5th Ed.,
Saunders College Publishing, New York, 1988.
Garland, C.W., Nibler, J.W., Shoemaker, D.P., Experiments in Physical Chemistry, 7th Ed.,
McGraw-Hill, Boston, 2003.
Sime, R.J., Physical Chemistry: Methods, Techniques, Experiments, Saunders College
Publishing, Philadelphia, 1990.
Halpern, A.M., Experimental Physical Chemistry: A Laboratory Textbook, 2nd Ed., Prentice
Hall, Upper Saddle River, NJ, 1997.
2 Unknown; heard or read someplace; treat as an anecdotal story.