
Department of Physics

MSc Laboratory Handbook



Academic Year 2012-13















Abridged from the original by Professor Ben Murdin

Fundamental Constants

Electron rest mass              m_e                    9.11x10^-31 kg
Proton rest mass                m_p                    1.67x10^-27 kg
Electronic charge               e                      1.60x10^-19 C
Speed of light in vacuum        c                      3.00x10^8 m s^-1
Planck's constant               h                      6.63x10^-34 J s
                                ħ = h/2π               1.05x10^-34 J s
                                ħc                     197 MeV fm
Boltzmann's constant            k                      1.38x10^-23 J K^-1
                                                       8.62x10^-5 eV K^-1
Molar gas constant              R                      8.31 J mol^-1 K^-1
Avogadro's number               N_A                    6.02x10^23 mol^-1
Standard molar volume                                  22.4x10^-3 m^3 mol^-1
Bohr magneton                   μ_B                    9.27x10^-24 J T^-1
Nuclear magneton                μ_N                    5.05x10^-27 J T^-1
Bohr radius                     a_0                    5.29x10^-11 m
Fine structure constant         α = e²/(4πε_0 ħc)      (137)^-1
Compton wavelength of electron  λ_c = h/(m_e c)        2.43x10^-12 m
Rydberg's constant              R_∞                    1.10x10^7 m^-1
                                R_∞ hc                 13.6 eV
Stefan-Boltzmann constant       σ                      5.67x10^-8 W m^-2 K^-4
Gravitational constant          G                      6.67x10^-11 N m^2 kg^-2
Proton magnetic moment          μ_p                    2.79 μ_N
Neutron magnetic moment         μ_n                    -1.91 μ_N

Other Data and Conversion Factors
1 angstrom                      Å        10^-10 m
1 fermi                         fm       10^-15 m
1 barn                          b        10^-28 m^2
1 pascal                        Pa       1 N m^-2
1 standard atmosphere                    1.01x10^5 Pa
standard acceleration due to gravity  g  9.81 m s^-2
permeability of free space      μ_0      4πx10^-7 H m^-1
permittivity of free space      ε_0      8.85x10^-12 F m^-1
1 electron volt                 eV       1.60x10^-19 J
                                eV/hc    8.07x10^5 m^-1
                                eV/k     1.16x10^4 K
1 unified atomic mass unit (12C scale)  u  931 MeV/c^2 = 1.66x10^-27 kg
wavelength of 1 eV photon                1.24x10^-6 m
base of natural logarithms      e        2.718
ln 10 = log_e 10                         2.303
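As a cross-check, the derived conversion factors above follow directly from the fundamental constants on the previous page. A minimal sketch, using the rounded three-figure values from this handbook (so agreement is only to within about 1%):

```python
# Derived conversion factors computed from the handbook's rounded constants.
h = 6.63e-34   # Planck's constant, J s
c = 3.00e8     # speed of light in vacuum, m s^-1
e = 1.60e-19   # electronic charge, C (so 1 eV = 1.60e-19 J)
k = 1.38e-23   # Boltzmann's constant, J K^-1

wavelength_1eV = h * c / e   # wavelength of a 1 eV photon, m
eV_over_hc = e / (h * c)     # eV/hc, m^-1
eV_over_k = e / k            # eV/k, K

print(f"{wavelength_1eV:.2e} m")   # about 1.24e-06 m
print(f"{eV_over_hc:.2e} m^-1")    # about 8.04e+05 m^-1
print(f"{eV_over_k:.2e} K")        # about 1.16e+04 K
```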

Abbreviations
A   ampere          H   henry     mol  gramme mole   T   tesla
C   coulomb         Hz  hertz     N    newton        V   volt
eV  electron volt   J   joule     Pa   pascal        Wb  weber
F   farad           K   kelvin    S    siemens       W   watt
g   gramme          m   metre     s    second        Ω   ohm

Prefixes
f  femto  10^-15     p  pico   10^-12     n  nano   10^-9
μ  micro  10^-6      m  milli  10^-3      c  centi  10^-2
k  kilo   10^3       M  mega   10^6       G  giga   10^9
T  tera   10^12

Contents
CONTENTS.......................................................................................................................................................... 3
1. INTRODUCTION............................................................................................................................................ 5
2. WORKING IN THE LABORATORY........................................................................................................... 6
2.1 SAFETY IN THE LABORATORY -----------------------------------------------------------------------------------------6
3. THE LABORATORY NOTEBOOK.............................................................................................................. 7
4. UNCERTAINTIES IN MEASUREMENTS .................................................................................................. 9
4.1 INTRODUCTION ----------------------------------------------------------------------------------------------------------9
4.2 SYSTEMATIC ERRORS------------------------------------------------------------------------------------------------- 10
4.3 RANDOM ERRORS ----------------------------------------------------------------------------------------------------- 10
4.4 NUMBERS AND UNITS ------------------------------------------------------------------------------------------------ 11
4.5 PARENT AND SAMPLE DISTRIBUTIONS------------------------------------------------------------------------------ 11
4.5.1 Introduction ....................................................................................................................................... 11
4.5.2 Sample Distribution ........................................................................................................................... 12
4.5.3 Parent Distribution ............................................................................................................................ 14
4.5.4 The error in the mean ........................................................................................................................ 15
5. PROBABILITY DISTRIBUTIONS ............................................................................................................. 17
5.1 INTRODUCTION -------------------------------------------------------------------------------------------------------- 17
5.2 PROBABILITY DENSITY FUNCTIONS --------------------------------------------------------------------------------- 17
5.3 GAUSSIAN OR NORMAL DISTRIBUTION----------------------------------------------------------------------------- 18
5.4 DISCRETE PROBABILITY FUNCTIONS -------------------------------------------------------------------------------- 20
5.5 BINOMIAL DISTRIBUTION -------------------------------------------------------------------------------------------- 20
5.6 POISSON DISTRIBUTION ---------------------------------------------------------------------------------------------- 21
6. PROPAGATION OF ERRORS.................................................................................................................... 23
6.1 MATHEMATICAL BACKGROUND------------------------------------------------------------------------------------- 23
6.2 SPECIFIC EXAMPLES -------------------------------------------------------------------------------------------------- 25
6.2.1 Addition and Subtraction ................................................................................................................... 25
6.2.2 Multiplication and Division ............................................................................................................... 26
6.2.3 Power laws......................................................................................................................................... 27
6.2.4 Logarithms and Exponentials ............................................................................................................ 28
7. REPRESENTING DATA.............................................................................................................................. 29
7.1 DRAWINGS, PHOTOGRAPHS AND TABLES -------------------------------------------------------------------------- 29
7.1.1 Photographs....................................................................................................................................... 29
7.1.2 Tables................................................................................................................................................. 29
7.2 GRAPHS ---------------------------------------------------------------------------------------------------------------- 30
7.2.1 Error Bars.......................................................................................................................................... 30
7.2.2 Drawing The Line .............................................................................................................................. 30
7.2.3 More than One Line on the Same Graph........................................................................................... 30
7.2.4 Linear Scales...................................................................................................................................... 31
7.2.5 Logarithmic Scales............................................................................................................................. 32
8. FITTING DATA USING THE LEAST SQUARES TECHNIQUE.......................................................... 34
8.1 INTRODUCTION -------------------------------------------------------------------------------------------------------- 34
8.2 BEST STRAIGHT LINE FIT: LINEAR REGRESSION ------------------------------------------------------------------ 35
8.3 CORRELATION --------------------------------------------------------------------------------------------------------- 36
8.4 THE χ² DISTRIBUTION: TESTING THE GOODNESS OF FIT---------------------------------------------------- 36
8.5 FINAL REMARKS ------------------------------------------------------------------------------------------------------ 37
9. THE LABORATORY REPORT .................................................................................................................. 38
9.1 General Comments................................................................................................................................ 38
9.2.1 Plagiarism and Copying .................................................................................................................... 38
9.2.2 Title, Authors and Affiliation ............................................................................................................. 39
9.2.3 Abstract .............................................................................................................................................. 39
9.2.4 Introduction ....................................................................................................................................... 39
9.2.5 Theory ................................................................................................................................................ 39
9.2.6 Experimental Arrangements and Techniques .................................................................................... 40
9.2.7 Procedure........................................................................................................................................... 40
9.2.8 Results................................................................................................................................................ 40
9.2.9 Discussion.......................................................................................................................................... 40
9.2.10 Conclusions...................................................................................................................................... 40
9.2.11 Acknowledgements........................................................................................................................... 40
9.2.12 References ........................................................................................................................................ 41
10. BIBLIOGRAPHY......................................................................................................................................... 42
APPENDIX A: SUMMARY FORMULAE FOR ERROR ANALYSIS........................................................ 43
APPENDIX B: KEY UNITS AND CALCULATIONS IN RADIATION PHYSICS................................... 44
APPENDIX C: GLOSSARY............................................................................................................................. 46

1. Introduction

The MSc laboratory classes provide an opportunity for you to further develop the practical skills
associated with radiation, medical and nuclear physics. You should gain experience of studying
problems and designing experiments to test many scientific theories. To do this successfully you
will need to analyse, critically assess and interpret the data you obtain. Finally, you must
develop the skills necessary to describe the work clearly for the benefit of others. These are
amongst the most important skills that a physicist possesses and are often the ones most required
in professional life.

The experimental work that you will carry out in the MSc laboratory will give you specific skills
in the operation and understanding of a variety of complex and specialist nuclear detection
equipment. The experiments will also help you to explore new aspects of nuclear physics, and
the underlying processes related to radioactivity and nuclear measurements. In addition the
laboratory encourages you to think about experimental problems and how to solve them, and you
will extend your general skills and techniques that are fundamental to good experimental work.

There are a number of textbooks and other resource materials that you may find useful as
background reading for your laboratory work. An extensive bibliography is included in chapter
10. For a good general introduction to laboratory practice students are referred to the recent book
by Kirkup [1].


2. Working in the Laboratory

2.1 Safety in the Laboratory

You are required to exercise a sense of responsibility in the laboratory at all times, and to
work in ways that do not endanger either yourself or other people. You are required to
comply with the following instructions.

As a new student you must attend a radiation safety induction and a laboratory
safety induction before you can work in the MSc laboratory. Students will be
issued with radiation badges, which must be worn at all times.

Smoking, eating or drinking is not allowed within the laboratories. This
includes chewing your pen or pencil.

Always wear a laboratory coat. Coats and any bags should be stored in the
cloakroom area, not at your work bench.

Remove your laboratory coat and wash your hands before leaving the lab.

Use tweezers to move radioactive sources. Sources should not be moved away
from your work area without the approval of the laboratory supervisor. You
must never take a source out of the laboratory.

Ensure that you follow the written procedures in the lab script safely and take
heed of any verbal instructions by staff. Risk assessments are available for the
experiments as part of the help folders. If you are in any doubt consult a
demonstrator or supervisor.


IMPORTANT:

Never turn the power on a NIM crate ON or OFF - leave the crates as you find them




3. The Laboratory Notebook

It is said that Rutherford discovered Radon by observing that the results of his
experiment depended on whether the laboratory door was open or closed!

A laboratory research notebook is one of a physicist's most valuable tools. For a practising
physicist it contains a permanent written record of his or her mental and physical activities
from experiment and observation through data analysis to the ultimate understanding of the
phenomena under investigation. The act of writing in the notebook should cause the physicist
to stop and think about what is being done in the laboratory. In this way it is an essential part
of doing good science.

It is a requirement of the course that all students keep an account of all experiments in a
laboratory notebook. Keeping good notes is an acquired skill that can be of tremendous
benefit in any career. Writing good notes requires practice and discipline. It is not a skill that
comes easily to most people. For research students the notebook will be the prime source of
information required to write a PhD thesis. The researcher may even wish to make money
from an invention or result! In the case of important data for a patent, for example, the
experiment and the written record should also be witnessed and signed by a competent
observer. The notebook may play a vital role in obtaining legal rights to the invention!

A bound notebook is provided at the start of your course. If you need to purchase another, it
should be a hardback book. It must be brought into the laboratory on every laboratory day
and used exclusively for entering notes on the conduct of the experiment. All information
should be recorded clearly in ink in the notebook on pages used consecutively with no gaps.
All entries should be dated. Note that, unless it is a help to you in your learning, it is a waste
of time to copy material from this booklet or the experiment instruction sheets into your
notebook. However, any information you do consider important should be photocopied and
permanently glued into the notebook.

The written record should include information on:

Preliminary reading, settings, adjustments.

Readings together with uncertainties and settings used in the final data
gathering.

Any procedure or precaution not fully described in the printed script or
elsewhere.

Any relevant external conditions (room temperature, barometric pressure, etc.)

Precautions taken to optimise performance and/or minimize errors.

Sources of important information, components and advice.

The written record should NOT be:

Censored;
Corrected;
Written after the event

Before taking any readings, rough estimates of the expected results should be made and
noted. This is so you will have some idea how to set up instruments initially, how to scale
axes on graphs so that these can be plotted as readings are taken, and so on. It also helps you
spot gross errors in the experimental arrangement or data more easily if you have some idea
of what should happen.

Most importantly, readings should be tabulated with uncertainties as they are taken. The
tables should have headings that identify the measurements taken, the equipment used, over
what range and in what units. Normally results should be graphed as the readings are taken.
If, for some unusual reason this cannot be done, the graph should be drawn immediately
afterwards and certainly before leaving the laboratory or proceeding to the next part of the
experiment.

After the data has been recorded the physicist begins to study it. The notebook provides the
forum in which the data and observations are analysed, evaluated and interpreted. This
further analysis must also be written in the notebook. This final complete record may then be
used as the basis for writing reports, technical papers, patent disclosures and correspondence
with colleagues. The information can also be used to review progress and plan future work.

It should be clear from the above just how important the laboratory notebook is. A well-kept
notebook should be a valuable asset but do not spend time perfecting the appearance of your
notes at the expense of working on the scientific problem.
4. Uncertainties in Measurements

4.1 Introduction

Experimental physics is about studying the world around us. This invariably involves making
measurements - and this is where the problems start! The spinning of a coin only has two
possible outcomes but we still cannot tell what it will be. A Geiger-Müller tube accurately
measures 12 disintegrations in one 60 s period but 15 in the next! I ask ten people to tell me
the length of the same piece of string and get 10 different values. How is a physicist meant to
make sense of this data?

In addition, how should physicists report their findings to others? For example if we measure
the length of a piece of string and get the answer of 1m what can we say? As scientists the
one thing we should not say is:

The length of this piece of string is 1m.

However, we might say:

We measured the piece of string and got the result of 1m.

This immediately means that we have to be concerned about confidence. Are other people
going to be confident in our measurements or not? We might instead have written:

We find the length of the piece of string to be 1m with 95% confidence that it lies
between 0.90 and 1.10m

This is still our opinion of the measurement. If real confidence is to be justified then enough
information must be supplied for the reader to make their own assessment of the
measurement. There have been many instances where work, even by eminent physicists, has
been shown later to contain errors much larger than the limits they originally quoted!
Gradually as the experimental techniques and methods are improved the results tend to
approach a single value. In a dictionary error is often defined as the difference between the
observed or calculated value and the true value. In general this is not what error means to a
scientist. The problem with experimental science is that the true value is not known.

One class of errors can be dealt with easily. Simple mistakes in measurements or calculation
are usually easily spotted and eliminated by carefully repeating the experiment or calculation.

Random fluctuations in measurements and systematic errors are more difficult to correct for.
In everyday language accuracy and precision mean much the same thing. In science the
difference between accuracy and precision is very important. The accuracy of an experiment
is a measure of how close the result of an experiment is to the true value. The precision of an
experiment is a measure of how well the result has been determined without reference to its
agreement with the true value. This distinction is illustrated in Figure 4.1. Generally when we
quote uncertainties in an experiment we are talking about the precision with which the result
has been found.

Figure 4.1: (a) Accurate but imprecise data and (b) precise but inaccurate data. The solid
line represents the "true" relationship between x and y.

4.2 Systematic Errors

The accuracy of an experiment is usually limited by how well we can control systematic
errors. These are not easily detected but can be estimated from careful study of the
experimental conditions and techniques used. For example, they may be due to problems
with:

Instrument Calibration

Instrument Reproducibility

Biased observation

A significant part of planning of an experiment should be devoted to understanding the origin
and reducing the sources of systematic error.

4.3 Random Errors

The precision of an experiment depends on how well random errors can be reduced. As
before, we can refine the experimental method and techniques. We also know intuitively that
taking a sufficiently large number of readings may reduce the effect of random errors on the
precision of the measurements.

As Figure 4.1 shows there is usually little point in reducing the random error significantly
below the systematic error, although of course it is rare that we actually know how much the
systematic error is.

4.4 Numbers and Units

Quantitative information is expressed in numbers and units - unless the quantity is
dimensionless in which case there are no units. You should use the International System of
Units (SI units) together with other units that are in use with the International System; these
are summarised in Appendix B and on the inside front cover of this handbook.

The number associated with the unit should contain the minimum number of digits necessary
to adequately express the quantity with due regard to the precision with which it has been
determined. For example, if a length has been determined to a precision of 0.1 m it would be
misleading to express it as 17.1154 m, as might happen if you had simply copied the digits
from a calculator. Instead, write it as 17.1 m, or, more informatively, 17.1±0.1 m; the
remaining digits do not contain useful information and should be omitted.

It might be that you have a good reason for wishing to express this length in mm. If without
comment you simply write 17100 mm, it is not clear whether the two zeros are significant or
whether, as in this example, they are there simply to get the decimal point in the right place.
It is possible of course, to write 17100±100 mm, but although this is probably less misleading
there is still some uncertainty as to the significance of the zeros in the 100. It would be
better to write 1.71x10^4 mm, or, more informatively, (1.71±0.01)x10^4 mm.

It is often preferable to avoid using zeros that are there only to get the decimal point in the
right place; such zeros are referred to here as superfluous zeros because it is possible to
avoid the need for them. Superfluous zeros can occur not only when the number is much
larger than unity, as in the above example of 17100 mm, but also when the number is very
much smaller than unity; for example in writing 0.00392±0.00001 m, again the zeros are there
to show where the decimal point is. One technique of removing superfluous zeros has been
shown in the previous paragraph, namely to use 10 raised to an appropriate power as a
multiplier; in this case to write (3.92±0.01)x10^-3 m reduces the number of superfluous zeros to
one. Another technique is to change the size of the unit by using one of the prefix letters
which form part of the SI system; for example, 0.00392±0.00001 m can be written
3.92±0.01 mm. Again the number of superfluous zeros has been reduced to one.
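The power-of-ten technique above is easy to automate. A rough illustration, not a prescribed method: the function name and the fixed two-decimal format below are choices made purely to match the examples in the text.

```python
import math

def format_measurement(value, uncertainty, unit):
    """Quote value +/- uncertainty with a common power-of-ten multiplier,
    so that no superfluous zeros appear."""
    exponent = math.floor(math.log10(abs(value)))   # power of ten of the value
    v = value / 10**exponent                        # mantissa, between 1 and 10
    u = uncertainty / 10**exponent                  # uncertainty on the same scale
    # two decimal places here simply matches the examples in the text
    return f"({v:.2f} +/- {u:.2f})x10^{exponent} {unit}"

print(format_measurement(17100, 100, "mm"))        # (1.71 +/- 0.01)x10^4 mm
print(format_measurement(0.00392, 0.00001, "m"))   # (3.92 +/- 0.01)x10^-3 m
```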

4.5 Parent and Sample Distributions

4.5.1 Introduction

If we consider some quantity e.g. the length of a specific piece of string, it is reasonable to
assume that there is a true value for that quantity, which we would like to know. However
random fluctuations mean that there is some variation between any measurement and this true
value. In cases where the quantity is discrete, the number of possible results is limited in
a way that the values of a continuous quantity are not. However there is still a true
distribution for the results and hence a true mean value, which is likely to be continuous.
Imagine that we make repeated measurements of our piece of string, and we plot the results
obtained on a graph. If this graph represents all the measurements we could have made, i.e.
an infinite number of measurements then this is called the parent distribution. See Figure 4.2
for example. Such a distribution is also sometimes called a population or universe
distribution.

Figure 4.2: Example of three parent distributions of data having the same true value, μ,
which in this example is zero, but different amounts of spread, σ = 1, 2 and 4, as indicated.
The distribution depicted is the Gaussian distribution (see section 5.3).

The spread of measurements around the central peak of the histogram is the result of random
errors in the measurement process. These arise, in this case, mainly from the limitations of
the measuring device that prevents the observer from obtaining exactly repeatable results.

In reality of course the parent distribution described above cannot be measured. However, a
trial or sample distribution can be obtained by taking a finite number of measurements, n, of
the observed quantity x. Clearly it is assumed that in the limit as n → ∞ the sample
distribution tends to the parent distribution. We want to know the following things from the
sample distribution:

1) What is the best estimate of the mean of the parent distribution we can obtain?
i.e. what is the best number to quote for the measurement made?

2) What is the spread of the parent distribution? This gives us information about
the precision of our measurements.

3) What error should we quote for the best estimate of the mean? This is usually
called the standard error in the mean. It is not the same as the spread.


4.5.2 Sample Distribution

Clearly, as we never take an infinite number of measurements we can never know the mean
or spread of the parent distribution. Instead, the best we can do is to take a finite number of
measurements.

For a given number of trials n the sample mean, m, is defined by:

    m = x̄ = (1/n) Σ_{i=1}^{n} x_i                                   (4-1)

This is just another way of saying that the mean of a sample of data x_1, x_2, x_3, ..., x_n is the sum
of all the data values, divided by the number of data. We use the bar above x to mean
simply "the mean value of".

The median and mode of a distribution are alternatives to the mean as ways of describing the
location or centre of the distribution, but are less frequently used (see Bevington [2]).

The most commonly used way of describing the spread of a distribution or data set is with the
variance or the standard deviation (s.d.). [The reason for this is that it is easy to calculate the
shape etc. of the Gaussian, Poisson and Binomial distributions from their s.d., and as we shall
see later, these are the most important distributions in statistics.] The variance and s.d.
are given by:

    var(x) = s_n²                                                    (4-2)

    s_n² = (1/n) Σ_{i=1}^{n} (x_i − m)² = (1/n) Σ_{i=1}^{n} x_i² − m²        (4-3)

In other words the variance is the mean value of the square of the deviation of each data point
from the mean, or for short, the mean square deviation. The s.d. is just the square root of
the variance. It is also known as the root mean square deviation, or for short, the r.m.s.
deviation. It is often convenient to find s_n using the last equality in 4-3, but beware: the two
terms are large and the difference is very small. This means that you should keep as much
precision as possible until the end.

There are many convenient ways of calculating the standard deviation:
a) Most calculators, including those sanctioned by the Physics Department, have a
   statistics mode. In the stats mode, you should clear the stats memory, then enter each
   data value. Functions will be available to tell you the number of data values, the
   mean, and the standard deviation of the sample s_n (and the estimated population s.d.
   s_{n-1}, see next section).
b) Use Excel. Type the data into a blank spreadsheet and use the functions STDEVP
   and SQRT. N.B. in Excel STDEV is used to calculate s_{n-1} (see next section).
c) By hand. Calculate the sum of all the data values and divide by n. This is the mean
   value; squaring it gives the second term in the last equality of equation 4-3. The sum
   of the squares of all the data values divided by n is the first term. To find s_n, subtract
   and take the square root.
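The same three quantities can be obtained from Python's standard library, where pstdev gives s_n (the value Excel's STDEVP returns) and stdev gives s_{n-1} (Excel's STDEV). A minimal sketch; the data values are invented for illustration:

```python
from statistics import mean, pstdev, stdev

data = [10.1, 10.3, 9.9, 10.2, 10.0]   # invented example readings

m = mean(data)       # sample mean, equation 4-1
s_n = pstdev(data)   # s.d. with denominator n, equation 4-3 (Excel STDEVP)
s_n1 = stdev(data)   # estimated population s.d., denominator n-1 (Excel STDEV)

print(m, s_n, s_n1)
```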

It is often helpful to know that if you have data values which only differ in the last few
decimal places there is a shortcut to the calculation, which can be especially helpful if you
are calculating by hand or with a calculator (though not really necessary if you are using a
spreadsheet like Excel). The following formulas show how to calculate the mean and standard
deviation when you subtract some number A from all your data x_i and divide by a scale
factor B to get a new data set, y_i:

    y_i = (x_i − A)/B

You can then calculate the mean and s.d. in y and get back to the mean of x using:

    x̄ = B ȳ + A

which is just B times the scaled mean plus A, and the s.d. in x is:

    s_{n,x} = B s_{n,y}

i.e. just B times the s.d. of the scaled data. E.g. if your x data set is 20.012174, 20.012146,
20.012198, 20.012186, then subtract off A = 20.0121 and scale by B = 0.000001 to give a scaled
y data set of 74, 46, 98, 86. The mean of these y values is 76. The mean of the squares of the
scaled set is 6148, so using (4-3) the scaled s.d. is

    s_{n,y} = √(6148 − 76²) ≈ 19

Hence the original unscaled mean of x is 0.000001x76 + 20.0121 = 20.012176 and the original
s.d. is 0.000001x19 = 0.000019.
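The worked example above is easy to check numerically. A short sketch confirming that the scaled route recovers the same mean and s.d. as working directly on the raw values:

```python
from statistics import mean, pstdev

x = [20.012174, 20.012146, 20.012198, 20.012186]
A, B = 20.0121, 0.000001

y = [(xi - A) / B for xi in x]   # scaled data, approximately 74, 46, 98, 86
m_x = B * mean(y) + A            # mean of x recovered from the scaled mean
s_x = B * pstdev(y)              # s.d. of x recovered from the scaled s.d.

print(m_x, s_x)            # about 20.012176 and 0.000019
print(mean(x), pstdev(x))  # the direct calculation agrees
```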

4.5.3 Parent Distribution

If the number of possible outcomes of the measurement is small, i.e. the total population is
small, then it may be possible to actually make all possible measurements, e.g. when
counting moons around each planet in the solar system. In this case the mean and standard
deviation of the population are obtained using equations 4-1 to 4-3.

If there are an infinite number of possible measurements, then the population distribution has
a mean, μ, defined by:

    μ = lim_{n→∞} (1/n) Σ_{i=1}^{n} x_i                              (4-4)

Note that it is quite common for the characteristic properties of the population distribution to
have Greek letters (e.g. μ) while the sample distribution normally has Latin letters (e.g. m).

Following the logic of 4-1 to 4-4, the standard deviation of the population distribution is
defined by:

    σ² = lim_{n→∞} (1/n) Σ_{i=1}^{n} (x_i − μ)²                      (4-5)
(4-5)

Eqns 4-4 and 4-5 are just another way of saying that the population mean and s.d. are the
mean and s.d. of a sample of data as the sample size tends to infinity. Of course it is not
possible to make an infinite number of measurements. In fact it is often not possible (due to
time constraints) even to measure a large number of data, so we are almost always required to
make an estimate of μ and σ from a small sample. In the case of μ, life is simple, as the best
possible estimate of μ is just m.

It would seem intuitive that the best estimator for σ would be s_n, and it certainly has the
same limiting value as equation 4-5 as n → ∞. However, there is a flaw in this argument that
is important at small values of n, i.e. just the situation where we probably want to use it [3].
The best estimate of σ obtained from the sample distribution has been derived in a number of
texts [2,3]. It is given by:

15
( )

=
n
i
i n
m x
n
s
1
2 2
1
1
1
(4-6)
It is now clear why we used the subscript n for the s.d. of eqn 4-3; because the denominator is
n, and this is what distinguishes it from 4-6 where the denominator is n-1.

N.B. There is great confusion between various texts about the names of the two quantities
defined by equations 4-3 and 4-6. Some texts call the former the sample s.d. and the latter the
population (or parent) s.d., but many use exactly the reverse! The important thing is not to
remember the name, but what they are for. Equation 4-3 is an exact calculation of the
variance of the sample data (which is most useful if the data include the whole population of
possible results, hence the confusion with names), and equation 4-6 is for estimating the
population s.d. from a small sample (again, you can see why confusion can arise). The
confusion disappears if you use the symbol s_n or s_{n−1}. In physics, it is very rare that you will
be considering a measurement where you are able to take the entire population, so you will
almost always be interested in s_{n−1}.

It is sometimes useful to note that s_{n−1} and s_n are related by:

    s_{n−1}² = (n/(n−1)) s_n²    (4-7)
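Relation 4-7 is easily verified numerically; a minimal sketch with invented data:

```python
data = [12.1, 11.8, 12.4, 12.0, 11.9]   # illustrative repeated measurements
n = len(data)
m = sum(data) / n

# Equation 4-3: denominator n (exact variance of the sample itself)
s_n_sq = sum((x - m) ** 2 for x in data) / n

# Equation 4-6: denominator n-1 (best estimate of the population variance)
s_n1_sq = sum((x - m) ** 2 for x in data) / (n - 1)

# Equation 4-7: the two differ only by a factor n/(n-1)
ratio = s_n1_sq / s_n_sq   # -> 5/4 for five data points
```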

4.5.4 The error in the mean

Having taken a small sample of data and used it to estimate the mean of the parent
population, it is extremely important to give some indication of how uncertain this estimate
is, as explained in section 4.1. It is important to note that the standard deviation, s_{n−1} or σ,
determines the width of the histogram or distribution of measurements. It is not a measure of
how close m is to μ, i.e. the precision to which the mean, m, is known. The likely difference
between m and μ is called the standard error in the mean or more simply the standard
error. Here it is given the symbol s_m, see figure 4.3.

How do we determine the standard error? Even a small number of measurements will give a
distribution that is a rough approximation to the parent distribution. If this has a narrow peak
and hence small σ, as shown in Figure 4.2, the measurements will be close together and the
mean m of the sample will be close to μ. Alternatively, if the peak is broad the measurements
will be more widely scattered and the mean m is less likely to be near μ. So σ, or its best
estimate s_{n−1}, must be related to the error in m. We intuitively expect that by making n larger
and larger we can make the histogram of the sample look more and more closely like the
population, and so the error in m should become smaller and smaller. In fact we should be
able to get m as near to μ as we like, even if our individual data points have poor precision
and hence large s or σ. In short, the error in m decreases with increasing n. However, s_{n−1}
does not change much with n, it just gets closer and closer to σ, so the error cannot simply be
s_{n−1}. It can be shown that the likely difference between m and μ, the standard error in the
mean, is

    s_m = s_{n−1} / √n    (4-8)

You can see that if only one measurement is taken the error is the same as the standard
deviation, but that the error goes down the more measurements are taken. [Of course it is
very difficult to guess the s.d. unless you take more than one measurement.] There is a law of
diminishing returns due to the square root: you need to quadruple the number of data to halve
the error, but take a hundred times more data to get the error down by a factor of ten.


















Figure 4.3: Example of a sample histogram (cross-hatched) taken from a parent
population (solid line). The population has mean μ, while the sample has mean m. The best
estimate for the population standard deviation, σ, from the sample data is s_{n−1}. The likely
difference between m and μ, i.e. the error in m, is s_m, the standard error in the mean.

To understand more precisely what the standard error in the mean is, imagine taking a small
sample of data and calculating the mean. Imagine then taking another sample, the same size,
and calculating the mean again. Keep taking samples until you have a large number of values
for the mean. The mean of the means will be the same as the true mean of the population (if
you take a very large number of samples), but the sample means have a spread, because each
individual sample is quite small and therefore its mean has some error. The spread of these
means about the true mean is the standard error in the mean. To be exact the standard error is
the likely standard deviation of those sample means. The derivation of Eqn 4-8 is not difficult
and may be found in most statistics texts.
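This repeated-sampling picture is easy to simulate; a minimal sketch using only the standard library (the population parameters are arbitrary):

```python
import random
import statistics

random.seed(1)
mu, sigma = 50.0, 4.0   # arbitrary parent population
n = 16                  # size of each small sample

# Draw many samples of size n and record the mean of each one
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(20000)
]

# The spread of the sample means about mu is the standard error,
# which equation 4-8 predicts to be sigma / sqrt(n) = 1.0 here
spread = statistics.stdev(sample_means)
predicted = sigma / n ** 0.5
```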

So finally we can summarise as follows. From a sample distribution of n measurements
equation 4-1 gives the best estimate of the population mean, while equation 4-6 gives the best
estimate of the standard deviation. Finally equation 4-8 gives the best estimate of the error in
the mean. The result of all this is a measurement quoted as:

    m ± s_m    (4-9)
Most calculators, including the standard University approved calculator, have a statistical
mode that easily allows you to enter a set of data and obtain m, s_{n−1} and n. There is therefore
no excuse for not taking multiple measurements of physical quantities in the laboratory and
quoting m ± s_m. The final rule to remember is that the error should normally be quoted only
with one significant figure, and the mean should have the same number of decimal places as
the error (see section 4.4). Don't forget that the error might be much larger than your original
precision on the measuring apparatus you used to take the data (if the data have lots of intrinsic
variation or noise), or the error may even be much smaller than the original precision (if you
took lots of measurements)!
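The same quantities a calculator's statistical mode gives can be computed directly; a minimal sketch (the data are invented):

```python
import statistics

data = [9.81, 9.79, 9.84, 9.80, 9.78, 9.83]   # invented repeat measurements
n = len(data)

m = statistics.fmean(data)
s_n1 = statistics.stdev(data)   # the n-1 form, equation 4-6
s_m = s_n1 / n ** 0.5           # standard error in the mean, equation 4-8

# Quote the error to one significant figure, and the mean to the
# same number of decimal places
print(f"{m:.2f} +/- {s_m:.2f}")   # -> 9.81 +/- 0.01
```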
5. Probability Distributions

5.1 Introduction

This section will describe the properties of some of the most important probability
distributions in physics. The most frequently used, though you may sometimes be unaware of
it, is the Gaussian distribution. It is the familiar bell-shaped curve that governs most physical
parameters. We also describe several other distributions which have specialised uses: the
binomial, Poisson and Lorentzian distributions. We start by looking at general properties of
probability distributions.

5.2 Probability density functions

The Gaussian distribution, for example, is for continuous variables. Continuous variables can
take any value with infinite precision (even if the measurement has reduced precision), and in
effect it is therefore impossible to get a measurement result exactly the same as some test
value. It therefore makes no sense to ask "what is the probability of obtaining any particular
value?" (as this is zero). Only the probability of obtaining a value between two limits has a
meaning. We therefore use a probability density function P(x) which gives the probability
density for any given test value x, but it is the area under the function between two limits that
gives the probability. The probability of the measurement lying between x and x+dx is
P(x)dx. For a finite sized interval the probability of lying between a and b is given by:

    p = ∫_a^b P(x) dx    (5-1)

The sum of all probabilities must be unity, so the probability density function must be
normalised, i.e. it must satisfy the condition:

    ∫_{−∞}^{∞} P(x) dx = 1    (5-2)

The mean value expected from a p.d.f., i.e. its expectation value, written <x>, is given by:

    <x> = ∫_{−∞}^{∞} x P(x) dx    (5-3)

In fact the expected value of any function f of x is

    <f> = ∫_{−∞}^{∞} f(x) P(x) dx    (5-4)

This is useful: if you know, say, the probability distribution for tolerances on a resistor, and
you know how different values affect the output of the circuit, you can work out the expected
output. The expected output is not necessarily just the output calculated from the expected
resistance, if the output is non-linear and the probability distribution is asymmetric. We
can use equation 5-4 to calculate the likely value of f(x) = (x−μ)², i.e. the likely square of the
deviation of a measurement from the expectation value, called the expectation variance:

    σ² = <(x−μ)²> = ∫_{−∞}^{∞} (x−μ)² P(x) dx    (5-5)
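The point that the expected output is not the output at the expected input can be seen numerically. As a sketch, take an (arbitrary) asymmetric p.d.f., the exponential P(x) = e^(−x) for x ≥ 0, and the non-linear response f(x) = x²:

```python
import math

dx = 1e-4
xs = [i * dx for i in range(int(30 / dx))]   # integrate well into the tail

# Equation 5-3: <x> = integral of x P(x) dx   -> 1 for P(x) = exp(-x)
exp_x = sum(x * math.exp(-x) * dx for x in xs)

# Equation 5-4 with f(x) = x**2: <f> = integral of f(x) P(x) dx -> 2
exp_f = sum(x * x * math.exp(-x) * dx for x in xs)

# <f(x)> = 2 whereas f(<x>) = 1: they differ for a non-linear f
```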

5.3 Gaussian or Normal Distribution

The Central Limit Theorem (which has a quite simple proof, see for example ref [2]) states
that quantities which have many random contributions are governed by a distribution that
tends towards a Gaussian for large numbers of contributions. This is the reason why it
governs so many measurements in physics and other disciplines: almost everything you can
measure is affected by many different random factors and types of noise. Measurement of a
piece of string will depend on the exact temperature (which may make the ruler expand
slightly), the steadiness of your hand, parallax, the humidity (making the fibres expand), etc.
Throwing one die gives a distribution that is equal for each result, throwing two gives a
triangular distribution peaked at 7, but throwing 10 dice gives a distribution that is very
nearly Gaussian, because of the large number (in this case 10) of random contributions to the
result. This is interesting considering that the distribution for results from each individual die
is flat, and nothing like a Gaussian. Similarly the mean value of a sample of data will obey a
Gaussian distribution due to the random contribution from each data point, even if each data
value is governed by another distribution. This fact is used in the proof of equation 4-8.
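The dice example can be simulated directly; a rough sketch:

```python
import random
import statistics

random.seed(2)

# The distribution for one die is flat, but the sum of 10 dice is
# already very nearly Gaussian (the Central Limit Theorem at work)
totals = [sum(random.randint(1, 6) for _ in range(10)) for _ in range(50000)]

mean_total = statistics.fmean(totals)   # expect 10 * 3.5 = 35
sd_total = statistics.pstdev(totals)    # expect sqrt(10 * 35/12) ~ 5.4

# Fraction within one s.d. of the mean: close to the Gaussian 68.3%
within = sum(abs(t - mean_total) <= sd_total for t in totals) / len(totals)
```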

The family of Gaussian or Normal probability density functions with different means μ and
standard deviations σ is usually written:

    P(x; μ, σ) = (1/(σ√(2π))) exp[ −(1/2)((x−μ)/σ)² ]    (5-6)

It can be shown quite simply by substituting equation 5-6 into 5-3 that the expectation value
<x> = μ, and into 5-5 that the expectation variance is σ². In fact it is in order to be able to
arrive at these results that the distribution is written in this way. Most importantly, it is
because σ features explicitly in this distribution that it is so often used as the measure of
spread.


Fig 4.2 illustrated three distributions with mean value of zero and different values of the s.d.
Figure 5.1 shows that the area under the distribution between μ±σ is 0.683, which means that there
is a 68.3% probability of any one observation lying within the limits ±σ from the expectation
value. Conversely it also means that if a standard error is quoted then what it really means is
that there is 68.3% confidence that the true value lies within one error of the quoted value. It
also means that for any graph with error bars corresponding to one s.d., only 68.3% of the
data points will overlap the true line. The corresponding percentages for ±2σ, ±3σ and ±4σ
are 95.45%, 99.73% and 99.99%, respectively.
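These confidence percentages follow from integrating equation 5-6; a minimal check using the error function:

```python
import math

def coverage(k):
    """Probability that a Gaussian variable lies within +/- k standard
    deviations of its mean: integrating eq 5-6 gives erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3, 4):
    print(f"+/-{k} sigma: {100 * coverage(k):.2f}%")
# -> 68.27%, 95.45%, 99.73%, 99.99%
```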

It can be seen from equation 5-6 that the width of the Gaussian distribution as described by
the standard deviation σ occurs where the height of the curve has dropped to e^{−1/2} of its
peak value. The width of the distribution can also be characterised by its full-width at half
maximum (FWHM), Γ. This is also often called the half-width and is defined as the width of
the distribution between the values of x where the height of the distribution is half its
maximum value. From equation 5-6 it can be shown that:

    Γ = 2.354σ    (5-7)

The relation between these parameters is also shown in Figure 5.1.
Figure 5.1: Gaussian distribution showing the area P(x)Δx under an element of the
distribution between x and x+Δx, and the area within μ±σ; in this case μ = 0 and σ = 10. The
FWHM Γ is also indicated.

Figure 5.2 shows the mathematical connection between the Gaussian distribution and some
other statistical distributions that are commonly encountered in physics. The mean and spread
(standard deviation or half-width) of the distributions are also described. They will be
discussed in the laboratory sessions and are referred to specifically in some of the
Introductory Physics Experiments. Further details are discussed by Bevington [2].

Figure 5.2: Schematic diagram showing the relationship between some of the probability
distributions discussed in the following sections: the binomial distribution tends to the
Poisson distribution as n → ∞ with μ = np held fixed, and to the Gaussian distribution for
np >> 1; e.g. the Poisson distribution looks the same as the Gaussian distribution in the limit
of large μ.




5.4 Discrete probability functions

Variables can be discrete as well as continuous, and in these cases P(x) represents a
probability not a probability density. This changes the way we calculate normalisation,
expectation values, and expectation variances. The normalisation looks similar to equation
5-2, but the integral is replaced by a sum:

    Σ_{x=0}^{n} P(x) = 1    (5-8)

where there are n different possible results for x. Similarly the expectation value is given by:

    <x> = Σ_{x=0}^{n} x P(x) = μ    (5-9)

and the expectation variance is:

    σ² = <(x−μ)²> = Σ_{x=0}^{n} (x−μ)² P(x)    (5-10)

5.5 Binomial Distribution

If in a trial involving n elements the probability of a successful event is p and of a failure q
(so that p + q = 1) then the probability of x successes is

    P(x; n, p) = (n! / (x!(n−x)!)) p^x q^{n−x}    (5-11)

Note that this probability distribution is asymmetric unless p = q = 1/2.

As x varies from 0 to n the expressions for P(x; n, p) given by (5-11) are in fact the successive
terms in the binomial expansion of (p+q)^n, and since p+q = 1 the distribution is normalised:

    Σ_{x=0}^{n} P(x; n, p) = (p+q)^n = 1    (5-12)

The expectation value is given by:

    <x> = Σ_{x=0}^{n} x P(x; n, p) = np    (5-13)

A measure of the spread of this distribution about the mean value is given by the standard
deviation as described previously. In this case:

    σ = √(npq)    (5-14)

Example: What is the probability of throwing a 6 five times in ten throws of the dice?

Ten throws of the dice means that n = 10, p = the probability of throwing a six = 1/6, and q = the
probability of failing to throw a six = 5/6. Putting these numbers into equation (5-11) gives
the probability distribution shown in Figure 5.3, below. By calculation and from the figure
the probability of throwing a 6 five times in ten throws of the dice is 0.013.
Figure 5.3: Binomial probability distribution for throwing a number x of 6's from ten
throws (n = 10, p = 1/6, q = 5/6), calculated using equation 5-11.
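The quoted value of 0.013 can be reproduced directly from equation 5-11:

```python
from math import comb

def binomial_p(x, n, p):
    """Equation 5-11: probability of exactly x successes in n trials."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Probability of throwing a 6 exactly five times in ten throws
p5 = binomial_p(5, 10, 1 / 6)   # -> 0.013 to two significant figures
```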


5.6 Poisson Distribution

This is appropriate for the statistics of random events, such as lightning strikes etc., where the
expectation number of events, μ, is known. The probability of having x events is given by:

    P(x; μ) = (μ^x e^{−μ}) / x!    (5-15)

The expectation standard deviation is given by:

    σ = √μ    (5-16)

This distribution represents an approximation to the binomial distribution for the special case
where the average number of successes is comparatively rare, very much less than the
possible number, i.e. μ << n because p << 1. In this situation the very large number of
possible events makes the binomial distribution impossible to calculate.

It may also be the case that neither n nor p is known. However, the average number of events
μ, or its estimate m, may be known. The Poisson distribution can be derived from the binomial
distribution by making p → 0 and n → ∞ in such a way that the expectation value μ = np
stays finite. It then follows from equation (5-11) that the Poisson distribution is also discrete
and asymmetric, but the asymmetry becomes less apparent with increasing expectation value
μ. This is shown by the two distributions in Figure 5.4 with different values of μ.
Figure 5.4: Two Poisson distributions, with μ = 1.7 and μ = 10. The asymmetry decreases as
the mean increases. A continuous curve is shown although the function is only defined at the
integer values shown by the dots.

For large μ the Poisson distribution closely approximates the Gaussian distribution described
in section 5.3. The mean value is given by:

    μ = np    (5-17)

One very important aspect to note here is that the ratio of the standard deviation to the mean
is:

    σ/μ = 1/√μ    (5-18)

This means that the percentage error in the mean decreases as the mean increases - a useful
fact to keep in mind for experiments where the Poisson distribution applies.

Example: Radioactive counting.

Suppose we have 0.5 mg of 238U, which contains about 1.3x10^18 nuclei. This number is n.
These undergo α-decay with a half-life of about 4.5x10^9 years, which means that the
probability p of decay of one nucleus in one second is about 5x10^-18. Thus in 0.5 mg the mean
value μ of the number of counts in a one second interval is μ = np = 6.5. P(x; 6.5) is the
probability of observing x counts in a one second interval.
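A sketch of this counting example using equation 5-15 (truncating the sum at a safely large x):

```python
from math import exp, factorial

def poisson_p(x, mu):
    """Equation 5-15: probability of observing x events when mu are expected."""
    return mu ** x * exp(-mu) / factorial(x)

mu = 6.5   # mean number of counts per second in the 238U example
probs = {x: poisson_p(x, mu) for x in range(30)}

# Recover the mean and s.d. from the distribution (eqs 5-9, 5-10, 5-16)
mean = sum(x * p for x, p in probs.items())
sd = sum((x - mean) ** 2 * p for x, p in probs.items()) ** 0.5
# mean -> 6.5 and sd -> sqrt(6.5), so sd/mean = 1/sqrt(6.5) as in eq 5-18
```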

6. Propagation of Errors

6.1 Mathematical Background

In this section we set out the general rules for determining the error or precision of a derived
quantity in terms of the errors of each of the directly measured properties. This is known as
the propagation of errors.

In the last two chapters we have seen how best estimates of the mean and the standard error
in the mean can be calculated and the types of distributions that are frequently encountered.
However, in experimental science the best values of each of several direct measurements of
different quantities are often used to derive the value of another property. For example, the
volume of a box is found by multiplying the lengths of its sides while a velocity can be
derived from a direct measurement of a distance and of a time. These calculations involve
simple multiplication and division. If our best estimates for the dimensions of the box were
that it had a width w_0, a height h_0, and a depth d_0 then the best estimate of the volume of the
box is just:

    V = w_0 h_0 d_0    (6-1)

If the errors in the sides were Δw, Δh and Δd, respectively, then we could estimate the error in
volume by expanding the expression for V about the point (w_0, h_0, d_0) in a Taylor series. This
gives the change in value ΔV as:

    ΔV = (∂V/∂w)|_{h_0,d_0} Δw + (∂V/∂h)|_{w_0,d_0} Δh + (∂V/∂d)|_{w_0,h_0} Δd + ...    (6-2)

where we have neglected higher order terms. The partial differential term:

    (∂V/∂w)|_{h_0,d_0}

(note the use of ∂/∂x instead of d/dx) means differentiate V with respect to w keeping h and d
constant with values of h_0 and d_0, respectively:

    (∂V/∂w)|_{h_0,d_0} = d(w h_0 d_0)/dw = h_0 d_0

Substituting the appropriate derivatives into equation 6-2 gives:

    ΔV ≈ h_0 d_0 Δw + w_0 d_0 Δh + w_0 h_0 Δd    (6-3)

Equation 6-3 gives the change in volume of the box for small changes in the lengths of each
dimension. You can see that it is correct, because it is just the area of each face times the
extra thickness perpendicular to the face. In writing equation 6-2 we have assumed that the
changes are small. If w, h and d all change there are some cross-terms missing, but they are
negligible, as can be seen by calculating the change in volume exactly:

    V + ΔV = (w_0 + Δw)(h_0 + Δh)(d_0 + Δd)
           = w_0 h_0 d_0 + (h_0 d_0 Δw + w_0 d_0 Δh + w_0 h_0 Δd)
             + [d_0 Δw Δh + h_0 Δw Δd + w_0 Δh Δd + Δw Δh Δd]

where the round bracket contains the first-order terms which we keep in equation 6-3 and the
square bracket contains the second and third order terms we neglected.

The above equations discuss small changes in general, which can be positive or negative. In
the case of an error, the exact magnitude and sign is not known, and if we are interested in the
likely error in the volume of the box due to small uncertainties in the lengths we should not
simply add all three terms in equation 6-3. This is because it is actually quite unlikely that all
three lengths will have a positive error of one standard deviation. In all probability one
quantity might have a positive error, one a very small error, and the other a negative error,
and this would tend to cancel out a bit, giving a smaller error in V than 6-3 would suggest. In
fact, it can be shown (quite simply, ref [2]) that the error terms should be squared and added
then square rooted, rather than simply being added. For the example of the box:

    s_V ≈ √[ (h_0 d_0 s_w)² + (w_0 d_0 s_h)² + (w_0 h_0 s_d)² ]    (6-4)

where we use the notation s_V to mean the standard error in the mean value of V, etc.

In general, the derived quantity u is related to some independent directly measured
quantities (x, y, z, ...) by a functional relationship:

    u = f(x, y, z, ...)    (6-5)

where the function f may be additive, multiplicative, exponential or some other combination.
The function f is assumed to be continuous and differentiable.

We also assume that the probability distributions for x, y, z etc. are well behaved so that the
expectation or mean value of the observed quantity u is defined by:

    ū = f(x̄, ȳ, z̄, ...)    (6-6)

where the bar is used to signify the expectation or mean value. The expectation error in the
mean value of u is given by:

    s_u² ≈ (∂u/∂x s_x)² + (∂u/∂y s_y)² + (∂u/∂z s_z)² + ...    (6-7)

where each partial derivative is evaluated at the mean values (x̄, ȳ, z̄, ...).

In other words, differentiate u with respect to each of the variables that contain significant
error and evaluate the differentials at the mean values, multiply each differential by its
corresponding error, square each error term, add them up and take the square root.

Note that it is only necessary to differentiate w.r.t. variables with significant error. If the
error in z is insignificant, the z term in 6-7 will also be insignificant, and therefore may be
neglected.

Sometimes you may come across a function that is very long and tedious to differentiate. In
this case it may be quicker to calculate the terms in equation 6-7 from the following:

    (∂u/∂x)|_{x̄,ȳ,z̄} s_x ≈ f(x̄ + s_x, ȳ, z̄, ...) − f(x̄, ȳ, z̄, ...)    (6-8)

which is the change in u due to a change in x of its error. Do the same for each of the other
terms which have an error, and then substitute these terms into equation 6-7, i.e. square and
add the changes in u due to each variable, then square-root. Do NOT simply find the change
in u when all the variables increase by their error all at once.

You should note the assumptions made in deriving equation 6-7 (e.g. see Taylor [4]). In
particular it assumes that the fluctuations in x and y are small and uncorrelated.
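Equations 6-7 and 6-8 together suggest a simple numerical recipe, sketched below (the function and numbers are invented; for small errors the finite-difference step approximates each derivative term):

```python
import math

def propagate(f, means, errors):
    """Equation 6-7 evaluated numerically: approximate each term
    (du/dx_i) * s_i by the change in u when variable i is shifted by
    its error (equation 6-8), then square, sum and square-root."""
    u0 = f(*means)
    total = 0.0
    for i, s in enumerate(errors):
        if s == 0:
            continue            # variables with negligible error drop out
        shifted = list(means)
        shifted[i] += s
        total += (f(*shifted) - u0) ** 2
    return math.sqrt(total)

# Volume of the box, V = w * h * d (equation 6-1)
s_V = propagate(lambda w, h, d: w * h * d,
                means=(2.0, 3.0, 4.0), errors=(0.02, 0.03, 0.04))
# Compare with equation 6-4: sqrt((12*0.02)**2 + (8*0.03)**2 + (6*0.04)**2)
```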

6.2 Specific Examples

In the sections that follow a number of examples are given. In each case u is a function of x
and y, while a and b are constants.

6.2.1 Addition and Subtraction

Consider that the quantities x and y are related by:

    u = ax ± by    (6-9)

The best value of the derived quantity is given by:

    ū = ax̄ ± bȳ    (6-10)

If the errors in x and y are uncorrelated then, using equation 6-7, the standard error in the
mean value of u is given by:

    s_u² = a² s_x² + b² s_y²    (6-11)

Notice that the error terms in equation 6-11 are added irrespective of whether there is a plus
or minus sign in 6-10. It is also important to note that it is the absolute errors in x and y
which are relevant here and not the fractional or percentage errors.

Example: Evaluate z = 3x ± y for x = 0.8 ± 0.1 and y = 3.0 ± 0.3.

For the sum the best estimate of z is 2.4 + 3.0 = 5.4. For the difference the best estimate of z
is 2.4 − 3.0 = −0.6.

If the errors in x, and in y, are uncorrelated then the standard error in the mean in both cases
is given by:

    s_z = √[ 9(0.1)² + 1(0.3)² ] = 0.424

So the answer is 5.4 ± 0.4 for the sum and −0.6 ± 0.4 for the difference.

Notice how important the errors can become when subtracting two quantities, and also that
the result would differ if the measured quantities were correlated.
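The worked example as a quick numerical check (equation 6-11 with a = 3, b = 1):

```python
import math

x, s_x = 0.8, 0.1
y, s_y = 3.0, 0.3

# Equation 6-11: absolute errors add in quadrature, weighted by a and b
s_z = math.sqrt(3 ** 2 * s_x ** 2 + 1 ** 2 * s_y ** 2)   # -> 0.424

z_sum = 3 * x + y        # 5.4, quoted as 5.4 +/- 0.4
z_diff = 3 * x - y       # -0.6, quoted as -0.6 +/- 0.4
```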


6.2.2 Multiplication and Division

For multiplication let the functional relationship have the form shown below:

    u = axy    (6-12)

Then the best value of the derived quantity is:

    ū = a x̄ ȳ    (6-13)

For division let the functional relationship have the form shown below:

    u = ax/y    (6-14)

Then the best value of the derived quantity is:

    ū = a x̄/ȳ    (6-15)

For the product, putting the differentials into equation 6-7 gives:

    s_u² = (a ȳ s_x)² + (a x̄ s_y)²    (6-16)

Now dividing through by the best estimate of u gives:

    (s_u/ū)² = (a ȳ s_x)²/(a x̄ ȳ)² + (a x̄ s_y)²/(a x̄ ȳ)²

    (s_u/ū)² = (s_x/x̄)² + (s_y/ȳ)²    (6-17)

For the quotient, putting the differentials into equation 6-7 gives:

    s_u² = (a s_x/ȳ)² + (a x̄ s_y/ȳ²)²    (6-18)

Now dividing through by the best estimate of u gives:

    (s_u/ū)² = (a s_x/ȳ)²/(a x̄/ȳ)² + (a x̄ s_y/ȳ²)²/(a x̄/ȳ)²

    (s_u/ū)² = (s_x/x̄)² + (s_y/ȳ)²    (6-19)

From equations 6-17 and 6-19 it is clear that the fractional standard error in the mean is the
same in both cases, irrespective of whether u involves products or quotients.

Thus in the case of a product or quotient the fractional standard error in the mean depends on
the fractional standard errors of the directly measured quantities.

Example: Evaluate z = 3x/y for x = 0.8 ± 0.1 and y = 3.0 ± 0.3.

The best estimate of z is 2.4/3.0 = 0.8. The fractional errors in x and y are 0.125 and 0.10,
respectively. For the standard error in the mean:

    s_z/z̄ = √[ (0.1/0.8)² + (0.3/3.0)² ] = 0.16

So the answer is 0.8 ± 16%, i.e. the error is 16% of the best estimate of z. Writing it as an
absolute error gives 0.80 ± 0.13.

Notice the difference between quoting the result with a fractional error and an absolute error.
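The same example numerically (equation 6-19):

```python
import math

x, s_x = 0.8, 0.1
y, s_y = 3.0, 0.3

z = 3 * x / y                                        # 0.8
frac = math.sqrt((s_x / x) ** 2 + (s_y / y) ** 2)    # fractional error -> 0.16
s_z = z * frac                                       # absolute error -> 0.13
```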

6.2.3 Power laws

Suppose u varies with a power law, where the power may be positive, negative (for
reciprocals) or fractional (for roots):

    u = a x^b    (6-20)

Then the best value of the derived quantity is:

    ū = a x̄^b    (6-21)

Putting the differential into equation 6-7 gives:

    s_u = a b x̄^{b−1} s_x    (6-22)

Now dividing through by the best estimate of u gives:

    s_u/ū = (a b x̄^{b−1} s_x)/(a x̄^b)

    s_u/ū = b (s_x/x̄)    (6-23)

So the fractional error is just multiplied by the power.

6.2.4 Logarithms and Exponentials

If u is found by taking the logarithm of x, i.e.:

    u = ln(ax)    (6-24)

then ∂u/∂x = 1/x, so that:

    s_u = s_x/x̄    (6-25)

Alternatively, if u is related to x by:

    u = exp(bx)    (6-26)

then ∂u/∂x = b exp(bx) = bu, so that:

    s_u/ū = b s_x    (6-27)

Example: Evaluate z = ln(2x) and z = exp(2x) when x = 1.0 ± 0.1.

The best estimate of z in the two cases is 0.693 and 7.389, respectively. For the first example,
using equation 6-25, the error in the mean is 0.1, giving the answer 0.7 ± 0.1. In the second
case, using equation 6-27, the fractional error in the mean is 0.2, giving the answer 7.4 ± 1.5.
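The logarithm/exponential example as a numerical check (equations 6-25 and 6-27):

```python
import math

x, s_x = 1.0, 0.1

# z = ln(2x): the absolute error in z equals the fractional error in x
z1 = math.log(2 * x)   # 0.693
s_z1 = s_x / x         # 0.1, so quote 0.7 +/- 0.1

# z = exp(2x): the fractional error in z is b * s_x with b = 2
z2 = math.exp(2 * x)   # 7.389
s_z2 = z2 * 2 * s_x    # 1.48, so quote 7.4 +/- 1.5
```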

7. Representing Data

7.1 Drawings, Photographs and Tables

Graphs, line drawings and tables are all concise and easily assimilated ways of presenting
information if they are properly prepared; they should be used in preference to burying the
information in the text.

7.1.1 Photographs

Photographs should not be used unless there is no other way of conveying the information;
line drawings are often to be preferred. For example a photograph of a piece of apparatus can
be confusing or useless because of the amount of irrelevant detail it contains, whereas a line
drawing can and should be made helpful by showing, schematically, only the essential
features.

Graphs, line drawings and photographs are all classified as figures. The figures should be
numbered serially in Arabic numerals in the order in which they are mentioned in the text and
should be referred to by their figure numbers; for example, "A graph of stress against strain is
shown in Figure 1." Tables should be serially numbered in the same way as figures, and are
referred to as Table 1, Table 2, and so on. Tables and figures should be numbered
independently of one another.

Each figure and each table should have its own caption. The purpose of the caption is to add
enough information to what is in the figure or table itself to enable the reader to get the whole
message you want the figure or table to convey without having to search through the text of
the report. The caption starts with the figure or table number. For example, the caption to a
graph might read: "Figure 4: Closed-loop voltage gain in dB of the amplifier shown in Figure
3 plotted against frequency in Hz on a logarithmic scale, for different amounts of feedback.
The value of the feedback resistor R is shown against each curve."

7.1.2 Tables

Tables should not be used to present data which are included in graphs, unless the data are to
a higher precision than can be shown in a graph. In that case you should consider whether the
graph should be omitted.

However, a table is a useful way of grouping data together where they can easily be found
and, if relevant, comparisons between them can easily be made.

The same rules regarding the displaying of units and the avoidance of superfluous zeros
apply to tables just as they do to graphs. The precision of a quantity, which on a graph would
be shown by an error bar, is shown after the number either with a ± sign or by enclosing the
error in brackets. An example of part of a table is shown in Table 7.1.

    Sample Number    Length (m)    Length (m)
    1                1.1 ± 0.1     1.1(1)
    2                3.8 ± 0.2     3.8(2)

Table 7.1 Fictional data: the caption should explain in a more-or-less self-contained way the
meaning of the columns and rows and where the errors and uncertainties come from.

Like a graph, a table must have a caption. Apart from any other necessary information it
gives the caption should state what the error limits represent, for example standard deviation,
or estimated maximum error or whatever.

7.2 Graphs

It is usual to plot the independent variable (that is the one you set to a chosen sequence of values
in the experiment) along the horizontal axis (the abscissa) and the dependent variable along the
vertical axis (the ordinate).

7.2.1 Error Bars

It often happens that the value to be plotted represents the mean of several repeated
measurements. The extent of the spread of these measurements about the mean can be
expressed numerically in various ways, for example by calculating the standard deviation,
and can be represented on the graph by an error bar drawn through the plotted point and of
length equivalent to the spread. An example of a graph with error bars was given on page 11.
It is important to inform the reader about the precision of your measurements, and the use of
error bars on graphs is a convenient way to do so. What the error bar represents must be
explained in the caption. For example it may represent the standard deviation (calculated by
taking multiple measurements for one of the data points) or it may be your best estimate of
the precision likely to have been obtained.

You will find that many papers published in the scientific press contain graphs without error
bars. There may or may not be good reasons for their omission but you are not to follow their
example: as an undergraduate you must regard yourself as an apprentice physicist and must
put error bars in your graphs. If you think you have a good reason for not putting error bars
on your graphs this should be stated in the graph's caption.

7.2.2 Drawing the Line

Most, although not all graphs have in them not only plotted points representing the
measurements but also a line or lines (either straight or curved) relating to the plotted points
in some way. Such a line may be based on theory, it may be used to indicate an apparent
mathematical relationship between the quantities for which no theoretical basis is known to
the author, or it may simply be a smooth curve drawn as a guide to the eye. In any case the
basis for drawing the line should be stated, preferably in the caption.

7.2.3 More than One Line on the Same Graph

A convenient way to make comparisons is to plot two or more lines or curves in the same
graph. In such a case the lines must be distinguished in some way, for example by using
different plotting symbols, by using full ______ , dashed - - - - , or dotted lines ....... and/or by
labelling the lines.

You should not use different colours for this purpose: scientific journals require black and
white, and colours are not distinguished by the usual photocopiers. Make sure you can
distinguish between different lines and different data points if they come close or cross. Do
not obscure the lines in the graph by writing on the graph itself the explanation of what they
represent. Put this information in the caption, and keep the labelling on the graph to a
minimum. An example of a caption where this is done has already been given: part of the
caption reads "The value of feedback resistor R is shown against each curve." Then the label
in the graph itself is just the resistance value. If for some reason this technique cannot be used
then the curves can simply be labelled A, B, C etc. and the significance of the labels
explained in the caption.

Label the axes to show what quantities are plotted and what units are being used. Units with
any prefixes attached to them should be enclosed in curved brackets. This is a convention
employed by many but not all scientific journals.

7.2.4 Linear Scales
(i)   Length (m)        0.000000   0.000005   0.000010   0.000015
(ii)  Length (µm)       0          5          10         15
(iii) Length (10⁻⁵ m)   0.0        0.5        1.0        1.5

Figure 7.1 Three linear scales showing different prefixes for the same range.

A linear scale is one where equal increments of a quantity are represented by equal
increments of distance along the scale. Logarithmic scales are discussed in a later section. For
a linear scale, mark graduations at equal spacing along the axis. Insert numerical values at
regular intervals which are spaced widely enough to avoid the axis becoming cluttered with
numbers. It is helpful to the reader (and it may help you when plotting) if a scale is chosen
and marked so as to make numerical interpolation easy.

Superfluous zeros in the axis markings should be eliminated by changing the unit in which
the value is expressed to a suitable multiple or submultiple of the unit. This should be done
by using the appropriate SI letter prefix. If, for some reason, none of the SI letter prefixes is
considered suitable, a numerical prefix can be employed; this is usually a power of ten. A
simple example of three equivalent representations is shown in Figure 7.1.

These three scales all convey exactly the same information. For example, 0.000015 m = 15 µm
(= 15×10⁻⁶ m) = 1.5×10⁻⁵ m. However, (ii) and (iii) convey it more quickly than (i) and
therefore are to be preferred. Method (ii) uses a standard SI prefix (µ) and is therefore to be
preferred to (iii) unless there is some positive reason for using (iii).

Note that the unit, including prefix where used, is placed in curved brackets. This is
particularly useful where numerical prefixes are used, as it emphasises that the prefix belongs
to the unit. Avoid ambiguities about whether a scaling factor is attached to the unit or to the
measure; thus "Length (10⁻⁵ m)" could also correctly be written "Length × 10⁵ (m)".

Note also that if the lengths in the above example had run from 1.000000 m to 1.000015 m
instead of from 0.000000 m to 0.000015 m then, assuming that the precision of the
measurements justifies all these digits being given, the zeros are not superfluous and
cannot be removed simply by changing the size of the unit.

Where the quantity to be plotted is dimensionless the option of using multiples or
submultiples of the unit is not available as there is no unit. In such a case zeros which
indicate the decimal point position can be removed by plotting a stated multiple or
submultiple of the quantity; for example:
Birefringence × 10⁴     0    1    2    3    4    5

Figure 7.2 Scale for a dimensionless quantity.

This means that the numbers marked on the axis represent birefringence values of 0×10⁻⁴,
1×10⁻⁴, 2×10⁻⁴, 3×10⁻⁴, 4×10⁻⁴ and 5×10⁻⁴.

7.2.5 Logarithmic Scales

A logarithmic scale is one on which the logarithm of the quantity is plotted; thus equal
increments of distance along the scale correspond to equal increments in the logarithm of the
quantity. It is convenient here to consider separately scales where natural (Napierian)
logarithms, to base e and denoted ln, are used, and scales where the logarithms are
to base 10, denoted log.

7.2.5.1 Natural logarithmic (ln) Scales

Suppose ln p is to be plotted on one axis of a graph and let p represent, for example,
pressure measured in pascals (Pa). It is usual to mark on the scale the values of ln p;
these values are dimensionless and have no units. However, the unit in which p is measured
must be shown. Unfortunately there is no generally agreed way of doing this; for the purpose
of your laboratory reports you should do as follows:

ln[p (Pa)]     3.0    3.2    3.4    3.6    3.8    4.0

Figure 7.3: Natural logarithmic scale.

The inner brackets, round Pa, show that Pa is the unit, and the outer brackets, round p (Pa),
show that Pa belongs to p and not to the logarithm. The mathematical propriety of taking the
logarithm of a dimensional quantity is not discussed here!

7.2.5.2 Log Scales

Two methods are in common use for logarithms to base 10. Suppose that frequency f
measured in hertz (Hz) is to be plotted on a log scale from 0.1 Hz to 1000 Hz. The first
method is the same as for logarithms to base e. That is, the values of the logarithm are
marked on the axis as shown in Figure 7.4(i).

In the second method, the axis is labelled f (Hz) and marked with values of f in Hz rather than
with values of the logarithms of f. Thus equal increments of distance along the axis do not
represent equal increments of f: they represent equal increments in log f. It should be evident
from the frequency values shown that it is log f which is plotted rather than f. But you should
always state in the caption that it is log f which is plotted.

In your laboratory reports you may use specially printed log graph paper. Such paper is
available with log scales on both axes and with a log scale on one axis and a linear scale on
the other, and with one or more than one decade on the log axis. The log axis is marked off to
represent equal increments of the quantity whose log is being plotted - hence the markings
are not equally spaced, as indicated in Figure 7.4 (iii) which gives an expanded version of
two decades of the axis above. You would not insert all these numbers on your graph; they
are given here to emphasise the non-linearity of the scale and the manner in which the scale
repeats from one decade to the next with corresponding values multiplied by 10.
(i)   log[f (Hz)]   −1    0    1    2    3
(ii)  f (Hz)        10⁻¹  10⁰  10¹  10²  10³
(iii) f (Hz)        1  2  3  4  5  6  7  8  9  10  20  30  40  50  60  70  80  90  100


Figure 7.4 Log scale where in (i) the log of f has been plotted on a linear
scale while in (ii) and (iii) f has been plotted on a log scale.

8. Fitting Data using the Least Squares Technique

8.1 Introduction

It is often helpful to test the validity of a theoretical model by manipulating it so that a linear data
plot can be fitted using a straight line with an expression of the form:
    y = mx + c                                                                   (8-1)

This is shown schematically in Figure 8.1.
[Figure: data points with error bars; best straight line AB with intercept c; extreme lines CD and EF.]

Figure 8.1 Linear fit to data with error bars showing best straight line (solid curve). The
extremes of fit consistent with the error bars are shown by the dashed curves.

How do we estimate the uncertainty in the gradient and intercept?

For most laboratory experiments a graphical method will suffice. The best straight line, AB,
is drawn first; try looking along the line to check it. The line has gradient m and intercept c.
Then draw two additional lines, CD and EF, so as to go through the limits of the data and their
experimental errors. These lines give the limits of the gradient and the intercept.

This is obviously a very subjective procedure, but to help make it rather more objective a few
points should be borne in mind:

1. All the experimental data should be given the same weight unless there are special
reasons for rejecting one or more points.

2. If the error bars have been estimated in the recommended way, so that the true value is
twice as likely as not to lie within the range of the error bars, then the best line should go
through two thirds of the error bars and miss the remaining one third. On the same basis the
lines CD and EF should correspond to the extremes of the line AB that still go through about
two thirds of the error bars.


8.2 Best Straight Line Fit: Linear Regression

There is a commonly used procedure for computing the best fit of a straight line to a set of
data points using the principle of least squares. It is a good idea to be aware of this principle
even when drawing the best straight line by eye as described above. If the data points are
assumed to only have significant error in their y values (NB if they have significant error in x
and not y then the axes must be swapped) then the principle is to choose m and c in equation
8-1 so that the mean square deviation is a minimum. The deviation of each point is measured
vertically to the fitted line, and is weighted by the size of its error bar. The sum of the squares
of the deviations is also known as χ²:

    χ² = Σ_{i=1}^{n} [ (y_i − (m x_i + c)) / s_i ]²                              (8-2)

This criterion leads to the formulae for the best estimates of the gradient m and intercept c on
the y-axis and additionally the best estimates of the standard deviations of the gradient and
intercept. The expressions below have been arranged so that they involve various sums which
are straightforward to tabulate. These are given for the special case when the uncertainties in
the y values are all the same and equal to s_y.

    m = (⟨xy⟩ − ⟨x⟩⟨y⟩) / (⟨x²⟩ − ⟨x⟩²)                                          (8-3)

where ⟨...⟩ denotes a mean taken over the n data points.

    c = ⟨y⟩ − m⟨x⟩                                                               (8-4)

The denominator in 8-3 is just the variance in the x data points, and the numerator is the
covariance of the x and y values (see next section). The error in the gradient and intercept are
given by

    s_m² = s_y² / [ n (⟨x²⟩ − ⟨x⟩²) ]                                            (8-5)

    s_c² = ⟨x²⟩ s_m²                                                             (8-6)

You can see from equation 8-5 that in order to obtain a small error in m then the number of
data points should be large, the variance of the x values should be large (i.e. there should be a
large spread in x), and the error bars should be small. From Eqn 8-6, for a small error in c, the
error in the gradient should be small but in addition the root mean square value of x should
also be small, i.e. there should be plenty of values near x=0.

Many calculators and almost all good graphical software will do this type of data fitting.
Remember that in equations 8-3 to 8-6 the error in all the y data is assumed to be the same.
When this is not the case then the terms in the above equations have to be weighted. Further
details are given in a number of texts [2,3].
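As a check on your tabulated sums, equations 8-3 to 8-6 are easily coded. The short Python sketch below is illustrative only (Python is not prescribed by the handbook and the function name is our own); it assumes equal uncertainties s_y in all the y values:

```python
import math

def straight_line_fit(x, y, s_y):
    """Least-squares fit of y = m*x + c with equal y-uncertainties s_y (Eqns 8-3 to 8-6)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    mean_xy = sum(xi * yi for xi, yi in zip(x, y)) / n
    mean_xx = sum(xi * xi for xi in x) / n
    var_x = mean_xx - mean_x ** 2              # variance of the x values (denominator of 8-3)
    m = (mean_xy - mean_x * mean_y) / var_x    # gradient, Eqn 8-3
    c = mean_y - m * mean_x                    # intercept, Eqn 8-4
    s_m = math.sqrt(s_y ** 2 / (n * var_x))    # error in gradient, Eqn 8-5
    s_c = math.sqrt(mean_xx) * s_m             # error in intercept, Eqn 8-6
    return m, c, s_m, s_c

# Perfectly linear data, y = 2x + 1, with assumed error bars of 0.1 on each y value
m, c, s_m, s_c = straight_line_fit([0.0, 1.0, 2.0, 3.0, 4.0],
                                   [1.0, 3.0, 5.0, 7.0, 9.0], s_y=0.1)
print(m, c, s_m, s_c)
```

For perfectly linear data the fit recovers the exact gradient and intercept, and the quoted errors come only from the assumed error bars.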




8.3 Correlation

In the above section we assumed that there was a linear relationship between the x and y data
pairs. However, an important experimental question often asked is whether there is a
relationship at all between two measured quantities. A parameter that provides a criterion to
assess this is the correlation coefficient, given by [3]:

    r = Σ_i (x_i − ⟨x⟩)(y_i − ⟨y⟩) / [ Σ_i (x_i − ⟨x⟩)² · Σ_i (y_i − ⟨y⟩)² ]^(1/2)
      = (⟨xy⟩ − ⟨x⟩⟨y⟩) / [ (⟨x²⟩ − ⟨x⟩²)(⟨y²⟩ − ⟨y⟩²) ]^(1/2)                   (8-7)

where r² can take any value from 0 to 1. The numerator is the covariance of x and y and the
denominator is the product of the standard deviations. When r is 0 there is no correlation,
while for perfect correlation r² is 1. At what value of |r| should doubts about any correlation
change to confidence? One possible rule of thumb is that the error in the gradient should be
less than 33%, i.e. s_m < m/3. It can be shown that:

    (s_m / m)² = (1 − r²) / [ (n − 2) r² ]                                       (8-8)

Hence our rule of thumb gives:

    |r| > 3 / (n + 7)^(1/2)                                                      (8-9)

and the minimum value of |r| that satisfies this condition decreases as the number of
observations, n, increases.
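The correlation coefficient of equation 8-7 and the rule of thumb of equation 8-9 can be evaluated directly. A minimal Python sketch (the function names are illustrative, not part of the handbook method):

```python
import math

def correlation(x, y):
    """Correlation coefficient r (Eqn 8-7): covariance over the product of the s.d.s."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

def gradient_well_determined(r, n):
    """Rule of thumb (Eqn 8-9): |r| > 3/sqrt(n + 7) implies s_m < m/3."""
    return abs(r) > 3.0 / math.sqrt(n + 7)

# Data scattered roughly about y = 2x
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
r = correlation(x, y)
print(r, gradient_well_determined(r, len(x)))
```

For n = 5 the threshold is 3/√12 ≈ 0.87, so only quite strong correlations pass with so few points, as the text notes.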

8.4 The χ² Distribution: Testing the Goodness of Fit

The minimum value of χ² obtained during fitting has a probability distribution that can be
calculated assuming Gaussian statistics for each data point. The distribution has expectation
value ν (the number of degrees of freedom) and variance 2ν. This can be used to test whether
our fit is likely to be the true function if we are confident of the data and the error values
we gave. If χ² is within ν ± √(2ν) then we can be reasonably confident that our fit is good.
Conversely, if we don't know the size of the error bars, we can assume that the χ² value is
equal to ν and work backwards to find the error in our y-measurements.
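As a sketch of this test, the following Python fragment evaluates χ² from equation 8-2 and checks whether it lies within ν ± √(2ν). The function names and data are illustrative only:

```python
import math

def chi_squared(x, y, s, m, c):
    """Chi-squared of a straight-line fit to data with individual error bars s_i (Eqn 8-2)."""
    return sum(((yi - (m * xi + c)) / si) ** 2 for xi, yi, si in zip(x, y, s))

def fit_is_reasonable(chi2, n_points, n_params=2):
    """True if chi2 lies within nu +/- sqrt(2*nu), where nu = n_points - n_params."""
    nu = n_points - n_params
    return abs(chi2 - nu) <= math.sqrt(2 * nu)

# Data scattered about y = 2x + 1 with error bars of 0.1 on each point
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.05, 2.95, 5.10, 6.90, 9.05]
s = [0.1] * 5
chi2 = chi_squared(x, y, s, m=2.0, c=1.0)
print(chi2)   # close to nu = 3 for 5 points and 2 fitted parameters
```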

The value of χ² which corresponds to the least-squares straight-line fit is related to the
correlation coefficient and the spread in y values:

    χ² = [ var(y) / s_y² ] (1 − r²)                                              (8-10)

It can be seen that to have a very good (tight) fit the correlation must be very good (r near to
1), the error bars should be small, and there should be a large range of y values.



8.5 Final Remarks

In this section we have only been able to discuss briefly linear least squares fitting of
experimental data. The details of fitting data where the error bars are not all the same size
have only been mentioned in passing, and we have not discussed how the data should be analysed
when both the x and y data have significant errors associated with their measurement. Even
more important is the fact that not all experimental data can be fitted by a straight line. Non-
linear least squares fitting is also possible. All these subjects are discussed in more detail by
Bevington and his book provides a route to a more detailed analysis of this subject [2].






9. The Laboratory Report

9.1 General Comments

In the first semester you will be required to write up some experiments as formal laboratory
reports. The following notes are intended to help you in structuring your report. Whenever
possible, it is preferable to prepare your report using a word processor. A report should
consist of no more than 10 pages in total, including title page (with date experiment started),
brief abstract, text, tables, diagrams and references. The text should follow standard format,
i.e. aims of experiment, brief theory, description of apparatus and method (to the extent that
this differs from the laboratory script, which must accompany the report), results, discussion
and conclusions. The report should be written so that someone who is familiar with the
Physics course, but not with the particular experiment, can understand it. Try to carry the
reader with you so that at all stages of the report they will understand the relevance of what
they are reading. Be grammatical, concise and lucid. Poor spelling should be avoided; have a
good dictionary available during the writing of the report, and ensure you spell-check
documents thoroughly. Write legibly and strive for a tidy presentation. Avoid abbreviations.
Though some people choose to write reports in the first person it is more usual to write in the
third person.

Writing a report is intended to give you practice and prepare you for a likely future situation
in which you have to report on your work to a section leader or to colleagues, or write a
scientific paper. So keep it concise, and don't waste the reader's time!

The following sections will be found suitable for most of the reports that you will need to
write. If you wish to depart from the suggested arrangement, you may well devise your own
section headings to suit the material; it is however important to impose a good structure on
your report. The Abstract, Introduction, Discussion and Conclusion sections are
essential. Any lengthy material, such as an algebraic derivation or details of a technique that
interrupts the flow of a report may be relegated to an appendix.

Your report should be presented for marking by handing it in at the FEPS student support
office (08AA02). Remember that the academic responsible for marking your report may also
want to see the associated entries in your laboratory notebook.

9.2.1 Plagiarism and Copying

Plagiarism, or copying other people's work, is treated very seriously by the University and
any student found guilty of committing plagiarism will be subject to the penalties set out in
the University's regulations. The guidelines that this Department applies to plagiarism are
explained in the Undergraduate Student handbook, the main features of which are reproduced
below.

As part of a degree programme students are required to submit various types of coursework
for assessment (examples include essays, laboratory reports, computer programs and
dissertations). Whilst researching work students will normally read other people's work in
books, journals, conference papers and lecture notes and therefore students should be aware
that plagiarism occurs in the following cases:-

Reproduction of all or part of the work of any other student or external author;

Inclusion of portions of another text in your own work;

Copying of phrases or sentences, or direct paraphrasing of these;

Copying previously assessed work of your own without the agreement of your
lecturer.

In many cases it is necessary to include quotations, sentences and paragraphs of other
people's work, be it published or unpublished, in order to highlight a particular point. In such
cases, any included text from another source (apart from that containing common knowledge)
must be indicated by quotation marks or indented paragraphs that clearly identify the exact
extent of this borrowed text, together with appropriate references.

Within the context of writing your laboratory report, these guidelines apply in just the same way
as for writing an essay etc. In particular you should realise that your laboratory report must be
written by yourself and in your own words, and should be distinctly different from that of your
laboratory partner. We expect partners to discuss together the analysis and presentation of their
experimental data, but when it comes to writing the laboratory report this should be done
separately. We do not expect to see large elements of the laboratory script reproduced in the
report.

9.2.2 Title, Authors and Affiliation

You should think carefully about the title of your report. Use the title to give the reader some
information about what they are about to read. Remember to include the names of all the authors
and where the work was done. This may seem obvious now, but it will not always be.

9.2.3 Abstract

This should not be much longer than about one hundred words and should mention both the kind
of measurements made and the methods used. It should also summarize the numerical results
obtained, or, if relevant, state the qualitative results. It is most important that the prospective
reader should be able to find out whether the report is of interest to him or her based on this
section alone. This section is independent of the rest of the report and should be written when the
report itself is completed. It should make sense if it is read on its own in complete isolation from
the report itself. It should be concise and informative.

9.2.4 Introduction

This section should set the work in context, both in terms of previous research in the subject
and reasons why the experiment is interesting, or useful or relevant. The purpose of the
experiment and the choice of method should be mentioned. In a research report proper the
introduction would include a mention of previous work that is relevant. It is best to write the
Introduction last when you are quite clear what it is you are introducing. Do not copy
material directly from the lab scripts but do try to be selective.

9.2.5 Theory

Give the necessary theoretical background for the understanding of the experiment. Do not
derive results available in standard texts or derive the theory of any standard instrument. Instead
you should cite the appropriate references. Modifications of standard theory should be discussed
and, in all cases, the expression which is used to derive your final result should be stated and
symbols defined. Again this section is most easily written after you have written the results and
discussion sections as then you will know what parts of the theory are relevant.

9.2.6 Experimental Arrangements and Techniques

Include brief explanations of any diagrams of apparatus - remember that well thought out
diagrams are far superior to lengthy descriptions. Give detailed specifications of apparatus
wherever it is of critical importance. Call all diagrams "Figure 1, 2 . . ", and refer to them as such
in the text, drawing the reader's attention to their existence at the earliest point at which it would
be useful to do so. Provide all figures with a caption.

9.2.7 Procedure

Describe the experimental procedure adopted to obtain the data - but don't include the
obvious or the trivial. Include any precautions which were necessary.

9.2.8 Results

Results should be presented graphically wherever possible, otherwise numerical results
should be arranged in as convenient and concise a form as possible. A brief verbal account,
referring to the graphs and tables, is essential. Don't present results in both graphical and
tabular form unless this is necessary to provide all the information which you wish to convey.
Do not include arithmetic but make sure that the reader knows where the final numerical
results come from.

Since the results are meaningless without some quantitative estimation of the errors involved,
the errors should be included here. Do not reproduce standard error theory but make it clear
on what basis the errors are calculated.

9.2.9 Discussion

Try to make sense of all your data. Indicate where systematic errors may have affected your
results. Provide a convincing argument for your final conclusions. In many cases it is convenient
to discuss the results as you present them in which case these two sections can be combined.
Remember you should decide what to do on the basis of what makes it easier for the reader to
understand, not what makes it easier for you to write!

9.2.10 Conclusions

Briefly summarise those aspects of the results about which you should have convinced the reader
in the previous section. This section may be very short but make it informative. This last section
may be all that a busy scientist actually reads.

9.2.11 Acknowledgements

Most laboratory work is an exercise in co-operation. There are many cases where you rely on
others for help and advice. This section gives you a chance to thank them. It is wise to take the
opportunity otherwise you may not find them so obliging next time.


9.2.12 References

References are an important part of the report. They should guide the reader to additional
background information relevant to the experiment. This may include previous results,
background theory and experimental techniques.

The usual method is to add a number either as a superscript or in the text close to where it is
appropriate. For example:

    The tunnelling of electrons in semiconductors was first reported in the late 1950s.¹
or
    The tunnelling of electrons in semiconductors was first reported in the late 1950s [1].

The reference section of the report would then include an entry:

[1] L. Esaki, Phys. Rev. 109, 603 (1958).

The reference includes the names of the authors, the publication, its volume number, page
number and year of publication. It is also sometimes useful to include the full title of the
article.
Another method of referencing a paper is to put the first author and year of publication in the
text. For example:

The tunnelling of electrons in semiconductors was first reported in the late 1950s[L.
Esaki (1958)].

Using this style the references are listed in the reference section in alphabetical order. So for
example:

Esaki L., 1958, Phys. Rev. 109, 603.
Kitchen C.R., 1982, Early Emission Line Stars (Hilger) p.68.
Smith T., 1987a, J. Phys. D: Appl. Phys. 24, 68.
Smith T., 1987b, Phys. Rev. 123, 168.

Note that the volume number may either be underlined or in bold. The publisher where
appropriate is given in brackets. If an author has published more than one paper in the same
year these are distinguished by adding a letter in alphabetical order after the year.

10. Bibliography

The main books referenced in the text are as follows:

1. L. Kirkup, Experimental Methods: An Introduction to the Analysis and Presentation
of Data, J. Wiley and Sons (1994).

2. P.R. Bevington and D.K. Robinson, Data Reduction and Error Analysis for the Physical
Sciences, 2nd Ed. McGraw-Hill (1992). The first edition with only Bevington as the author is
in the library: 519.286 BEV.

3. N C Barford, Experimental Measurements: Precision, Error and Truth, 2nd Ed. J.
Wiley and Sons (1985).

You may also find the following additional books of use:

E M Pugh & G H Winslow, The Analysis of Physical Measurements, Addison-Wesley,
Massachusetts (1966).

L. Lyons, A Practical Guide to Data Analysis for Physical Science Students, Cambridge
(1992).

R.J. Barlow, Statistics: A Guide to the Use of Statistical Methods in the Physical Sciences,
John Wiley and Sons (1989).

R.A. Day, How to Write and Publish a Scientific Paper, CUP (1989).

J.R. Taylor, An Introduction to Error Analysis, OUP, California (1982).

S.L. Meyer, Data Analysis for Scientists and Engineers, John Wiley and Sons (1975).

L.G. Parrett, Probability and Experimental Errors in Science, John Wiley and Sons (1961).

H.M. Kanare, Writing the Laboratory Notebook, American Chemical Society (1985). In the
library: 415.1 KAN.



Appendix A: Summary formulae for Error Analysis

a) For a sample of n measurements the best estimate of the mean of the parent distribution is:

    m = (1/n) Σ_{i=1}^{n} x_i

The best estimate of the variance of the parent distribution is:

    s² = [1/(n−1)] Σ_{i=1}^{n} (x_i − m)²

The best estimate of the standard error of the mean is:

    s²(m) = s²/n

For further information see Section 4.5
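These three estimates can be computed directly; a minimal Python sketch (the function name is illustrative):

```python
import math

def sample_statistics(xs):
    """Best estimates of the mean, variance and standard error of the mean for a sample."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # note the (n - 1) denominator
    sem = math.sqrt(var / n)                           # standard error of the mean
    return mean, var, sem

mean, var, sem = sample_statistics([9.8, 10.2, 10.0, 9.9, 10.1])
print(mean, var, sem)
```

Note the (n − 1) rather than n in the variance, as in the formula above: one degree of freedom is used up by estimating the mean from the same sample.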


b) The general expression for the propagation of errors is:

    s²(ū) ≈ (∂u/∂x)² s²(x̄) + (∂u/∂y)² s²(ȳ) + ...

where s(ū) is the standard error of the mean ū and the partial derivatives are evaluated
using the mean values of all the parameters concerned. In the following examples u is a
function of x and y while a and b are constants:


    u = ax ± by          s(u) = [ a² s²(x) + b² s²(y) ]^(1/2)

    u = axy              [s(u)/u]² = [s(x)/x]² + [s(y)/y]²

    u = a x/y            [s(u)/u]² = [s(x)/x]² + [s(y)/y]²

    u = ln(ax)           s(u) = s(x)/x

    u = exp(bx)          s(u)/u = b s(x)


For further information see Chapter 6.
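The general expression and the special cases in the table above are consistent, as a short numerical check in Python shows (the values and names here are illustrative, not from the handbook):

```python
import math

def propagate(partials_and_errors):
    """General propagation: s(u)^2 = sum over variables of (du/dx_i)^2 * s(x_i)^2."""
    return math.sqrt(sum((d * s) ** 2 for d, s in partials_and_errors))

# u = a*x*y with a = 2, x = 3 +/- 0.1, y = 4 +/- 0.2 (made-up example values)
a, x, sx, y, sy = 2.0, 3.0, 0.1, 4.0, 0.2
u = a * x * y
s_u_general = propagate([(a * y, sx), (a * x, sy)])            # du/dx = a*y, du/dy = a*x
s_u_fractional = u * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)  # table entry for u = a*x*y
print(s_u_general, s_u_fractional)   # the two routes agree
```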
Appendix B: Key Units and Calculations in Radiation Physics

Prefixes for SI units

factor  prefix  symbol      factor  prefix  symbol
10¹⁸    exa     E           10⁻³    milli   m
10¹⁵    peta    P           10⁻⁶    micro   µ
10¹²    tera    T           10⁻⁹    nano    n
10⁹     giga    G           10⁻¹²   pico    p
10⁶     mega    M           10⁻¹⁵   femto   f
10³     kilo    k           10⁻¹⁸   atto    a

Relationship between SI units and non-SI units

physical quantity   SI unit                        non-SI unit     relationship
activity            becquerel (Bq)                 curie (Ci)      1 Bq = 2.70×10⁻¹¹ Ci = 27.0 pCi
                    1 Bq = 1 disintegration/sec                    1 Ci = 3.7×10¹⁰ Bq = 37 GBq
absorbed dose       gray (Gy)                      rad (rad)       1 Gy = 100 rad
                    1 Gy = 1 J/kg                                  1 rad = 0.01 Gy = 10 mGy
dose equivalent     sievert (Sv)                   rem (rem)       1 Sv = 100 rem
                    1 Sv = 1 J/kg                                  1 rem = 0.01 Sv = 10 mSv
exposure            coulomb/kilogram (C/kg)        roentgen (R)    1 C/kg = 3876 R
                                                                   1 R = 2.58×10⁻⁴ C/kg = 258 µC/kg

Simple Conversion Table

µCi     kBq     |   µCi     MBq
mCi     MBq     |   mCi     GBq
Ci      GBq     |   Ci      TBq
0.1     3.7     |   30      1.11
0.2     7.4     |   40      1.48
0.25    9.25    |   50      1.85
0.3     11.1    |   60      2.22
0.4     14.8    |   70      2.59
0.5     18.5    |   80      2.96
1       37      |   90      3.33
2       74      |   100     3.7
2.5     92.5    |   125     4.625
3       111     |   150     5.55
4       148     |   200     7.4
5       185     |   250     9.25
6       222     |   300     11.1
7       259     |   400     14.8
8       296     |   500     18.5
9       333     |   600     22.2
10      370     |   700     25.9
12      444     |   750     27.75
15      555     |   800     29.6
20      740     |   900     33.3
25      925     |   1000    37

Basic Calculations for Radiation Physics

Radioactive Decay

Classically:                    N = N₀ e^(−λt)
Or simply:                      N = N₀ e^(−(ln 2) t / t½)
Which can be rearranged to:     N = N₀ · 2^(−t / t½)

Where:  N  = quantity at time t₀ + t
        N₀ = quantity at time t₀
        t  = time difference
        t½ = half life
        λ  = decay constant

Example: S092.PH; ¹⁵²Eu; orig. act. 370.0 kBq; act. date 16th June 1981

The activity on 4th September 2006 is:

    Activity = 370.0 / 2^[ (04/09/2006 − 16/06/1981) / 13.51 yr ]

So the activity is 101.51 kBq
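The worked example can be reproduced with a few lines of Python (illustrative only; the elapsed time is approximated as 25.22 years and the ¹⁵²Eu half-life taken as the 13.51 years used above):

```python
def activity(a0_kbq, elapsed_years, half_life_years):
    """Decayed activity from N = N0 * 2^(-t / t_half)."""
    return a0_kbq * 2.0 ** (-elapsed_years / half_life_years)

# 152Eu source: 370.0 kBq on 16 June 1981; 4 September 2006 is roughly 25.22 years later
print(activity(370.0, 25.22, 13.51))   # roughly 101 kBq, as in the example above
```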


Dose Rate

    DR = A · E · p / (7 d²)

Where:  DR = dose rate in µSv h⁻¹
        A  = activity in MBq
        E  = emission energy in MeV
        p  = emission probability
        d  = distance from source in m

Example: 370.0 kBq ⁵⁷Co source; 122 keV (p = 0.847) and 136 keV (p = 0.104)

The dose rate at 1 m is:

    DR = 0.370 × [ (0.122 × 0.847) + (0.136 × 0.104) ] / (7 × 1²)

So the dose rate is 0.006 µSv h⁻¹
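The dose-rate example can likewise be checked in Python (a sketch; the function name is our own):

```python
def dose_rate_usv_per_h(a_mbq, emissions, d_m):
    """DR = A * sum(E * p) / (7 * d^2): A in MBq, E in MeV, d in metres, DR in uSv/h."""
    return a_mbq * sum(e * p for e, p in emissions) / (7.0 * d_m ** 2)

# 370.0 kBq = 0.370 MBq 57Co source; lines at 122 keV (p = 0.847) and 136 keV (p = 0.104)
dr = dose_rate_usv_per_h(0.370, [(0.122, 0.847), (0.136, 0.104)], 1.0)
print(round(dr, 3))   # -> 0.006
```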


Appendix C: GLOSSARY


A
ABSORBED DOSE
Absorbed dose is the amount of energy deposited in any
material by ionizing radiation. It is a measure of energy
absorbed per gram of material. The SI unit of absorbed
dose is the gray. The special unit of absorbed dose is
the rad.
ACCURACY
The degree of agreement between an individual
measurement or average of measurements and the
accepted reference value of the quantity being measured.
See also precision.
ACTIVATION ANALYSIS
A method of chemical analysis (for small traces of
material) based on the detection of characteristic
radionuclides in a sample after it has been subjected to
nuclear bombardment.
ADC
Analog to Digital Converter. A device which changes an
analog signal to a digital signal.
AIM
Acquisition Interface Module: a type of multichannel
analyzer.
ALARA
Since exposure to radiation always carries some risk,
the exposure should be kept "As Low As Reasonably
Achievable", as defined by 10 CFR 20.
ALGORITHM
A set of well-defined rules for solving a problem.
ALPHA PARTICLE [Symbol: α]
A particle made up of two neutrons and two
protons; it is identical to a helium nucleus and is
the least penetrating of the three common types of
radiation (the other two are beta particles and
gamma rays), being stopped by a sheet of paper or
a few centimeters of air. An alpha-emitting substance is
generally not dangerous to a biological system, such as
the human body, unless the substance has entered the
system. See decay.
AMPLIFICATION
The process by which weak signals, such as those from a
detector are magnified to a degree suitable for
measurement.
ANALOG MULTIPLEXER
An electronic instrument that accepts several inputs and
stores each one in a separate section of MCA memory.
Also called a mixer/router.


ANNIHILATION RADIATION
Radiation produced by the annihilation of a positron
and an electron. For particles at rest, two photons
with an energy of 511 keV each are produced.
ANTICOINCIDENCE CIRCUIT
A circuit with two inputs. The circuit delivers an output
pulse if one input receives a pulse within a predetermined
time interval, usually on the order of milliseconds, but not if
both inputs receive a pulse. A principle used in pulse
height analysis. See also coincidence
circuit.
AREA
The number of counts in a given region of a spectrum
that are above the continuum level.
ASCII
An acronym for American Standard Code for Information
Interchange, a method for encoding alphabetical, numeric,
and punctuation characters and some computer control
characters.
ATTENUATION CORRECTION
Correction to the observed signal for the attenuation of
radiation in a material between the sample and the
detector or within the sample itself.
B
BACKGROUND RADIATION
Radiation due to sources other than the sample, such
as cosmic rays, radioactive materials in the vicinity
of a detector or radioactive components of the
detection system other than the sample.
BACKGROUND SUBTRACTION
The statistical process of subtracting the background level
of radiation from a sample count.
BACKSCATTERING
The process of scattering or deflecting into the sensitive
volume of a measuring instrument radiation that
originally had no motion in that direction. The process is
dependent on the nature of the mounting material, the
shield surrounding the sample and the detector, the nature
of the sample, the type and energy of the radiation, and the
geometry. See also scattering.
BASELINE
In biology, a known base state from which changes are
measured. In electronics, a voltage state (usually zero
volts) from which a pulse excursion varies.
BECQUEREL [Symbol: Bq]
The SI unit of activity, defined as one disintegration
per second (dps).
BETA PARTICLE [Symbol: β]
An elementary particle emitted from a nucleus during
radioactive decay with a single electrical charge and a
mass equal to 1/1837 that of a proton. A negatively
charged beta particle is identical to an electron. A
positively-charged beta particle is called a positron.
BIOLOGICAL HALF-LIFE [Symbol: Tb]
The time required for a biological system to eliminate half
of the amount of a substance (such as radioactive
material) by natural processes. Compare effective
half-life and half-life.
BREMSSTRAHLUNG
Radiation produced by the sudden deceleration of an
electrically charged particle when passing through an
intense electrical field.
C
CASCADE SUMMING
Also referred to as true coincidence summing, it
occurs when two or more pulses from the same decay
are summed because they deposit energy in the
detector at the same time. It is a function of the
measurement efficiencies and occurs only with susceptible
cascading nuclides (60Co, 88Y, 152Eu, 133Ba, etc.)
CENTROID
The center of a peak; usually not an exact channel
number.
CHANNEL
One of an MCA's memory locations for storage of a
specific level of energy or division of time.
CHERENKOV RADIATION
Photons emitted from polarized molecules when
returning to their ground state following excitation by
charged particles traveling faster than the speed of light in
a transparent medium.
CHI-SQUARE TEST
A general procedure for determining the probability that
two different distributions are actually samples of the same
population. In nuclear counting measurements, this test is
frequently used to compare the observed variations in
repeat counts of a radioactive sample to the variation
predicted by statistical theory.
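A minimal sketch of this comparison in Python (the repeat counts are illustrative; for a stable Poisson source the statistic should be comparable to the number of repeats minus one):

```python
def chi_square(counts):
    """Pearson chi-square statistic for repeat counts of a radioactive sample.

    For Poisson data the expected variance equals the mean, so the observed
    scatter is compared against N - 1 degrees of freedom.
    """
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 for c in counts) / mean

# ten hypothetical one-minute repeat counts of the same source
repeats = [980, 1012, 995, 1003, 987, 1021, 999, 1008, 990, 1005]
chi2 = chi_square(repeats)  # compared against N - 1 = 9 degrees of freedom
```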
COINCIDENCE CIRCUIT
A circuit with two inputs. The circuit delivers an output
pulse if both inputs receive a pulse within a predetermined
time interval, usually a fraction of a microsecond, but not
if just one input receives a pulse. A principle used in
pulse height analysis. See also
anticoincidence circuit.
COINCIDENCE SUMMING
A process where the signal from two or more gamma
rays emitted by a single decay of a single
radionuclide occur within the resolving time of the
detector end up being recorded together as a single
event so that the recorded event is not representative of
the original decay. Typically causes counts to be lost from
the full energy peaks, but may also cause addition to
the full energy peaks. Coincidence summing is a function
of the sample-to-detector geometry, and the
nuclide's decay scheme. It is not a function of the
overall count rate.
COLLECT
An MCA function that causes storage of data in
memory.
COMPTON SCATTERING
Elastic scattering of photons in materials, resulting
in a loss of some of the photon's energy.
CONFIDENCE FACTOR
It is common practice when reporting results to assign
them a confidence level, for example the value plus or
minus one standard deviation (the 68% confidence level).
Radiation protection measurements are usually reported at
the 95% confidence level, meaning that the result would
be expected to fall within the quoted range 95 times out
of 100. Also called Confidence Level.
CONTINUUM
A smooth distribution of energy deposited in a gamma
detector caused by the partial absorption of energy from
processes such as Compton scattering or
bremsstrahlung.
CONVERSION GAIN
The number of discrete voltage levels (or channels)
that the ADC's full scale input is divided into.
CONVERSION TIME
The time required to change an input signal from one
format to another, such as analog to digital, or time
difference to pulse amplitude; contributes to dead
time.
COSMIC RAYS
Radiation, both particulate and electromagnetic, that
originates outside the earth's atmosphere.
COUNT
A single detected event or the total number of events
registered by a detection system.
CRITICAL LEVEL (Lc)
The level below which a net signal cannot reliably be
detected. See also detection level.
CROSSOVER ENERGY
In some efficiency calibration models, the
energy at which one calibration curve is changed into a
second calibration curve. This is used in the Dual
Efficiency Calibration in Genie software.
CURIE [Symbol: Ci]
The (approximate) rate of decay of 1 gram of radium;
by definition equal to 3.7 x 10^10 becquerels (or
disintegrations per second). Also, a quantity of any
nuclide having 1 curie of radioactivity.
D
DATASOURCE
A hardware device or a file which stores data acquisition
parameters and spectral data.
DAUGHTER NUCLIDE
A radionuclide produced by the decay of a parent
nuclide.
DEAD TIME
The time that the instrument is busy processing an input
signal and is not able to accept another input; often
expressed as a percentage. See also live time.
DECAY
The disintegration of the nucleus of an unstable atom
by spontaneous fission, by the spontaneous emission of
an alpha particle or beta particle, isomeric
transitions, or by electron capture.
DEFAULT
The value of a parameter used by a program in the
absence of a user-supplied value.
DERIVED AIR CONCENTRATION
The concentration (Bq/m^3) of a radionuclide in air that if
breathed by Reference Man for a working year (2000
hours) under light activity conditions would result in the
annual limit on intake (ALI) by inhalation.
DETECTION LEVEL
The level of net signal that can be predicted to be
detectable. See also critical level.
DETECTOR
A device sensitive to radiation which produces a
current or voltage pulse which may or may not correspond
to the energy deposited by an individual photon or
particle.
DIGITAL STABILIZATION
The monitoring of one or two reference peaks in a
spectrum, one for gain and one for zero, to correct for
drift in the system electronics.
DISCRIMINATOR
An electronic circuit which distinguishes signal pulses
according to their pulse height or voltage so that unwanted
counts can be excluded.
DISINTEGRATION
See decay.
DPM
Disintegrations per minute; 60 DPM equals one
becquerel.
DOSE
The radiation delivered to the whole human body or to
a specified area or organ of the body. This term is used
frequently in whole body counting applications.
E
EFFECTIVE HALF-LIFE [Symbol: Teff]
The time required for a radioactive element in a biological
system, such as the human body, to be reduced by one-
half as a result of the combined action of radioactive
decay and biological elimination. Compare half-life
and biological half-life.
EFFICIENCY
The fraction of decay events from a standard sample
seen by a detector in the peak corresponding to the
gamma ray energy of the emission, and stored by a
detection system. Also called Peak Efficiency. Used to
calibrate the system for quantitative analyses. Also used
to specify germanium detectors, where the relative
efficiency of the germanium detector is compared to a
standard (3 x 3 in.) NaI(Tl) detector. Compare total
efficiency.
EFFICIENCY CALIBRATION
A function, a lookup table, or series of functions, which
correlate the number of counts seen by the detection
system in specific peaks with known activity
corresponding to such emission energies in a radioactive
sample.
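For a single calibration peak, the full-energy-peak efficiency reduces to the ratio of net peak counts to photons emitted. A sketch with a hypothetical 137Cs standard (85.1% emission probability at 662 keV; the activity, counts and live time are illustrative):

```python
def peak_efficiency(net_counts, activity_bq, gamma_yield, live_time_s):
    """Full-energy-peak efficiency at one energy from a calibrated standard."""
    emitted = activity_bq * gamma_yield * live_time_s  # photons emitted at this line
    return net_counts / emitted

# hypothetical 20 kBq 137Cs source counted for 600 s of live time
eff = peak_efficiency(net_counts=51060, activity_bq=20000.0,
                      gamma_yield=0.851, live_time_s=600.0)
```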
ELASTIC SCATTERING
See scattering.
ELECTRODEPOSITION
A process for coating the surface of samples being
prepared for alpha spectroscopy and alpha/beta counting.
ELECTROMAGNETIC RADIATION
A general term to describe an interacting electric and
magnetic wave that propagates through vacuum at the
speed of light. It includes radio waves, infrared light,
visible light, ultraviolet light, X rays and gamma
rays.
ELECTRON [Symbol: e]
An elementary particle with a unit negative electrical
charge and a mass 1/1837 that of the proton. Electrons
surround the positively charged nucleus and
determine the chemical properties of the atom.
ELECTRON VOLT [Symbol: eV]
The amount of kinetic energy gained by an electron as
it passes through a potential difference of 1 volt. It is
equivalent to 1.602 x 10^-19 joules. It is a unit of energy,
or work, not of voltage.
ENERGY CALIBRATION
A function which correlates each channel in the
displayed spectrum with a specific unit of energy.
Allows peaks to be identified by their location in the
calibrated spectrum.
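A two-point linear calibration is the simplest such function. The energies below are the 137Cs 662 keV and 60Co 1332.5 keV lines; the channel numbers are hypothetical:

```python
def linear_energy_cal(ch1, e1, ch2, e2):
    """Slope (keV/channel) and offset of a two-point linear energy calibration."""
    slope = (e2 - e1) / (ch2 - ch1)
    offset = e1 - slope * ch1
    return slope, offset

# hypothetical reference-peak channel positions for two known lines
slope, offset = linear_energy_cal(1324, 662.0, 2665, 1332.5)
energy = slope * 2000 + offset  # energy assigned to channel 2000
```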
ESCAPE PEAK
A peak in a gamma ray spectrum resulting
from the pair production process, the subsequent
annihilation of the photons produced, and
escape from the detector of the annihilation photons. If
both annihilation photons escape, and the rest of the
original gamma energy is fully absorbed, a double escape
peak is produced at an energy equal to the original
gamma ray energy minus 1.022 MeV. If only one
of the photons escapes, a single escape peak is produced
at an energy equal to the original gamma ray energy
minus 511 keV.
EXCITED STATE
The state of molecule, atom, or nucleus when it
possesses more than its ground state energy.
Excess molecular or atomic energy may be reduced
through emission of photons or heat. Excess nuclear
energy may be reduced through emission of gamma
rays or conversion electrons or by further decay
of a radionuclide.
eV
See electron volt.
F
FACTORS
The parameters used by an algorithm for its
calculations.
FULL ENERGY ABSORPTION
The absorption and detection of all of the energy of an
incident photon. May take place as a direct
photoabsorption or as a result of multiple Compton
scatterings of the incident photons within the
resolving time of the detection system.
FULL ENERGY PEAK
The peak in an energy spectrum of X-ray or
gamma-ray photons that occurs when the full
energy of the incident photon is absorbed by the
detector.
FWHM (Full Width at Half Maximum)
The full width of a peak measured at one-half of its
maximum amplitude with the continuum removed.
Defines the resolution of a spectroscopy system.
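For a Gaussian peak the FWHM follows directly from the standard deviation. A sketch with an illustrative germanium-detector peak width at the 60Co 1332.5 keV line:

```python
import math

def fwhm_from_sigma(sigma):
    """FWHM of a Gaussian peak: 2 * sqrt(2 ln 2) * sigma, about 2.355 * sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# illustrative peak width of sigma = 0.85 keV
fwhm = fwhm_from_sigma(0.85)
resolution_percent = 100.0 * fwhm / 1332.5  # resolution quoted relative to energy
```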
G
GAIN, ADC
See conversion gain.
GAIN, AMPLIFIER
The ratio of the amplifier's output signal to its input signal.
GAIN CONTROL
A control used to adjust the height of a pulse received
from the detecting system.
GAMMA RAY [Symbol: γ]
A photon or high-energy quantum emitted from the
nucleus of a radioactive atom. Gamma rays are the
most penetrating of the three common types of
radiation (the other two are alpha particles and
beta particles) and are best stopped by dense
materials such as lead.
GAUSSIAN FIT
Calculating the parameters of a Gaussian (or Normal)
function to best match a set of empirical data (in
spectroscopy, the acquired photopeak histogram).
This calculation is typically performed using a least
squares method after subtracting the Compton
continuum underlying the peak.
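Before least-squares fitting, starting values for the centroid and width are often estimated from the peak's moments. A stdlib-only sketch with hypothetical continuum-subtracted data:

```python
def moments_estimate(channels, counts):
    """Centroid and sigma estimated from the first and second moments of a peak.

    A crude starting point for a least-squares Gaussian fit; assumes the
    continuum has already been subtracted from `counts`.
    """
    total = sum(counts)
    centroid = sum(ch * c for ch, c in zip(channels, counts)) / total
    variance = sum(c * (ch - centroid) ** 2
                   for ch, c in zip(channels, counts)) / total
    return centroid, variance ** 0.5

# hypothetical peak data after continuum subtraction
chans = [100, 101, 102, 103, 104]
cnts = [5, 40, 110, 40, 5]
centroid, sigma = moments_estimate(chans, cnts)
```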
GAUSSIAN PULSE SHAPE
A pulse shape resembling a statistical bell-curve, with little
or no distortion.
GEOMETRY
The detector to sample distance, the sizes and
shapes of the detector, the sample, and any shielding, all
of which affect the radiation seen by the detector. The
geometry helps define the efficiency of the detector.
GRAY [Symbol: Gy]
The SI unit of absorbed dose, defined as one joule per
kilogram of absorbing medium.
GROUND STATE
The state of a nucleus, atom or molecule at its lowest
energy level.
H
HALF-LIFE [Symbol: T1/2]
The time in which one half of the atoms of a particular
radioactive substance decay to another nuclear form.
Half-lives vary from millionths of a second to billions of
years.
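The exponential decay law follows directly from this definition; a one-line sketch:

```python
def fraction_remaining(t, half_life):
    """Fraction of atoms still undecayed after time t (same units as half_life)."""
    return 0.5 ** (t / half_life)

# three half-lives leave one eighth of the original activity
frac = fraction_remaining(3.0, 1.0)
```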
HISTOGRAM
A representation of data by vertical bars, the heights of
which indicate the frequency of energy or time events.
I
ION
An atom or molecule that has become electrically charged
by having lost or gained one or more electrons.
Examples of an ion are an alpha particle, which is a
helium atom minus its two electrons, and a proton,
which is a hydrogen atom minus its single electron.
INDEX
An MCA function that jumps the cursor from one region
of interest to another.
INPUT/OUTPUT
The process of loading data into or copying data from an
MCA or computer using a peripheral device, such as a
computer, a floppy disk, or a printer.
IN SITU COUNTING
Measurement and analysis of radioactivity
performed at the sample's location.
INTEGRAL
The total sum of counts in the region of interest.
INTENSIFY
To change the contrast of a displayed region of
interest to set it off from data regions of lesser
importance.
INTERACTIVE PEAK FIT
The process of refining and verifying the quality of a
peak fit. The fitting parameters, such as the centroid
location and the way the continuum is defined, can
be changed. The change in the quality of the fit is
displayed.
INTERFERING PEAK
A peak due to background radiation which is
produced at the location of a peak in the sample spectrum
or due to a peak produced by a radionuclide in the sample
at the location of another radionuclide's peak.
IN VIVO COUNTING
In vivo counting refers to directly measuring and analyzing
radionuclide activity levels in a living body.
IN VITRO COUNTING
In vitro counting refers to samples, such as tissue or
blood, being analyzed for radionuclide activity levels in an
artificial environment (outside of a living body).
IONIZATION
The process by which an electrically neutral atom acquires
a charge (either positive or negative).
IONIZING EVENT
Any process whereby an ion or group of ions is produced.
As applied to nuclear spectroscopy, this refers to the
passing of radiation through a gas, a crystal, or a
semiconductor.
ISOMERIC TRANSITION
The de-excitation of an elevated energy level of a
nucleus to the ground state of the same nucleus
by the emission of a gamma ray or a conversion
electron.

ISOTOPE
One of two or more atoms with the same atomic number
(the same chemical element) but with different atomic
weights. An equivalent statement is that the nuclei of
isotopes have the same number of protons (thus the
same chemical element) but different numbers of
neutrons (thus the different atomic weight). Isotopes
usually have very nearly the same chemical properties, but
somewhat different physical properties. See also
nuclide and stable isotope.
K
keV (kiloelectron volt)
One thousand electron volts.
KEY LINE
Designated in nuclide libraries for reporting
purposes only. It is intended to indicate the highest
abundance photopeak energy for nuclides with
multiple energy lines, or the line that is the least likely to
have interferences.
L
LAN
Local area network: a network of two or more computers
connected together.
Lc
See critical level.
LIBRARY DIRECTED PEAK SEARCH
Method of designating the location of peaks using all of
the lines from the specified nuclide library. All of the
nuclide library energies are assumed to have
photopeak present and the peak analysis is typically
required to verify or reject each peak. This limits the peak
search to the nuclides in the library but allows for
greater sensitivity than with typical unknown peak
searches. See also second difference peak
search.
LIMIT OF DETECTION
The minimum amount of the characteristic property being
measured that can be detected with reasonable certainty
by the analytical procedure being used under specific
measuring conditions. If the conditions change, the limit of
detection will also change, even if the analytical procedure
remains the same. See also lower limit of
detection.
LIVE TIME
The time that the ADC is not busy processing a signal.
See also dead time and real time.
LIVE TIME CORRECTION
In an MCA, the process of stopping the live time clock
whenever the processing circuits are busy and cannot
accept further information. Commonly used to extend the
collection time by accounting for the dead time.
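Equivalently, a measured rate can be corrected analytically. A sketch using the common non-paralyzable dead time model (the rate and per-pulse dead time below are hypothetical):

```python
def true_rate_nonparalyzable(measured_rate, dead_time_per_pulse):
    """Dead time correction for the non-paralyzable model: n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time_per_pulse)

# 9000 counts/s measured, with 10 microseconds of dead time per pulse
true_rate = true_rate_nonparalyzable(9000.0, 10e-6)
```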
LOWER LIMIT OF DETECTION (LLD)
The smallest net signal that can reliably be quantified. LLD
is a measure of the performance of a system in terms of
activity.
LOWER LEVEL DISCRIMINATOR (LLD)
An SCA's minimum acceptable energy level. Incoming
pulse amplitudes below this limit will not be passed. See
also upper level discriminator.
M
MARINELLI BEAKER
A standard sample container that fits securely over a
detector cryostat's endcap and is used when
calibrating voluminous samples (usually soil or water
solutions).
MASS NUMBER
The sum of the neutrons and protons in a
nucleus. It is the nearest whole number to an atom's
atomic weight. For instance, the mass number of 235U is
235.
MAXIMUM PERMISSIBLE
CONCENTRATION (MPC)
The concentration limit for a given radionuclide in air
or water in determining possible inhalation, ingestion or
absorption for health physics controls.
MCA
See multichannel analyzer.
MCS
See multichannel scaling.
MDA
Minimum detectable activity. See lower limit of
detection.
MEAN
The average of a group of numbers.
METASTABLE ISOTOPE
A long-lived energy state of a particular nuclide that is
not its ground state. Some nuclides have more than
one isomeric state. An isomeric state has the same
mass number and atomic number as the ground
state, but possesses different radioactive properties.
MeV (megaelectron volt)
One million electron volts.
MONITORING, PERSONNEL
Periodic or continuous observation of the amount of
radiation or radioactive contamination present in or on
an individual.
MULTICHANNEL ANALYZER (MCA)
An instrument which collects, stores and analyzes time-
correlated or energy-correlated events. See also
multichannel scaling and pulse height
analysis.
MULTICHANNEL SCALING (MCS)
The acquisition of time-correlated data in an MCA. Each
channel is sequentially allocated a dwell time (a
specified time period) for accumulating counts until all the
memory has been addressed. MCS is useful for studying
rapidly decaying radioactive sources.
MULTISPECTRAL SCALING
Multispectral scaling acquisition mode, also called ping-
pong mode, alternately collects data in two separate
memory regions, quickly collecting many spectra with
extremely low latency between acquisitions.
MULTIPLET
Peaks in a spectrum which overlap each other. Compare
singlet.
N
NATURALLY OCCURRING
RADIOACTIVE MATERIAL (NORM)
Radioactivity that is naturally present in the earth.
NEUTRON [Symbol: n]
An uncharged elementary particle with mass slightly
greater than that of the proton, and found in the
nucleus of every atom heavier than hydrogen.
NEUTRON ACTIVATION ANALYSIS
(NAA)
The process of activating materials by neutron
absorption then measuring the emission of characteristic
photons on decay to determine the relative abundance
of elements in an object.
NID
Nuclide Identification, the process of identifying
radionuclides by comparing peak energies detected with
entries in a nuclide library.
NIM
Nuclear Instrumentation Module. A nuclear instrument
conforming to the DOE/ER-00457T standard.
NOISE
Unwanted signals on or with a useful signal which can
distort its information content.
NON-DESTRUCTIVE ASSAY
An analysis method that does not destroy the sample. For
example: gamma spectroscopy, X-ray fluorescence and
neutron activation.
NUCLEAR SAFEGUARDS
The general topic of maintaining control and accountability
of special nuclear materials.
NUCLEUS
The positively charged core of an atom, which contains
nearly all of the atom's mass. All nuclei contain both
protons and neutrons, except the nucleus of
ordinary hydrogen, which consists of a single proton.
NUCLIDE
A general term applicable to the isotopes of all
elements, including both stable and radioactive forms
(radionuclides).
NUCLIDE LIBRARY
A file listing nuclides, their names, half-lives, types,
energies/lines, and line abundances. These files are used
with library directed peak searches, nuclide
identification (NID) and as aids in performing
calibrations.
O
OFFSET, ADC
A digitally performed shift in the ADC's channel
zero. Shifts the entire spectrum by the selected
amount.
OVERLAP
An MCA function allowing one section of memory to be
displayed over another.
P
PAIR PRODUCTION
Creation of an electron- positron pair by
gamma ray interaction in the field of a nucleus.
For this process to be possible, the gamma ray's energy
must exceed 1.022 MeV, twice the rest mass of an
electron.
PARAMETER
A variable that is given a constant value for a specific
application.
PARENT NUCLIDE
A radionuclide that produces a daughter nuclide
during decay.
PASSIVE NON-DESTRUCTIVE ASSAY
A method that uses radiation emitted by the sample itself,
without increasing the emission by bombarding the sample
with something, such as neutrons. The sample itself is
not changed in any way in the course of passive assay.
PEAK
A statistical distribution of digitized energy data for a single
energy.
PEAK CHANNEL
The channel number closest to the centroid of a
peak.
PEAK FIT
The optimization of parameters to match an expected
model shape to empirical data (see also gaussian
fit). This optimization is typically performed using a least
squares method.
PEAK-TO-TOTAL RATIO
The ratio of the observed counts in a full energy
peak to the counts in the entire spectrum, caused
by the interaction of radiation with the detector at
that emission energy only.
PERCENT SIGMA [Symbol: %σ]
An expression of the standard deviation as a
percentage. It is numerically equal to 100 times the
standard deviation divided by the mean.
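For a single Poisson count of N events the standard deviation is sqrt(N), so percent sigma reduces to 100/sqrt(N):

```python
def percent_sigma(total_counts):
    """Percent sigma of a single Poisson count: 100 * sqrt(N) / N."""
    return 100.0 * total_counts ** 0.5 / total_counts

p = percent_sigma(10000)  # 1% relative uncertainty for 10 000 counts
```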
PHA
See pulse height analysis.
PHOTOELECTRIC ABSORPTION
The process in which a photon interacts with an
absorber atom, the photon disappears completely, and the
atom ejects a photoelectron (from one of its bound
shells) in place of the photon.
PHOTOELECTRON
An electron released from an atom or molecule by
means of energy supplied by radiation, especially
light.
PHOTOMULTIPLIER TUBE (PMT)
A device for amplifying the flashes of light produced by a
scintillator.
PHOTON
In quantum theory, light is propagated in discrete packets
of energy called photons. The quantity of energy in each
packet is called a quantum.
PHOTOPEAK
See Peak.
PHYSICAL HALF-LIFE
See half-life.
POLE/ZERO
A method of compensating the preamplifier's output signal
fall-time and the amplifier's shaping time constant. Its use
improves the amplifier's high count rate resolution
and overload recovery.
PMT
See photomultiplier tube.
POSITRON [Symbol: β+]
An elementary particle, an "anti-electron" with the mass of
an electron but having a positive charge. It is emitted
by some radionuclides and is also created in pair
production by the interaction of high-energy
gamma rays with matter.
POSITRON ANNIHILATION
A process where a positron combines with an
electron, producing two annihilation photons of
511 keV each.
PRECISION
The degree of agreement between several measurements
of the same quantity under specific conditions. See also
accuracy.
PRIMORDIAL NUCLIDE
A nuclide as it exists in its original state.
PROGENY
See daughter nuclide.
PROMPT GAMMA ANALYSIS
A form of neutron activation analysis where
gammas, emitted during capture of neutrons, are
used for analysis instead of gammas of subsequent beta
decay.
PROTON
An elementary particle with a single positive electrical
charge and a mass approximately 1837 times that of the
electron. The atomic number (Z) of an atom is equal to
the number of protons in its nucleus.
PROTON INDUCED X-RAY EMISSION
(PIXE)
The emission of X rays when a sample is bombarded
by protons. The X rays emitted are characteristic of the
elements present in the sample. Used for trace analysis.
PULSE HEIGHT ANALYSIS (PHA)
The acquisition of energy-correlated data in the MCA.
Each channel, defined as an energy window, is
incremented by one count for each event that falls
within the window, producing a spectrum which
correlates the number of energy events as a function of
their amplitude.
PULSE PAIR RESOLUTION
The ability to discriminate between two pulses close
together in time.
PULSE PILEUP
A condition, where two energy pulses arrive at nearly the
same time, which could produce false data in the
spectrum.
PULSE PILEUP REJECTOR (PUR)
An electronic circuit for sensing the pulse pileup
condition and rejecting these pulses so that only single
pulses are counted.
Q
QUANTUM
The unit quantity of energy according to quantum theory. It
is equal to the product of the frequency of the
electromagnetic radiation and Planck's
constant (6.626 x 10^-34 J s).
R
RAD
A special unit of absorbed dose. Equal to 0.01
gray.
RADIATION
The emission or propagation of energy through matter or
space by electromagnetic disturbances which display both
wave-like and particle-like behavior. Though in this context
the "particles" are known as photons, the term
radiation has been extended to include streams of fast-
moving particles. Nuclear radiation includes alpha
particles, beta particles, gamma rays and
free neutrons emitted from an atomic nucleus
during decay.
RADIOACTIVITY
The emission of radiation from the spontaneous
disintegration (decay) of an unstable nuclide.
RADIONUCLIDE
A radioactive isotope. See also nuclide.
RANDOM SUMMING
A process where the signal from two or more separate
decays of the same radionuclide or different
radionuclides that occur within the resolving time of the
detector end up being recorded together as a single
event so that the recorded event is not representative of
the original decays. Typically causes counts to be lost
from the full energy peaks. Random summing is a
function of the overall count rate, or the activity of the
sample being measured.
RANDOM SUMMING LOSS
The loss of counts from the full energy peaks due to
random summing.
RANGE, ADC
The full-scale address (number of channels) of the
ADC's assigned memory segment.
REAL TIME
Elapsed clock time; also called true time. Compare live
time.
RECOILING NUCLEUS
A nucleus that gains significant kinetic energy from its
decay.
REGION OF INTEREST (ROI)
A user-defined area of the spectrum which contains
data of particular interest, such as a peak.
REM (Roentgen Equivalent Man)
A unit of dose equivalency; equal to 0.01 sievert. See
also Roentgen.
RESOLUTION
The ability of a spectroscopy system to differentiate
between two peaks that are close together in energy.
Thus, the narrower the peak, the better the resolution
capability. Measured as FWHM.
ROENTGEN
The Roentgen, the international unit of X radiation or
gamma radiation, is the amount of radiation that
produces, under standard conditions, ionization of either
sign equal to one electrostatic unit of charge in one
cubic centimeter of dry air.
ROI
See region of interest.
S
SCA
Single Channel Analyzer. A device which recognizes
events (pulses) occurring between the settings of the
lower level discriminator and the upper
level discriminator. In an MCA, each event
within these limits is counted; events outside of these
limits are discarded.
SCATTERING
A process that changes a particle's trajectory. Scattering is
caused by particle collisions with atoms, nuclei and
other particles or by interactions with electric or magnetic
fields. If there is no change in the total kinetic energy of
the system, the process is called elastic scattering. If the
total kinetic energy changes due to a change in internal
energy, the process is called inelastic scattering. See also
backscattering.
SCINTILLATOR
A type of detector which produces a flash of light as
the result of an ionizing event. See also
photomultiplier tube.
SECOND DIFFERENCE PEAK SEARCH
A technique for locating photopeaks by calculating
the second difference for each channel in a spectrum,
then locating areas of negative concavity. See also
library directed peak search.
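A minimal sketch of the technique on a hypothetical ten-channel spectrum (a practical implementation typically also weights the difference by its statistical uncertainty before applying a significance threshold):

```python
def second_difference(spectrum):
    """Second difference d[i] = y[i-1] - 2*y[i] + y[i+1]; strongly negative at a peak."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

# flat continuum of 10 counts with a narrow peak centred on channel 5
spec = [10, 10, 10, 10, 30, 60, 30, 10, 10, 10]
dd = second_difference(spec)
peak_channel = dd.index(min(dd)) + 1  # +1 restores the original channel index
```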
SEGMENTED GAMMA SCANNER
A gamma spectroscopy system that analyzes a sample by
counting it in discrete segments.
SELF ABSORPTION
Absorption of the photons emitted by the radioactive
nuclides in the sample by the sample material itself.
SHADOW SHIELD
An attenuating enclosure that shields the detector
from direct background radiation without being a 4π
shield. Typically used in whole body counting.
SHAPE CALIBRATION
The process of establishing a relationship between the
expected peak shape and energy. A shape calibration
can be established by using two or more peak
FWHM/energy (or FWHM/channel) pairs or by
using a least squares fit algorithm.
SIEVERT [Symbol: Sv]
The SI unit of dose equivalency (a quantity used in
radiation protection). The sievert is the dose equivalent
when the absorbed dose of ionizing radiation, multiplied
by the dimensionless factors Q (quality factor) and N (the
product of any other modifying factors) stipulated by the
International Commission on Radiological Protection, is
one joule per kilogram.
SINGLE CHANNEL ANALYZER
See SCA.
SINGLET
A single peak in a spectrum, well separated from
other peaks. Compare multiplet.
SMOOTHING
To decrease the effects of statistical uncertainties in
computerized spectrum analysis, the content of each
channel is replaced by a weighted average over a
number of adjacent channels.
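A common choice is a three-point 1-2-1 weighted average; a sketch on hypothetical channel contents:

```python
def smooth_121(spectrum):
    """Three-point 1-2-1 weighted moving average; endpoints are left unchanged."""
    out = list(spectrum)
    for i in range(1, len(spectrum) - 1):
        out[i] = (spectrum[i - 1] + 2 * spectrum[i] + spectrum[i + 1]) / 4.0
    return out

smoothed = smooth_121([10, 14, 40, 12, 10])
```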
SPECIFIC ACTIVITY
The quantity of radioactivity per unit mass; for
example, dpm/g or Bq/g.
SPECIAL NUCLEAR MATERIAL (SNM)
Material containing fissionable isotopes suitable for
nuclear weapons.
SPECTRUM
A distribution of radiation intensity as a function of
energy or time.
SPECTROMETER
A device used to count an emission of radiation of a
specific energy or range of energies to the exclusion of all
other energies. See also multichannel analyzer.
STABLE ISOTOPE
An isotope that does not undergo radioactive decay.
STANDARD DEVIATION [Symbol: s]
A measure of the dispersion about the mean value of a
series of observations expressed in the same units as the
mean value.
STRIPPING
Subtracting a specified fractional part of the data in one
section of memory from the data in another section of
memory.
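Channel by channel, stripping is a scaled subtraction; a sketch with hypothetical three-channel spectra:

```python
def strip(target, reference, fraction):
    """Subtract a fractional part of a reference spectrum channel by channel."""
    return [t - fraction * r for t, r in zip(target, reference)]

stripped = strip([100, 200, 150], [40, 80, 60], 0.5)
```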
SYSTEM BUSY TIME
The dead time of an entire spectroscopy system.
T
TOTAL DETECTOR EFFICIENCY
All pulses from the detector are accepted, regardless of
the energy deposited. See also total efficiency.
TOTAL EFFICIENCY
The ratio of all pulses recorded in the MCA's memory
(in all channels) to the gamma quanta emitted by
the sample. Compare efficiency.
TRANSURANIC (TRU)
Possessing an atomic number higher than that of uranium
(92).
TRUE COINCIDENCE SUM PEAK
A spectral peak, the energy of which equals the sum of
the energies of two or more gamma rays or X
rays from a single nuclear event.
TRUE TIME
See real time.
U
UNCERTAINTY
In a nuclear decay measurement, uncertainty refers to
the lack of complete knowledge of a sample's decay rate
due to the random nature of the decay process and the
finite length of time used to count the sample.
UPPER LEVEL DISCRIMINATOR (ULD)
An SCA's maximum acceptable energy level. Incoming
pulse amplitudes above this limit will not be passed. See
also lower level discriminator.
W
WHOLE BODY COUNTING (WBC)
In vivo determination of radionuclide activity levels in the
human body. Used to determine compliance with the
regulations of various governmental bodies regarding
radiation exposure.
WINDOW
A term describing the upper and lower limits of
radiation energy accepted for counting by a
spectrometer.
X
X RAY
A penetrating form of electromagnetic
radiation emitted during electron transitions in an
atom to a lower energy state; usually when outer orbital
electrons give up some energy to replace missing inner
orbital electrons.
Z
ZERO, ADC
An ADC control which aligns its zero energy output with a
specific channel in the MCA's memory (usually
channel zero).