Chapter 2
SCOPE
This section will describe the concepts, principles, and general procedures of
quality assurance (QA) as it applies to the analysis of soil samples and materials
related to soil science. Although the primary focus will be on analytical chem-
istry, sampling will be discussed to some extent because of its importance to the
overall accuracy of the generated data. Data produced from the analysis of a
sample are representative of the population, and are reliable, only if the original
samples themselves are representative of that population.
Sampling
Sampling activities require quality assurance principles such as the devel-
opment and use of standardized sampling procedures, training and documentation
of sampling personnel, and the creation of traceable and defensible information
through the use of labels and chains of custody. A sampling plan should be writ-
ten to describe the location, number, size, and frequency of samples to be taken.
Specific preservation or handling procedures which are not readily inferred must
be stated explicitly and used to prevent contamination of the sample. There are a wide variety of
sampling strategies which can be employed to obtain samples which will result
in the accumulation of desired information. When compositing techniques are to
be used, specific information regarding the number of increments to be compos-
ited and the size of each increment must be included in the sampling plan. When
any deviations from the sampling plan take place, clear, complete documentation
is required to produce a permanent record that some nonconforming practices
were performed. The reason for the deviation also must be included.
Physical Testing
Quality assurance principles are applicable to physical testing. Reliability
of information, precise and accurate data, and defensibility are desirable consequences
that occur when standardized procedures are used, record keeping practices include
all pertinent information, test performers are properly trained, complete training
documentation is maintained, and a quality control (QC) system is in place to
determine errors, to identify bias, and to require corrective action when needed.
The importance of physical testing is often overlooked. Parameters such as
compressibility, permeability, or particle size distribution can be extremely
important for the success or failure of a particular project. The quality assurance
system must include these areas of testing to be an acceptable system.
Copyright 1996 Soil Science Society of America and American Society of Agronomy, 677 S.
Segoe Rd., Madison, WI 53711, USA. Methods of Soil Analysis. Part 3. Chemical Methods-SSSA
Book Series no. 5.
20 KLESTA & BARTZ
Chemical Testing
POLICY STATEMENT
Authority
Goal
The goal should be included in the policy statement. The goal of the qual-
ity system must be documented in terms that allow for measurable evaluation.
One of the key components of a quality system is an assessment phase. The
assessment of the compliance to requirements of the program cannot be readily
made if the ultimate goal is not understood.
QUALITY ASSURANCE & QUALITY CONTROL 21
Responsibility
DEFINITIONS
Quality Assurance
Quality Control
Defensibility
Traceability
One of the prerequisites for having and maintaining a good quality system
rests on the need to have all members of the staff operating on "common
ground." Writing and using standard procedures are the first steps toward achiev-
ing comparability. People involved in activities tend to develop their own way of
doing things. Creativity under the guise of improvement can result in changes in
procedures which could result in undesirable consequences and unusable data.
Management of the organization must determine where and when Standard
Operating Procedures (SOPs) are needed.
Organization
Analytical Structure
Standard operating procedures are useful for clarification and standardiza-
tion of analytical methods. All analysts should be performing the analytical pro-
cedures exactly the same way. Because some analytical methods do not contain
the specificity needed to ensure complete consistency, it is necessary for the lab-
oratory management to clarify these points of confusion and to specify details
where the methods are lacking. The development of SOPs covering analytical
methods satisfies this requirement. Specification of calculations, choice of labo-
ratory equipment, or time limits for certain steps not addressed in the method are
examples of points to be included in an analytical SOP. General laboratory pro-
cedures which can affect the analytical results should be included in the scope of
SOPs. Glass cleaning procedures, safety concerns and rules, and sample handling
procedures and disposal practices are examples of issues to be covered.
Management Responsibility
The management of a sampling or analytical operation has a responsibili-
ty to develop an organized quality system. Leadership is an essential factor to
ensure that a quality program is successful. The use of standardized procedures
results in conformity and clarity. The repeatability of analysis is greatly enhanced
by conformance to standardized procedures. Management must set the "ground
rules" for the rest of the organization. Approval of the standard procedures
should be done by management in a systematic process.
The quality management function is responsible for instituting standard
procedures for the QA and QC practices that are to be performed universally.
Input from the staff performing the sampling and the analyses is an important
consideration for management to solicit. The practicality and usefulness of the
standard procedures will be greatly improved by using the input of the staff.
Adherence to the procedures follows readily when staff members have an inte-
gral part in the development of those procedures.
Employee Responsibility
Each member of the sampling or analytical staff is required to be knowl-
edgeable of all SOPs which impact on their job responsibilities. The current SOP
must be used to perform the sampling or analytical tasks. When it is necessary to
alter the procedure for any number of legitimate reasons, the employee has the
responsibility to notify the management that the SOP is in need of revision. Stan-
dard operating procedures which are written with input from the employee are
generally the most practical and useful. Cooperation in writing and revising the
documents is one of the key attributes of a successful quality program.
Analytical Methods
Written analytical methods are a necessity to maintain QA at an acceptable
level. The analysts must follow the steps of a procedure exactly the way they are
written. The comparability of analytical data from analyst to analyst depends on
adherence to written analytical methods. The laboratory should maintain a cur-
rent methods manual which contains all of the methods that are being used in the
laboratory. Methods which are no longer used should be maintained in historical
files, but should be removed from all manuals found in the laboratory. Records
must be maintained to document when a change is made from one version of a
method to a later version of a method. The defensibility of the data depends on
the ability to correlate the data to the version of the method in use at the time of
generation.
Standard Methods
Analytical methods which have undergone a rigorous process for review,
validation, and promulgation are the most desirable methods to be used. Societies
and associations which publish methods have a variety of approval processes
which are used to qualify the method before it is given the final approval of
the organization. The defensibility of analytical data is improved considerably
when data are generated using a standard method. The scope and application of
the method must be appropriate for the material to be analyzed. The misuse of a
standard method is just as unacceptable as using no standard method at all. Some
of the common issuers of methods are: American Society for Testing and Mater-
ials (ASTM), American Public Health Association (APHA), Soil Science Soci-
ety of America (SSSA), Association of Official Analytical Chemists-Interna-
tional (AOAC), United States Environmental Protection Agency (USEPA).
Validation of Methods
Procedures for the validation of analytical methods are needed in the qual-
ity assurance manual. Whether the method is a standard method or one that was
developed within the laboratory, it is critical to demonstrate defensibly that the
analysts are capable of generating results that meet the acceptance criteria of the
method. The number of replicate analyses to be performed and the material to be
used for the validation of the method should be specified clearly in the applica-
ble SOP.
Modification of Methods
When some specific step or procedure of a standard method does not apply
to particular sample matrices or technical improvements can be made, then a
method modification should be written, reviewed, and approved. The nature of
the modification cannot change the basic chemistry of the procedure. Technical
concerns should be reviewed by staff members for correctness and technical
merit. Validation of the modified method should be performed to identify any
potential bias. Modifications must be in writing and should be placed within the
analytical methods manual for ready reference. A standard form for modifica-
tions is helpful to ensure that all necessary issues for a modification to be
approved are included in the modification. The method modification must not be
used before it receives approval.
Approval
An approval system should be put in place to prevent the unauthorized use
of nonstandard methods or method modifications. The person responsible for
method approvals should have significant experience in the use and development
of analytical methods. The approval should be made by the staff member with the
most responsibility for ensuring that analytical data of the highest quality are pro-
duced. In multilaboratory situations, an appointed official should be given the
authority and responsibility to review and approve all method modifications and
nonstandard methods. Distribution of the modification to all holders of the meth-
ods manual also is the responsibility of that appointed official.
Record Keeping
Logbooks
Logbooks for recording information and data should be bound and
designed for the identification of the book and the uniqueness of the page and line
numbers. Logbooks should be sequentially numbered, traceable from the issuer
to the user, and subject to a system for inventory and archive. Meeting these
requirements will result in completely defensible records.
Benchsheets
Various laboratory organization schemes may necessitate the use of
benchsheets for recording analytical data. The practical benefit of using
benchsheets is that they can be designed and used for specific analytical tests.
Having customized forms can be very useful in assuring the complete documen-
tation of all critical parameters. The analyst is somewhat forced to complete all
of the blanks on a benchsheet. Times, temperatures, flow rates, and calculations
can be incorporated in the format resulting in improved record keeping.
Benchsheets should be prelabeled with unique sequential numbers. When a
significant number are accumulated, the benchsheets should be identified and bound
in such a way that they cannot be lost or stolen.
Data Recording
Data should be recorded in permanent ink. Black or blue ink is preferred
because it can be photocopied and does not bleed or become illegible as other
colors do. Data should never be obliterated by crossing out, using opaque correction
fluids, or covered by tape. Corrections to data may be made as described in the
section on "Data Management."
Archiving
Defensibility of information includes the ability to reproduce the informa-
tion at some future date which may occur days, months, or many years after the
information was first generated. To achieve this objective, a system for archiving
must be in place. The procedures for the distribution, use, recovery, and storage
of logbooks must be described. The accumulation and storage of benchsheets and
instrument printouts must be detailed. Secure, fireproof storage is required for all
paper records. An organized system will facilitate retrieval of information. The
system can be set up by date, project, client, or state as long as a test of the sys-
tem during the quality assessment phase of the program proves to be successful.
Procedures for magnetic media should be determined and clearly documented in
an SOP. The use of compact disc read-only memory (CD-ROM) and laser disks
is becoming more acceptable each year because of the advantages of size
reduction, search capability, and permanence. Use of magnetic or optical media may
not be acceptable as the only means of archiving. The responsible authority for
oversight of the work should be consulted before implementing an optical or
magnetic system which replaces the paper records completely.
Computers
The use of computers to capture, calculate, and store analytical data great-
ly enhances the quality and accessibility of the data. The transcription of data
from one location to another can result in a significant number of errors. Putting
the information into an electronic medium reduces the chance of transcription
errors. Information from a computerized system is retrieved in a fraction of the
time that a "paper" system would take. There are some quality assurance consid-
erations that must be used when developing an electronic system.
Security
Access to the computer system must be controlled. The use of passwords
and a hierarchical organization add to the overall security. Managers and supervisors
will have access to a greater amount of information than the analyst or technician.
A magnetic or electronic "audit trail" or history file should be included in
the computer system. Whenever access is granted to stored information, a sepa-
rate nonaccessible record is made of the transaction. Whenever changes to stored
data are made, a justification should be required by the system before the change
can be completed. Both the original information and the corrected information
should be stored permanently. Instrumentation is becoming more computerized,
and the transfer of data by electronic and magnetic means is becoming more com-
monplace. Because of these developments, security is an essential principle of the
quality system.
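The audit-trail behavior described above can be sketched in a few lines. This is an illustrative sketch only, not the interface of any real laboratory information system; the class name, field names, and analyst initials are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedValue:
    """A stored result plus a non-erasable history of every change to it."""
    value: float
    history: list = field(default_factory=list)

    def correct(self, new_value, analyst, justification):
        # The system requires a justification before a change is completed,
        # and the original value is retained, never overwritten.
        if not justification:
            raise ValueError("a justification is required to change stored data")
        self.history.append({
            "old": self.value,
            "new": new_value,
            "analyst": analyst,
            "justification": justification,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        self.value = new_value

rec = AuditedValue(4.72)
rec.correct(4.27, "EKK", "transcription error: digits transposed")
```

Storing the old value, the new value, the analyst, and the justification together in one record satisfies the requirement that both the original and the corrected information be kept permanently.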
Backup Procedures
Computer systems require a specialized procedure which is not needed in
typical paper record systems. Because of the possibilities of magnetic distortion
or mechanical failure, duplication of essential data is required to prevent the per-
manent loss of data. The ease of duplication and the relatively small amount of
space needed to store the duplicate records make backup procedures suitable to
computer systems. The concept of maintaining a backup to all paper records is
impractical and limited by space. The development and use of an SOP is essen-
tial to ensure that the backup procedures are done at the required frequency and
in the correct manner.
Tape. When a tape system is used for archiving and backup procedures,
there are some specific requirements to ensure the integrity of the tapes for future
access. Tapes should be rewound periodically (approximately every 6 mo) and
data should be transferred to new tapes on a regular basis (every 5 yr or less). It
also is necessary to maintain the hardware needed to be able to read the tapes. As
technology improves, size, speed, and capacity also change. Storing tapes which
cannot be read because the hardware has not been maintained should be prevent-
ed. Tapes must be stored in secure, hardened, fireproof facilities to prevent cata-
strophic loss.
Alternate Media. The development of alternate media is improving the
longevity of archiving in some condensed form. The use of CD-ROMs or laser
disks which use optical means rather than magnetic means to store information
will increase the amount of time that information can be stored before the trans-
fer to new media is needed. These optical media are estimated to be stable for
approximately 20 yr. Developments of technology such as these will improve the
ability to retrieve data when long-term storage is needed.
Training
Having qualified personnel is one of the primary components of a quality
system. To assure that the people performing the sampling, analysis, and quality
assessment functions are both competent and conscientious, proper training must
be conducted and documented. Although technically educated people have
knowledge which can be applied to the tasks at hand, it is very important that spe-
cific training in procedures is conducted. Consistency between personnel is
improved through proper training. Specific procedural steps or quality control
practices may not be readily discernible even to a technically educated person.
Serial training, in which each newly trained person trains the next, should be avoided because it allows for "folklore" to creep into the
system. Combining standardized procedures with a good training program will
result in the highest probability for precise and accurate information.
Methods
All personnel who are to perform a specific sampling or analytical method
should receive training in that method and should have documented evidence of
proficiency in the method. The training scenario should include the following: (i)
The trainee reads and understands the written method. (ii) The trainee observes
the trainer perform the method in its entirety. (iii) The trainee performs the
method with observation by the trainer. (iv) The trainee performs the method a
second time using a check sample or a reference material to demonstrate profi-
ciency. (In the case of sampling, proficiency is measured by comparing analyti-
cal results from the trainee's sample to those from the trainer's sample.) (v)
Short-term follow-up by the trainer should occur to ensure that the method is
being followed as written and to answer any questions which may need answers
or clarification. One of the major sources of error in sampling or in the laborato-
ry is the result of nonconformance to the method.
Documentation
All training records must be kept up-to-date and complete. Developing a
training plan is one way to assure that all required training occurs at the proper
frequency. Each sampling or analytical staff member must have a training file
which includes a record of what training is required for that person and docu-
mented evidence that the training has occurred.
Certification
Upon completion of sampling, analytical, or quality assurance/quality con-
trol training, the management of the organization should certify that the person is
qualified to perform the procedures without additional supervision. Annual recer-
tification should be done for all procedures.
Facilities
Maintenance
Written procedures and implementation of decontamination of sampling
equipment are an integral part of assuring quality. Proper use of equipment was
covered to some extent in the section on training. It is essential to keep sampling
equipment in proper working order by replacing expendable parts and by follow-
ing a maintenance plan.
Analytical instruments require servicing at proper intervals. The quality as-
surance manual should contain an instrument maintenance plan. Daily, weekly,
monthly, and yearly preventive maintenance procedures should be enumerated
for each type of instrument. Scheduling of preventive maintenance service calls
by the instrument manufacturer also should be included. A daily instrument
check compared to statistically based control limits should be required as part of
the quality control procedures. This will assist in isolating instrument malfunc-
tions. Maintaining spare parts and keeping a current inventory of parts will assure
reductions in down time and prevent using old or inadequate parts because of the
pressures of data production.
Calibration
Calibration is required before the generation of sample data begins. An SOP
should be written and implemented which includes the frequency of calibration,
the number of calibration standards to be used, and the acceptance criteria for the
calibration curve. The number of calibration points needed increases as the
linearity of the response decreases, as described in the section "Calibration." The range of
concentration to be used for acceptable quantitation should be documented and
followed. The nature of analytical chemistry is based primarily on the compari-
son of the unknown concentrations, i.e., samples, to known concentrations, i.e.,
standards. The use of current, accurate calibration curves cannot be overempha-
sized.
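As a minimal sketch of these requirements, the following fits calibration standards by ordinary least squares and applies an assumed acceptance criterion (correlation coefficient r of at least 0.995) before quantitating an unknown. The criterion, standard concentrations, and responses are illustrative; the actual values belong in the laboratory's SOP.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def correlation(x, y):
    """Pearson correlation coefficient of the calibration data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

conc = [0.0, 1.0, 2.0, 5.0, 10.0]           # standard concentrations (mg/L)
resp = [0.002, 0.101, 0.198, 0.497, 1.004]  # instrument responses

slope, intercept = fit_line(conc, resp)
r = correlation(conc, resp)
# Acceptance check before any samples are quantitated.
assert r >= 0.995, "calibration rejected; recalibrate before running samples"

# Quantitate an unknown only within the range covered by the standards.
unknown = (0.351 - intercept) / slope
```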
List
The sampling and analytical personnel should maintain a current list of all
sampling and analytical equipment. The instrument list should contain the model
number, serial number, date of purchase, and general condition. The list should
be kept current, reflecting recent acquisitions or retirement of equipment. The list
helps to show the current assets of the organization at a glance and assists
management in budgeting for future replacement of old equipment.
Data Management
An SOP should be written for all phases of data management. The collection,
calculation, verification, and reporting of data are to be included. A system
for organization of the data and the process for recovery of archived data should
be developed and documented.
Primary Data
Primary data are sometimes referred to as "raw" data. The first record of
the data constitutes primary data. Instruments provide primary data in the form of
printouts, chromatograms, or spectra. All of these formats must be dated and ini-
tialed by the analyst. Sampling events also include forms of primary data in field
notebooks, chains of custody, and sample labels. Analytical procedures that do
not produce printed records require that all primary data be recorded in bound,
numbered logbooks or on prenumbered benchsheets that will be subsequently
bound. The accuracy and completeness of primary data are essential for having
truly defensible records.
Secondary Data
Secondary data arise when primary data are used to calculate an analytical
result or when primary data are copied into another format. The SOP on data
management must include mechanisms for the review of secondary data and the
procedures for corrective action when errors are found. Quality control limits of
acceptability are another form of secondary data. If the acceptance limits are inac-
curate, a significant amount of analytical data can be generated that appears to be
acceptable when, in fact, it is not.
Calculations
All calculations should be clearly understandable. Logbook or benchsheet
formats which include the primary data, the analytical factor, the dilution factor,
and the results are helpful to ensure the accuracy of the calculation and to allow
verification during data review. Dependence on calculator programs or computer programs
without independent verification of the accuracy of the program can lead to the
generation of a significant number of errors.
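A logbook or benchsheet row of that kind might be modeled as follows. The field names and factor values are illustrative, and the final check stands in for the independent verification performed during data review.

```python
def final_result(raw_reading, analytical_factor, dilution_factor):
    """Concentration in the original sample from a diluted-extract reading."""
    return raw_reading * analytical_factor * dilution_factor

# One calculation record kept with all of its inputs, so a reviewer can
# verify the arithmetic without the analyst present.
record = {
    "raw_reading": 0.42,        # instrument reading of the diluted extract
    "analytical_factor": 25.0,  # e.g., extract volume / sample weight
    "dilution_factor": 10.0,    # extract diluted 1:10 before measurement
}
record["result"] = final_result(record["raw_reading"],
                                record["analytical_factor"],
                                record["dilution_factor"])

# Independent verification during data review: recompute and compare.
assert abs(record["result"] - 0.42 * 25.0 * 10.0) < 1e-9
```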
Corrections
The SOP should include the correct procedure for making corrections in
primary or secondary data. It is generally accepted to draw a single line through
the incorrect information, add the correct information, and then date and initial
the correction. When it is not obvious why the correction is being made, then
additional notes of explanation should be included. The true test of defensible
data occurs when the trail from primary data to analytical result can be followed
without any explanations from the personnel involved. The records should tell the
"whole story."
Documentation
Documentation includes two concepts: written documents and the process
of documentation. The number, types, distribution, and revision process for all
written manuals must be included in the SOP on documentation. These may
include the sampling procedures, analytical methods, and quality assurance man-
ual along with any other documents that the organization deems necessary. The
personnel responsible for the generation, distribution, and revision of the manu-
als should be designated in the SOP.
ASSESSMENT
A functional quality assurance system includes a mechanism for the assess-
ment of system implementation and performance. The quality assurance system
uses the assessment phase to drive corrective action and subsequent improvements
to the system. Typically, quality assurance audits are conducted
to determine the status of conformance and to develop action plans to eliminate
occurrences of nonconformance. The assessment can and should be made both
internally and externally.
Internal
Supervisors, managers, or quality assurance officers are the primary agents
for performing internal assessment of the quality assurance system. A plan should
be developed to review various aspects of the program on a frequency that corre-
sponds to the importance of the particular aspect. Some quality assurance activi-
ties may only need to be reviewed on a quarterly basis, whereas others may need
attention weekly. The assessor should use observation of activities along with
reviews of documentation to determine the degree of conformance. Training
needs may be determined from the internal assessment function. Good managers
not only know where improvements are needed, but also will take the necessary
steps to correct the situation.
External
PROFICIENCY TESTING
Proficiency testing schemes play an important role in a quality assurance
system. Samples of known concentration are submitted to a group of laboratories.
The samples are "blind" inasmuch as the laboratories do not know the true value
for the sample. Some programs may include "double blind" samples. In this case,
the laboratories do not know that the sample is a proficiency sample. The materials
are submitted under the pretense of being a routine sample. In either case,
the results are used to determine the correctness of the laboratory's data and may
be used to certify the laboratory for future work. The "true value" of the proficiency
sample can be determined in a variety of ways. The concentration of a synthetic
sample can be determined by weights and volumes. All of the nonoutlier
data received from the laboratories can be used to determine the "true value."
This is commonly referred to as the consensus method. A small group of previ-
ously qualified laboratories may be used in a preliminary round to determine the
"true value" that will be used for subsequent rounds. Reports from the organizers
of the proficiency testing scheme should be used as a feedback mechanism to the
laboratory personnel.
QUALITY CONTROL
Analytical chemistry without QC is guesswork. For the words "quality con-
trol," one may substitute other words such as calibration, contamination control,
stability, precision, or accuracy which are all part of QC.
DEFINITIONS
Quality control is based primarily on the use of statistics. A thorough
review of statistical concepts is suggested. Several statistical terms are defined
here to aid the discussion. In addition, several analytical quality control terms are
defined.
Test Portion
The test portion is the volume or weight of material that is prepared for
measuring the parameter of interest. The amount must be sufficient to be representative
of the sample. In turn, the sample is assumed to be representative of the
population of interest. For solid materials, particle size reduction by crushing or
grinding may be necessary to allow the use of a smaller test portion weight. Such
mechanical sample manipulation may alter physical characteristics that, in turn,
affect sample attributes such as adsorption, mineral structure, or reactivity.
Replicate
The term replicate describes multiple test portions or multiple instrument
measurements on one prepared test portion. The data may be evaluated by exam-
ining the relative percent difference, relative standard deviation, or coefficient of
variation.
Duplicates
Duplicates are specific replicates. This term usually describes two separate
test portions that are subjected to the same preparation procedure and then to the
same measurement procedure. As for replicates, the data may be evaluated by
examining the relative percent difference, relative standard deviation, or coeffi-
cient of variation.
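The evaluation statistics named above are simple to compute; the duplicate and replicate values below are illustrative.

```python
from statistics import mean, stdev

def rpd(a, b):
    """Relative percent difference between two duplicate results."""
    return abs(a - b) / ((a + b) / 2) * 100

def rsd(values):
    """Relative standard deviation (coefficient of variation), in percent."""
    return stdev(values) / mean(values) * 100

dup = (14.8, 15.2)                      # two duplicate test portions
reps = [14.8, 15.2, 15.0, 14.9, 15.1]   # a larger replicate set

print(round(rpd(*dup), 2))   # 2.67
print(round(rsd(reps), 2))   # 1.05
```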
Standard Reference Materials
Standard reference materials (SRMs) are certified materials available from the
National Institute of Standards and Technology (NIST). The certified values and
acceptance criteria are determined for the material based on a variety of
analytical methods. Not all matrix types are available
from NIST. Confidence in the accuracy of the analytical procedure is increased
significantly when a laboratory can demonstrate satisfactory performance for
SRM analysis.
Control Charts
Control charts are the real-time plots of data derived from control samples
such as duplicates, spikes, and quality control check samples. Acceptance crite-
ria are determined and used to evaluate the data. The plots are made and evaluat-
ed by the analyst so that any out-of-control situation may be corrected immedi-
ately. It is important to note that control charts which are not plotted at the time
of analysis are merely historical records of performance and are not viable mech-
anisms for the control of the analytical process.
Precision
Reproducibility
Accuracy
Accuracy is the agreement of a measured value with the true value of the
parameter of interest. Accuracy is often expressed as percentage recovery or the
percentage difference from the certified value of a standard reference material.
Bias
Bias refers to a systematic difference between the determined value and the
true value. Bias is the measurement of systematic error.
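For example, percentage recovery and bias for repeated determinations of a certified material might be computed as follows (the certified and measured values are illustrative):

```python
certified = 50.0                        # certified concentration of an SRM
measured = [49.1, 48.7, 49.4, 48.9]     # repeated determinations

mean_measured = sum(measured) / len(measured)
recovery = mean_measured / certified * 100  # percent recovery
bias = mean_measured - certified            # systematic error (negative: low)
```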
STATISTICS
Control Charts
Four consecutive points outside the one-standard-deviation limits also are a cause
for concern.
A systematic trend in QC data also indicates an out-of-control situation.
Such trends may be shown by a series of seven values that occurs above or below
the mean or by patterns that appear in the data, which may relate to variables such
as room temperature, time of day, or the analyst.
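Two such rules can be sketched directly. The seven-values-on-one-side rule comes from the text above; the one-point-beyond-3s rule is included as a common additional convention, and the chart values are illustrative.

```python
def beyond_limit(values, center, s, k):
    """Values falling outside center +/- k standard deviations."""
    return [v for v in values if abs(v - center) > k * s]

def longest_run_one_side(values, center):
    """Length of the longest run of consecutive values on one side of center."""
    longest = run = 0
    prev = 0
    for v in values:
        side = (v > center) - (v < center)   # +1 above, -1 below, 0 on center
        run = run + 1 if (side != 0 and side == prev) else (1 if side != 0 else 0)
        prev = side
        longest = max(longest, run)
    return longest

center, s = 10.0, 0.5   # established mean and standard deviation of the QC sample
chart = [10.1, 9.8, 10.2, 10.3, 10.1, 10.2, 10.4, 10.1, 10.2]  # drifting high

out_of_control = bool(beyond_limit(chart, center, s, 3)) or \
                 longest_run_one_side(chart, center) >= 7
```

Here no point exceeds the 3s limits, but the last seven values all sit above the mean, so the chart signals an out-of-control condition.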
As additional data are obtained, warning and control limits need to be
updated on a periodic basis. Depending upon the amount of data generated, this
updating may occur weekly, monthly, yearly, or after a certain number of values
are obtained. Examination of the data will indicate if the control limits are adequate.
If data consistently fall within one standard deviation of the mean, the control
limits are too wide to be useful in "controlling" the analytical system. Alternatively,
if more than 5% of the data fall outside two standard deviations from the
mean, either the control limits do not adequately address the variability of the
analytical system and need to be updated or the system is severely out-of-control.
When the control limits are updated, all data that have been accumulated
should be used for estimating the statistics: mean and standard deviation. This
can best be accomplished by pooling the data. A statistical text can be consulted
as a source for the appropriate equations.
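One common pooling approach, assuming each period is summarized by its count, mean, and standard deviation, combines the within-period variances. This is a sketch; a statistical text should be consulted for the equations appropriate to the laboratory's situation.

```python
import math

def pooled_stats(groups):
    """groups: list of (n, mean, s) tuples, one per period.
    Returns the overall mean and the within-period pooled standard deviation."""
    total_n = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / total_n
    pooled_var = sum((n - 1) * s ** 2 for n, _, s in groups) / (total_n - len(groups))
    return grand_mean, math.sqrt(pooled_var)

# Three months of control-sample summaries (n, mean, s); values illustrative.
months = [(20, 10.02, 0.21), (22, 9.97, 0.25), (18, 10.05, 0.19)]
mean, s = pooled_stats(months)
warning_limits = (mean - 2 * s, mean + 2 * s)
control_limits = (mean - 3 * s, mean + 3 * s)
```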
When control charts are maintained and evaluated at the time of analysis,
corrective actions may be taken immediately. This will result in significant time
savings, because routine samples are not analyzed when the measurement system
is out-of-control. Some samples may be subject to a holding time during which
the parameter of interest must be measured. Plotting and evaluating control charts
in real time will allow for any reruns to be done before the samples are invali-
dated because of holding times.
Outliers
An outlier is a value that does not belong to the population that is
described by the accumulated statistical data. An outlier may refer to a sample, a
test portion, an analytical measurement, or a grouping of data from one analyti-
cal run or from a laboratory in a proficiency program.
A suspected outlier may be identified on a control chart as a result that is
out-of-control. When all replicate measurements are ranked in order of magni-
tude, a suspected outlier may appear as an extraneous value. Often the outlier is
due to a transcription or calculation error, and the value can be corrected. In other
cases, contamination or analyte loss may be suspected, especially when a QC
check sample indicates possible contamination or loss. Those outlier values can
be eliminated from the data set for cause.
Several statistical tests have been developed to evaluate data for outliers,
including Dixon, Grubbs, Cochran, and Youden tests. A statistical text (Barnett &
Lewis, 1978) can be consulted as a source for the appropriate test for identifying
outliers in a data set.
As a word of caution, data should not be eliminated capriciously; there
must be either an attributable cause or a statistical basis for the elimination of out-
liers from a data set. After outliers are removed, the descriptive statistics (i.e.,
mean, variance, and standard deviation) are determined.
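As an illustration of one of the tests named above, the two-sided Grubbs statistic can be sketched as follows; the critical value 2.290 is the commonly tabulated value for n = 10 at the 5% level, and the data are invented:

```python
import math

def grubbs_statistic(data):
    """G = (largest absolute deviation from the mean) / s."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return max(abs(x - mean) for x in data) / s

# Nine consistent results and one suspect high value.
data = [10.1, 10.0, 10.2, 9.9, 10.1, 10.0, 10.2, 10.1, 9.9, 12.5]
G = grubbs_statistic(data)
is_outlier = G > 2.290  # tabulated critical value for n = 10, alpha = 0.05
```

If G exceeds the tabulated value, the suspect result may be removed for statistical cause; for other sample sizes or significance levels, the tables in a statistics text apply.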
38 KLESTA & BARTZ
PRECISION
RSD = (s / x̄) × 100
Statistically, the total variance equals the sum of the variances attributable
to each source
r = 2√2 · s_r

R = 2√2 · s_R
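The precision statistics above can be sketched as follows, taking s_r and s_R to denote the within- and between-laboratory (repeatability and reproducibility) standard deviations; the function names are illustrative:

```python
import math

def rsd(values):
    """Relative standard deviation in percent: (s / mean) * 100."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return s / mean * 100.0

def repeatability_limit(s_r):
    """r = 2*sqrt(2)*s_r, about 2.8*s_r."""
    return 2 * math.sqrt(2) * s_r

def reproducibility_limit(s_R):
    """R = 2*sqrt(2)*s_R, about 2.8*s_R."""
    return 2 * math.sqrt(2) * s_R
```

Two results from the same laboratory that differ by more than r, or from different laboratories by more than R, are unlikely (at the 95% level) to differ by chance alone.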
Taylor (1987) uses these concepts to describe the short-term and long-term
standard deviations. The short-term standard deviation is usually smaller than the
long-term standard deviation or, in other words, the measurement system is usu-
ally more precise over short periods of time than over long periods of time. This
is because some sources of variability may not vary over short intervals of time.
For example, the same calibration curve may be used for sample measurements
or the same calibration standards may be used to derive the calibration curve.
Samples may be prepared with the same lots of reagents. The long-term standard
deviation is subject to a greater amount of variability from the identified sources.
ACCURACY
True Value
The true value is the mean of the population of interest. This value cannot
be known, but can be estimated by the arithmetic mean derived from samples.
QUALITY ASSURANCE & QUALITY CONTROL 41
The best estimate of the true value is made by the sample mean of measurements
that are free of systematic error, that is, measurements that are unbiased. Both
precise and imprecise measurement systems can yield good estimates of the true
value. However, imprecise measurement systems (e.g., an analyte with a large
standard deviation) need a larger number of replicates to yield a good estimate of
the true value. The more precise a measurement system is, the fewer replicates
are needed to yield a good estimate of the true value.
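One way to see this trade-off is through the standard error of the mean, s/√n, which shrinks as replicates are added; a hypothetical helper that solves for the number of replicates needed to reach a target standard error:

```python
import math

def replicates_needed(s, target_se):
    """Smallest n for which s / sqrt(n) <= target_se."""
    return math.ceil((s / target_se) ** 2)
```

Doubling the method's standard deviation quadruples the number of replicates required for the same quality of estimate.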
Bias
CONTAMINATION
Contamination of the sample may occur during sampling, shipment, prepa-
ration, storage, or laboratory analysis. The types of blank samples that can be
Scenario One
In this situation, analytical data are generated for a specific project. The
split samples are part of the project design. Either the project involves the use of
several laboratories, or one laboratory analyzes samples over several years or sea-
sons. Ultimately, the data from all laboratories or all seasons are to be combined
for use.
Blind Duplicates
For meaningful statistics to be obtained for intralaboratory precision, each
laboratory (or one laboratory in each season of the project) analyzes at least seven
pairs of blind duplicates for each analyte at each concentration range of interest.
The incorporation of these additional blind duplicates is unnecessary if the dupli-
cates described in the section "Evaluating Sources of Imprecision" (i) are chosen
randomly or are designated by the project management; (ii) are not given prefer-
ential treatment by the analyst, that is, the duplicate samples are prepared and
analyzed in the same manner as are routine samples; and (iii) are sufficient in
number to derive meaningful statistics for within laboratory precision.
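A common formula for estimating within-laboratory precision from k duplicate pairs is s = √(Σd²/2k), where d is the difference within each pair; a sketch, with an illustrative function name:

```python
import math

def s_from_duplicates(pairs):
    """Within-lab standard deviation from duplicate pairs: sqrt(sum(d^2) / 2k)."""
    k = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * k))
```

With at least seven pairs per analyte and concentration range, the resulting estimate has enough degrees of freedom to be compared meaningfully between laboratories or seasons.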
Split Samples
For the assessment of interlaboratory precision and bias, splits from a sta-
ble, homogeneous sample are analyzed by each laboratory (or by one laboratory
during each season) for each analyte at each concentration of interest. Again, at
least seven splits should be provided to each laboratory (or to one laboratory dur-
ing each season) for analysis of each analyte in each concentration range of inter-
est. Statistics are used to evaluate estimates of precision and bias between labo-
ratories.
If interlaboratory bias is negligible (i.e., there are no significant differences
between analyte means) and precision is not significantly different between lab-
oratories, the data are comparable. If there are significant differences between
analyte means, but the estimates of within laboratory precision are not signifi-
cantly different, the statistical data from the split samples may be used to derive
correction factors to normalize the routine data prior to further evaluation. If data
generated from the split samples can be correlated to SRM performance data
from the laboratories during the same time frame, accuracy can be evaluated for
all laboratories (or each season) in addition to the relative interlaboratory bias.
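When the relative bias between laboratories is constant, a simple multiplicative correction factor can be derived from the split-sample means; the values below are invented for illustration:

```python
def correction_factor(reference_mean, lab_mean):
    """Multiply the lab's routine results by this factor to normalize them."""
    return reference_mean / lab_mean

# Hypothetical split-sample means: reference lab 10.0, routine lab 9.5.
factor = correction_factor(10.0, 9.5)
normalized = [round(x * factor, 3) for x in [9.4, 9.6, 9.5]]
```

More elaborate corrections (e.g., a regression of one laboratory's results on the other's) would be needed if the bias varies with concentration.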
Scenario Two
In this situation, data from multiple laboratories are used to make decisions,
such as fertilizer recommendations or whether to excavate contaminated soil or
not. Usually these laboratories are commercial or "production" laboratories. Ulti-
mately the management and the customer want assurance that the data from dif-
ferent sources can be used in combination and will result in the same recommen-
dation.
Blind Duplicates
Blind duplicates are submitted by the laboratory management or quality
assurance personnel on a periodic basis for each analyte of interest. The identity
of the sample is not known by the analyst, and the analyst is not given any infor-
mation about the expected analyte concentration; these samples are referred to as
"double blinds." The submitter examines the resulting data in comparison to the
original, routine data and evaluates the results according to the expectation for
within laboratory precision. When data for any analyte do not meet the accep-
tance criteria, possible errors in areas such as calculation, weighing, diluting, and
calibration are investigated. Reanalysis or the submission of another blind dupli-
cate may occur if attributable causes cannot be identified. This is an effective way
to identify parts of the analytical process which may need corrective action or
additional oversight.
Split Samples
Split samples for "production" laboratories may be run in conjunction with
a reference laboratory or may be from a round robin.
When a reference laboratory is used, the originating laboratory provides a
split sample to the reference laboratory. The reference laboratory must use the
same analytical procedure that was used to generate the original, routine data.
The data from the originating and reference laboratories are evaluated according
to the objective for interlaboratory precision. Data which do not meet criteria are
investigated in both the originating and the reference laboratories for attributable
errors. This is another opportunity for identifying parts of the analytical process
which may need corrective action or additional oversight. If a group of "produc-
tion" laboratories uses the same reference laboratory, interpretations can be made
regarding the interlaboratory bias or comparability of data among the "produc-
tion" laboratories. Trends in data can be observed even if interlaboratory bias is
not rigorously determined.
If a proficiency testing program (round robin) is employed for identifying
interlaboratory bias, splits from one homogeneous material are sent from the ref-
eree to the various participating laboratories. The participating laboratories need
not all use the same analytical procedure unless the program requires it. The data
are reported to the referee within a certain time frame.
Then the referee assembles the data, performs statistical evaluation of the data,
and issues a report to the participating laboratories. In this way, the proficiency
of the laboratory is assessed against the consensus statistics. If the data are pre-
sented in a graphic display, the relative interlaboratory bias is illustrated.
CALIBRATION
Range of Analysis
The method detection limit (MDL) is based on the ability of the method to
determine the concentration of an analyte in a sample matrix. The MDL is calcu-
lated as three times the standard deviation (s₀) of at least seven replicate mea-
surements of method blanks (if an instrumental response can be measured) or low-
level samples (at approximately 3 to 5 times the estimated MDL). To assess the
potential variability associated with different analysts, different days of prepara-
tion, and different instrument calibrations, the seven replicate measurements (six
degrees of freedom) should be made on different days or shifts.
MDL = 3s₀
Because MDLs are based on the standard deviation, which is not an addi-
tive statistic, MDLs cannot be averaged. However, data that are obtained over a
period of time can be pooled to obtain an MDL. Often a low-concentration sam-
ple is included in each analytical run as an MDL check sample. The resulting data
can be used to determine whether the stated MDL is maintained over time.
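The MDL calculation described above can be sketched directly; the replicate data in the example would come from method blanks or low-level samples analyzed on different days:

```python
import math

def mdl(replicates):
    """MDL = 3 * s0, the standard deviation of >= 7 replicate measurements."""
    n = len(replicates)
    if n < 7:
        raise ValueError("at least seven replicates are needed")
    mean = sum(replicates) / n
    s0 = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return 3 * s0
```

Spreading the seven replicates across days, analysts, and calibrations, as the text recommends, keeps the resulting s₀ from understating routine variability.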
Limit of Quantitation
The limit of quantitation (LOQ) is the lowest level at which analytical mea-
surement becomes meaningful in quantitating a result. Analytical results below
this limit are reported as "less than" values. Although the LOQ has been arbi-
trarily defined by an American Chemical Society committee as 10 times the stan-
dard deviation of the blank, the data user should examine the data to decide if this
definition is justified. This value is probably suitable for spiked reagent water, but
is not suitable for more complex matrices. It is more likely that, for soils, the
appropriate LOQ is a higher multiple of the standard deviation. An empirical
value for the LOQ can be derived by examining the inflection point in the curve
of RSD vs. increasing analyte concentration for analytical duplicates, as
described in the section "Evaluating Sources of Imprecision."
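A sketch contrasting the ACS definition with an empirical LOQ read from duplicate RSD data; the 10% RSD threshold below is an illustrative assumption, not a value from the text:

```python
def loq_acs(s_blank):
    """ACS committee definition: LOQ = 10 * standard deviation of the blank."""
    return 10 * s_blank

def loq_empirical(concentrations, rsds, max_rsd=10.0):
    """Lowest concentration at which duplicate RSD stays at or below max_rsd."""
    for c, r in sorted(zip(concentrations, rsds)):
        if r <= max_rsd:
            return c
    return None
```

For a soil matrix, the empirical value found this way is often a higher multiple of the blank standard deviation than the ACS factor of 10.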
Limit of Linearity
Linear curves are most commonly used for laboratory analysis. A straight-
line relationship between the concentration and the response is easily understood
and allows direct calculation of the result. A correlation coefficient can be calcu-
lated to determine the "straightness" of the curve. It is commonly accepted that a
calibration curve should have no less than three standards distributed from low to
high concentration. Criteria can be established to evaluate the acceptability of the
curve before it is used. After determining the linearity of a curve by using multi-
ple standards, curves with very high correlation coefficients can be determined on
a routine basis with as few as two standards.
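The correlation coefficient used to judge "straightness" is the standard Pearson r of response on concentration, which can be sketched as:

```python
import math

def correlation(conc, resp):
    """Pearson correlation coefficient of response vs. concentration."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in resp)
    return sxy / math.sqrt(sxx * syy)
```

An acceptance criterion (e.g., r above some stated minimum) can then be applied to each calibration curve before it is used for sample quantitation.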
Nonlinear calibration curves can be used effectively to analyze samples.
The shape of the curve must be reproducible both in curvature and magnitude. A
nonlinear curve can be used to increase the range of analysis. In atomic absorp-
tion spectroscopy, for example, a less sensitive wavelength for an element can be
chosen to eliminate the need for serial dilutions. A wavelength that is less influ-
enced by interferences, even if its calibration curve is nonlinear, may also be cho-
sen to improve the accuracy of the results.
The number of calibration standards needed increases as the curvature of the
calibration function increases. When a nonlinear curve is to be used for analysis,
the number of calibration standards must be increased significantly.
Plotting nonlinear curves manually is an acceptable procedure. Day-to-day
comparisons of curvature and magnitude can be made readily.
Computer software used on analytical instruments allows for the use of
nonlinear calibration curves. These computers and software packages which per-
form curve-fitting operations can be used effectively. Quadratic equations or
polynomial equations can be used to define the calibration curve. The analyst
must have sufficient experience to choose the correct equation for the curve. Care
should be taken to assure that the correct equation is used for the curve and that
the system generating the curve is reproducible.
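As an illustration of a curve-fitting operation, a quadratic calibration curve y = a + bx + cx² can be fitted exactly through three standards; the standard concentrations and responses below are invented:

```python
def fit_quadratic(p1, p2, p3):
    """Coefficients (a, b, c) of y = a + b*x + c*x**2 through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Lagrange interpolation expanded into polynomial coefficients.
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    c = y1 / d1 + y2 / d2 + y3 / d3
    b = -y1 * (x2 + x3) / d1 - y2 * (x1 + x3) / d2 - y3 * (x1 + x2) / d3
    a = y1 * x2 * x3 / d1 + y2 * x1 * x3 / d2 + y3 * x1 * x2 / d3
    return a, b, c

# Hypothetical standards: concentration 0, 5, 10 giving absorbance 0.00, 0.45, 0.80.
a, b, c = fit_quadratic((0.0, 0.00), (5.0, 0.45), (10.0, 0.80))
```

A routine least-squares fit would use more than three standards, as the text requires; checking day-to-day that a, b, and c stay stable is one way to verify that the curve is reproducible in curvature and magnitude.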
REFERENCES
American Society for Testing and Materials. 1991. Annual book of ASTM standards. ASTM,
Philadelphia, PA.
American Public Health Association. 1992. Standard methods for the examination of water and
wastewater. 18th ed. Am. Public Health Assoc., Am. Water Works Assoc., Water Environ.
Fed., Washington, DC.
Association of Official Analytical Chemists-International. 1990. Official methods of analysis. 15th
ed. AOAC-Int., Arlington, VA.
Barnett, V., and T. Lewis. 1978. Outliers in statistical data. John Wiley & Sons, Chichester, England.
Cochran, W.G. 1947. Some consequences when the assumptions for analysis of variance are not sat-
isfied. Biometrics 3:22-38.
Code of Federal Regulations. Good laboratory practice for nonclinical laboratory studies. Title 21,
Part 58. U.S. Gov. Print. Office, Washington, DC.
Code of Federal Regulations. Good laboratory practice standards. Title 40, Parts 160 and 792. U.S.
Gov. Print. Office, Washington, DC.
Dixon, W.J., and F.J. Massey, Jr. 1969. Introduction to statistical analysis. 3rd ed. McGraw-Hill Book
Co., New York.
Grubbs, F.E. 1969. Procedures for detecting outlying observations in samples. Technometrics
11:1-21.
Klute, A. 1986. Methods of soil analysis. 2nd ed. Agron. Monogr. 9. ASA and SSSA, Madison, WI.
Taylor, J.K. 1987. Quality assurance of chemical measurements. Lewis Publ., Inc., Chelsea, MI.
U.S. Environmental Protection Agency. 1986. Test methods for evaluating solid waste. SW-846. 3rd
ed. U.S. Gov. Print. Office, Washington, DC.
Youden, W.J., and E.H. Steiner. 1975. Statistical manual of the Association of Official Analytical
Chemists. AOAC, Washington, DC.