
No single NDT method will work for all flaw detection or measurement applications.

Each of the methods has advantages and disadvantages when compared to the others. The summary below covers the scientific principles, common uses, advantages, and disadvantages of five of the most frequently used NDT methods: penetrant testing, magnetic particle testing, ultrasonic testing, eddy current testing, and radiographic testing.

Scientific Principles
Penetrant Testing: Penetrant solution is applied to the surface of a precleaned component. The liquid is pulled into surface-breaking defects by capillary action. Excess penetrant material is carefully cleaned from the surface. A developer is applied to pull the trapped penetrant back to the surface, where it spreads out and forms an indication. The indication is much easier to see than the actual defect.

Magnetic Particle Testing: A magnetic field is established in a component made from ferromagnetic material. The magnetic lines of force travel through the material and exit and reenter the material at the poles. Defects such as cracks or voids cannot support as much flux and force some of the flux outside of the part. Magnetic particles distributed over the component will be attracted to areas of flux leakage and produce a visible indication.

Ultrasonic Testing: High frequency sound waves are sent into a material by use of a transducer. The sound waves travel through the material and are received by the same transducer or a second transducer. The amount of energy transmitted or received and the time at which the energy is received are analyzed to determine the presence of flaws. Changes in material thickness and changes in material properties can also be measured.

Eddy Current Testing: Alternating electrical current is passed through a coil, producing a magnetic field. When the coil is placed near a conductive material, the changing magnetic field induces current flow in the material. These currents travel in closed loops and are called eddy currents. Eddy currents produce their own magnetic field that can be measured and used to find flaws and characterize conductivity, permeability, and dimensional features.

Radiographic Testing: X-rays are used to produce images of objects using film or another detector that is sensitive to radiation. The test object is placed between the radiation source and the detector. The thickness and density of the material that the X-rays must penetrate affect the amount of radiation reaching the detector. This variation in radiation produces an image on the detector that often shows internal features of the test object.

Main Uses
Penetrant Testing: Used to locate cracks, porosity, and other defects that break the surface of a material and have enough volume to trap and hold the penetrant material. Liquid penetrant testing is used to inspect large areas very efficiently and will work on most nonporous materials.

Magnetic Particle Testing: Used to inspect ferromagnetic materials (those that can be magnetized) for defects that result in a transition in the magnetic permeability of the material. Magnetic particle inspection can detect surface and near-surface defects.

Ultrasonic Testing: Used to locate surface and subsurface defects in many materials, including metals, plastics, and wood. Ultrasonic inspection is also used to measure the thickness of materials and otherwise characterize material properties based on sound velocity and attenuation measurements.

Eddy Current Testing: Used to detect surface and near-surface flaws in conductive materials, such as metals. Eddy current inspection is also used to sort materials based on electrical conductivity and magnetic permeability, and to measure the thickness of thin sheets of metal and nonconductive coatings such as paint.

Radiographic Testing: Used to inspect almost any material for surface and subsurface defects. X-rays can also be used to locate and measure internal features, confirm the location of hidden parts in an assembly, and measure the thickness of materials.

Main Advantages
Penetrant Testing: Large surface areas or large volumes of parts/materials can be inspected rapidly and at low cost. Parts with complex geometry are routinely inspected. Indications are produced directly on the surface of the part, providing a visual image of the discontinuity. Equipment investment is minimal.

Magnetic Particle Testing: Large surface areas of complex parts can be inspected rapidly. Can detect surface and subsurface flaws. Surface preparation is less critical than it is in penetrant inspection. Magnetic particle indications are produced directly on the surface of the part and form an image of the discontinuity. Equipment costs are relatively low.

Ultrasonic Testing: Depth of penetration for flaw detection or measurement is superior to other methods. Only single-sided access is required. Provides distance information. Minimum part preparation is required. Method can be used for much more than just flaw detection.

Eddy Current Testing: Detects surface and near-surface defects. Test probe does not need to contact the part. Method can be used for more than flaw detection. Minimum part preparation is required.

Radiographic Testing: Can be used to inspect virtually all materials. Detects surface and subsurface defects. Ability to inspect complex shapes and multi-layered structures without disassembly. Minimum part preparation is required.

Disadvantages
Penetrant Testing: Detects only surface-breaking defects. Surface preparation is critical, as contaminants can mask defects. Requires a relatively smooth and nonporous surface. Post cleaning is necessary to remove chemicals. Requires multiple operations under controlled conditions. Chemical handling precautions are necessary (toxicity, fire, waste).

Magnetic Particle Testing: Only ferromagnetic materials can be inspected. Proper alignment of the magnetic field and defect is critical. Large currents are needed for very large parts. Requires a relatively smooth surface. Paint or other nonmagnetic coverings adversely affect sensitivity. Demagnetization and post cleaning are usually necessary.

Ultrasonic Testing: Surface must be accessible to the probe and couplant. Skill and training required is more extensive than for other techniques. Surface finish and roughness can interfere with inspection. Thin parts may be difficult to inspect. Linear defects oriented parallel to the sound beam can go undetected. Reference standards are often needed.

Eddy Current Testing: Only conductive materials can be inspected. Ferromagnetic materials require special treatment to address magnetic permeability. Depth of penetration is limited. Flaws that lie parallel to the inspection probe coil winding direction can go undetected. Skill and training required is more extensive than for other techniques. Surface finish and roughness may interfere. Reference standards are needed for setup.

Radiographic Testing: Extensive operator training and skill are required. Access to both sides of the structure is usually required. Orientation of the radiation beam to non-volumetric defects is critical. Field inspection of thick sections can be time consuming. Relatively expensive equipment investment is required. Possible radiation hazard for personnel.


NDT Method Selection

Each NDT method has its own set of advantages and disadvantages and, therefore, some are better suited than others for a particular application. The NDT technician or engineer must select the method that will detect the defect or make the measurement with the highest sensitivity and reliability. The cost effectiveness of the technique must also be taken into consideration. The following table provides some guidance in the selection of NDT methods for common flaw detection and measurement applications.

Eddy Current Inspection Formulas

Ohm's Law
I = V / Z
Where: I = current (amperes), V = voltage (volts), Z = impedance (ohms)

Impedance
Z = √(R² + X_L²)
Where: Z = impedance (ohms), R = resistance (ohms), X_L = inductive reactance (ohms)

Phase Angle
θ = tan⁻¹(X_L / R)
Where: θ = phase angle (degrees), X_L = inductive reactance (ohms), R = resistance (ohms)

Magnetic Permeability
μ = B / H
Where: μ = magnetic permeability (henries/meter), B = magnetic flux density (tesla), H = magnetizing force (amperes/meter)

Relative Magnetic Permeability
μ_r = μ / μ_0
Where: μ_r = relative magnetic permeability (dimensionless), μ = any given magnetic permeability (H/m), μ_0 = magnetic permeability of free space (H/m), which is 1.257 × 10⁻⁶ H/m

Electrical Conductivity (%IACS)
When resistivity ρ (ohm-cm) is known: σ (%IACS) = 1.7241 × 10⁻⁴ / ρ
When conductivity in S/m is known: σ (%IACS) = σ (S/m) / (5.8 × 10⁵)
When conductivity in S/cm is known: σ (%IACS) = σ (S/cm) / (5.8 × 10³)
Where: σ = electrical conductivity, ρ = electrical resistivity; 100 %IACS corresponds to 5.8 × 10⁷ S/m (a resistivity of 1.7241 × 10⁻⁶ ohm-cm)
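A minimal computational sketch of the relations above, written in Python for illustration. The function names and example material values are illustrative, and the conversion constants assume the usual IACS reference (100 %IACS = 5.8 × 10⁷ S/m, i.e. a resistivity of 1.7241 × 10⁻⁶ ohm-cm).

```python
# Sketch of the basic eddy current relations above (Python, illustrative only).
import math

MU_0 = 1.257e-6          # magnetic permeability of free space, H/m
RHO_IACS = 1.7241e-6     # resistivity of the 100 %IACS copper standard, ohm-cm

def impedance(resistance_ohm, inductive_reactance_ohm):
    """Z = sqrt(R^2 + XL^2)"""
    return math.hypot(resistance_ohm, inductive_reactance_ohm)

def phase_angle_deg(resistance_ohm, inductive_reactance_ohm):
    """Phase angle between voltage and current, arctan(XL / R), in degrees."""
    return math.degrees(math.atan2(inductive_reactance_ohm, resistance_ohm))

def relative_permeability(mu_h_per_m):
    """mu_r = mu / mu_0 (dimensionless)."""
    return mu_h_per_m / MU_0

def conductivity_iacs_from_resistivity(rho_ohm_cm):
    """%IACS = 100 * (1.7241e-6 ohm-cm / rho)."""
    return 100.0 * RHO_IACS / rho_ohm_cm

def conductivity_iacs_from_siemens_per_m(sigma_s_per_m):
    """%IACS = 100 * sigma / 5.8e7 S/m."""
    return 100.0 * sigma_s_per_m / 5.8e7

# Example: a coil with R = 10 ohm and XL = 15 ohm
print(impedance(10, 15), phase_angle_deg(10, 15))
# Example: aluminum, rho ~ 2.8e-6 ohm-cm -> roughly 62 %IACS
print(conductivity_iacs_from_resistivity(2.8e-6))
```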

Current Density
J_x = J_0 e^(−x/δ)
Where: J_x = current density at depth x (amperes/m²), J_0 = current density at the surface (amperes/m²), e = base of the natural log = 2.71828, x = distance below the surface, δ = standard depth of penetration

Standard Depth of Penetration
When electrical conductivity σ (S/m) is known:
δ = 1 / √(π f μ σ)
Where: δ = standard depth of penetration (m), π = 3.14, f = test frequency (Hz), μ = magnetic permeability (H/m) (1.257 × 10⁻⁶ H/m for nonmagnetic materials), σ = electrical conductivity (S/m)

When electrical conductivity is known in %IACS:
δ (mm) ≈ 661 / √(f μ_r σ)        δ (in) ≈ 26 / √(f μ_r σ)
Where: δ = standard depth of penetration (mm or in), f = test frequency (Hz), μ_r = relative magnetic permeability (dimensionless), σ = electrical conductivity (%IACS)

When electrical resistivity ρ (ohm-cm) is known:
δ (mm) ≈ 50,300 √(ρ / (f μ_r))        δ (in) ≈ 1980 √(ρ / (f μ_r))
Where: δ = standard depth of penetration (mm or in), ρ = electrical resistivity (ohm-cm), f = test frequency (Hz), μ_r = relative magnetic permeability (dimensionless)
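The depth formula translates directly into code. A short sketch (Python, illustrative names and material values) that evaluates the SI form of the standard depth of penetration and the current density decay:

```python
# Sketch: delta = 1 / sqrt(pi * f * mu * sigma) and Jx = J0 * exp(-x / delta).
import math

MU_0 = 1.257e-6  # H/m

def std_depth_of_penetration_m(freq_hz, sigma_s_per_m, mu_r=1.0):
    """Standard depth of penetration (1/e depth) in meters."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU_0 * sigma_s_per_m)

def current_density_ratio(x_m, delta_m):
    """Jx / J0 at a depth x below the surface."""
    return math.exp(-x_m / delta_m)

# Example: aluminum (~3.5e7 S/m, mu_r = 1) at 100 kHz
delta = std_depth_of_penetration_m(100e3, 3.5e7)
print(f"delta = {delta * 1000:.3f} mm")          # roughly 0.27 mm
print(f"J at 3*delta = {current_density_ratio(3 * delta, delta):.1%} of surface")
```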

Eddy Current Field Phase Lag
In radians: θ = x / δ        In degrees: θ = 57.3 x / δ
Where: θ = phase lag (radians or degrees), x = distance below the surface (in the same units as δ), δ = standard depth of penetration, computed from conductivity in S/m or %IACS, or from resistivity in ohm-cm, using the expressions above

When electrical conductivity σ (S/m) is known, the phase lag can also be written directly as:
θ (radians) = x √(π f μ σ)
Where: θ = phase lag (radians), x = distance below the surface (m), π = 3.14, f = test frequency (Hz), μ = magnetic permeability (H/m), σ = electrical conductivity (S/m)
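A small sketch (Python, illustrative names) of the phase lag and relative strength relations; it reproduces the values in the table that follows:

```python
# Sketch: phase lag theta = x / delta radians (about 57.3 degrees per standard
# depth of penetration) and relative eddy current density e^(-x/delta).
import math

def phase_lag_deg(x, delta):
    """Phase lag of the eddy current field at depth x, in degrees."""
    return math.degrees(x / delta)

def relative_strength(x, delta):
    """Relative eddy current density at depth x: e^(-x/delta)."""
    return math.exp(-x / delta)

# Values at 0, 1, 2 and 3 standard depths of penetration
for n in range(4):
    print(n, f"{relative_strength(n, 1):.0%}", f"{phase_lag_deg(n, 1):.1f} deg")
```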


Standard Depth of Penetration and Phase Angle

Depth    Relative Strength of Eddy Currents    Phase Lag
0        e⁰ = 100%                             0 rad = 0°
1δ       e⁻¹ = 37%                             1 rad = 57.3°
2δ       e⁻² = 14%                             2 rad = 114.6°
3δ       e⁻³ = 5%                              3 rad = 171.9°
4δ       e⁻⁴ = 2%                              4 rad = 229.2°
5δ       e⁻⁵ = 0.7%                            5 rad = 286.5°

Material Thickness Requirement for Resistivity or Conductivity Measurement
When measuring resistivity or conductivity, the thickness of the material should be at least three times the standard depth of penetration (t ≥ 3δ) to minimize material thickness effects.

Frequency Selection for Thickness Measurement of Thin Materials
Selecting a frequency that produces a standard depth of penetration that exceeds the material thickness by 25% will produce a phase angle of approximately 90° between the liftoff signal and the material thickness change signal.

Frequency Selection for Flaw Detection and Nonconductive Coating Thickness Measurements
Defect detection: a test frequency that puts the standard depth of penetration at about the expected depth of the defect will provide good phase separation between the defect and liftoff signals.
Nonconductive coating thickness measurement: to minimize effects from the base metal, the highest practical frequency should be used.

Where: t = material thickness, δ = standard depth of penetration
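Since the frequency selection rules above all amount to placing the standard depth of penetration at a chosen depth, the depth formula can be solved for frequency, f = 1 / (π μ σ δ²). A hedged sketch in Python; the helper name and the material values are illustrative:

```python
# Sketch: pick a test frequency that places one standard depth of penetration
# at a target depth, using f = 1 / (pi * mu * sigma * delta^2).
import math

MU_0 = 1.257e-6  # H/m

def frequency_for_depth_hz(target_delta_m, sigma_s_per_m, mu_r=1.0):
    """Test frequency that puts the standard depth of penetration at target_delta_m."""
    return 1.0 / (math.pi * mu_r * MU_0 * sigma_s_per_m * target_delta_m**2)

# Flaw detection: put delta at the expected defect depth (say 0.5 mm in aluminum)
print(f"{frequency_for_depth_hz(0.5e-3, 3.5e7):.0f} Hz")

# Thickness measurement of thin material: aim for delta ~ 1.25 * thickness
thickness = 1.0e-3
print(f"{frequency_for_depth_hz(1.25 * thickness, 3.5e7):.0f} Hz")
```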

Ultrasonic Formulas

Longitudinal Wave Velocity
V_L = √( E (1 − ν) / (ρ (1 + ν)(1 − 2ν)) )
Where: V_L = longitudinal wave velocity, E = modulus of elasticity, ρ = density, ν = Poisson's ratio

Shear Wave Velocity
V_S = √( G / ρ ) = √( E / (2ρ (1 + ν)) )
Where: V_S = shear wave velocity, E = modulus of elasticity, ρ = density, ν = Poisson's ratio, G = shear modulus

Wavelength
λ = V / F
Where: λ = wavelength, V = velocity, F = frequency

Refraction (Snell's Law)
sin θ_1 / sin θ_2 = V_1 / V_2
Where: θ_1 = angle of the incident wave, θ_2 = angle of the refracted wave, V_1 = velocity of the incident wave, V_2 = velocity of the refracted wave

Acoustic Impedance
Z = ρ V
Where: Z = acoustic impedance, ρ = density, V = velocity

Reflection Coefficient
R = ( (Z_2 − Z_1) / (Z_2 + Z_1) )²
Where: R = reflection coefficient, Z_1 = acoustic impedance of medium 1, Z_2 = acoustic impedance of medium 2
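A compact sketch of these relations in Python; the material constants in the example are rough, typical values for steel and are used only for illustration:

```python
# Sketch of the wave velocity, wavelength, refraction and reflection relations
# above, in SI units.
import math

def longitudinal_velocity(E, rho, nu):
    """V_L = sqrt( E*(1-nu) / (rho*(1+nu)*(1-2*nu)) )"""
    return math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))

def shear_velocity(E, rho, nu):
    """V_S = sqrt( E / (2*rho*(1+nu)) ), i.e. sqrt(G / rho)."""
    return math.sqrt(E / (2 * rho * (1 + nu)))

def wavelength(velocity, frequency):
    return velocity / frequency

def refracted_angle_deg(incident_deg, v1, v2):
    """Snell's law: sin(theta2) = sin(theta1) * V2 / V1."""
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) * v2 / v1))

def acoustic_impedance(rho, velocity):
    return rho * velocity

def reflection_coefficient(z1, z2):
    """Fraction of incident energy reflected at a boundary."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Example: steel-like values (E ~ 200 GPa, rho ~ 7800 kg/m^3, nu ~ 0.29)
vl = longitudinal_velocity(200e9, 7800, 0.29)
vs = shear_velocity(200e9, 7800, 0.29)
print(f"VL ~ {vl:.0f} m/s, VS ~ {vs:.0f} m/s, lambda at 5 MHz ~ {wavelength(vl, 5e6)*1e3:.2f} mm")
```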

Near Field
N = D² F / (4 V) = D² / (4 λ)
Where: N = near field length, D = transducer diameter, F = frequency, V = velocity, λ = wavelength

Beam Spread Half Angle
sin(θ/2) = 1.22 V / (D F) = 1.22 λ / D
Where: θ/2 = beam spread half angle, D = transducer diameter, V = velocity, F = frequency, λ = wavelength

Decibel (dB) Gain or Loss
dB = 20 log₁₀( P_2 / P_1 )
Where: dB = decibel gain or loss, P_1 = pressure amplitude 1, P_2 = pressure amplitude 2
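The near field, beam spread and dB relations in code form (Python); the transducer parameters in the example are assumed for illustration, not taken from the text:

```python
# Sketch of the near field, beam spread half angle and dB gain/loss relations.
import math

def near_field_length(diameter, frequency, velocity):
    """N = D^2 * F / (4 * V)"""
    return diameter**2 * frequency / (4 * velocity)

def beam_spread_half_angle_deg(diameter, frequency, velocity):
    """sin(theta/2) = 1.22 * V / (D * F)"""
    return math.degrees(math.asin(1.22 * velocity / (diameter * frequency)))

def db_change(p1, p2):
    """Gain or loss between two pressure (or voltage) amplitudes."""
    return 20.0 * math.log10(p2 / p1)

# Assumed example: 10 mm diameter, 5 MHz transducer, longitudinal wave in steel
D, F, V = 0.010, 5e6, 5900.0   # m, Hz, m/s
print(f"Near field ~ {near_field_length(D, F, V)*1e3:.1f} mm")
print(f"Half angle ~ {beam_spread_half_angle_deg(D, F, V):.1f} deg")
print(f"Doubling the echo amplitude = {db_change(1.0, 2.0):.1f} dB")
```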

Angle Beam Inspection Calculations

When performing an angle beam inspection, it is important to know where the sound beam encounters an interface and reflects. The reflection points are sometimes referred to as nodes. The location of the nodes can be obtained using trigonometric functions or the trig-based formulas given below.

Nodes - surface points where the sound waves reflect.
Skip Distance - the surface distance between two successive nodes.
Leg 1 (L1) - the sound path in the material to the first node.
Leg 2 (L2) - the sound path in the material from the first to the second node.
θ_R - the refracted sound wave angle.

Skip Distance Formula
Skip Distance = 2 T tan(θ_R)

Surface Distance Formula
Surface Distance = Sound Path × sin(θ_R)

Leg 1 and Leg 2 Formula
Leg 1 = Leg 2 = T / cos(θ_R)

Flaw Depth (1st Leg)
Depth = Sound Path × cos(θ_R)

Flaw Depth (2nd Leg)
Depth = 2T − (Sound Path × cos(θ_R))

Where: T = material thickness, Sound Path = distance traveled along the beam to the reflector, θ_R = refracted angle
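A short sketch of the angle beam geometry in Python; variable names and the example plate are illustrative, and the flaw depth helper simply applies the first-leg and second-leg formulas above:

```python
# Sketch of the angle beam trig relations: T is the material thickness,
# theta_R the refracted angle, and "sound_path" the distance along the beam.
import math

def skip_distance(thickness, refracted_deg):
    """Surface distance between two successive nodes: 2 * T * tan(theta_R)."""
    return 2.0 * thickness * math.tan(math.radians(refracted_deg))

def leg_length(thickness, refracted_deg):
    """Sound path length of one leg: T / cos(theta_R)."""
    return thickness / math.cos(math.radians(refracted_deg))

def surface_distance(sound_path, refracted_deg):
    """Distance along the surface from the exit point: sound path * sin(theta_R)."""
    return sound_path * math.sin(math.radians(refracted_deg))

def flaw_depth(sound_path, refracted_deg, thickness):
    """Depth below the surface; on the second leg the beam is heading back up."""
    d = sound_path * math.cos(math.radians(refracted_deg))
    return d if d <= thickness else 2.0 * thickness - d

# Assumed example: 10 mm plate, 60 degree refracted shear wave, reflector at 15 mm sound path
T, theta = 10.0, 60.0
print(f"skip = {skip_distance(T, theta):.1f} mm, leg = {leg_length(T, theta):.1f} mm")
print(f"surface dist = {surface_distance(15, theta):.1f} mm, depth = {flaw_depth(15, theta, T):.1f} mm")
```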

Standards and Specifications


A standard is something that is established for use as a basis of comparison. There are standards for practically everything that can be measured or evaluated, from time to materials to processes. Congress created the National Institute of Standards and Technology (NIST) in 1901 to provide the measurements and standards needed to resolve and prevent disputes over trade and to encourage standardization. NIST develops technologies, measurement methods and standards that help US companies compete in the global marketplace. NDT personnel are sometimes required to use calibration standards that are traceable back to a standard held by NIST. This might be a conductivity standard, which can be shown to have the same electrical conductivity as a NIST standard, or it could be a setup standard that was measured with a micrometer that was calibrated using a NIST standard.

A notable development of the twentieth century is the preparation and use of standard specifications to improve the consistency of manufacturing materials and processes, and the resulting products. A specification is a detailed description of how to produce something or how to perform a particular task. Any time a product is marked as meeting a specification, or a contract requires use of a specification, the product or service must meet the requirements of the document. A standard specification is the result of agreement among the involved parties and usually involves acceptance for use by some organization. Standard specifications do not, however, necessarily imply a degree of permanence (like dimensional or volumetric standards), because technical advances in a given field usually call for periodic revisions to the requirements. Properly prepared, standards can be of great value to industry. Some of the advantages are:

They usually represent the combined knowledge of a large group of individuals, including producers, consumers and other interested parties, and thus reduce the possibility of misinterpretation.
They give the manufacturer a standard of production and, therefore, tend to result in a more uniform process or product.
They lower unit cost by making standard processes and mass production possible.
They permit the consumer to use a specification that has been tried and is enforceable.
They set standards of testing and measurement and hence permit the comparison of results.

The disadvantage of standard specifications is that they tend to "freeze" practices, sometimes based on little data or knowledge, and slow the development of better practices.

Standards always represent an effort by some organized group of people. Any such organization, be it public or private, becomes the standardizing agency. Various levels of these agencies exist, ranging from a single business to local government to national groups to international organizations. The professional and industrial organizations in the United States that lead the development of standards relative to the field of NDT include ASTM International, the Society of Automotive Engineers (SAE), the American Iron and Steel Institute (AISI), the American Welding Society (AWS) and ASME International. Many specifications have also been developed by US government agencies such as the Department of Defense (DOD). However, the US government is downscaling its specification efforts, and many military specifications are being converted to specifications controlled by industry groups. For example, MIL-I-25135 has historically been the controlling document for both military and civilian penetrant material uses. The recent change in military specification management has led to the requirements of the Mil specification being incorporated into SAE's AMS 2644, and industry is transitioning toward the use of this specification.

Generally, the desired tendency is for a given standard to become more uniformly used and accepted. One method of increasing standardization is for a large agency to adopt a standard developed by a smaller one. In the US, thousands of standard specifications are recognized by the American National Standards Institute (ANSI), which is a national, yet private, coordinating agency. At the international level, the International Organization for Standardization (ISO) performs this function. The ISO was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The United States is represented within the ISO by ANSI. Additional information on the standards and specification organizations previously mentioned is provided below.

ASTM International
Founded in 1898, ASTM International is a not-for-profit organization that provides a global forum for the development and publication of voluntary consensus standards for materials, products, systems, and services. Formerly known as the American Society for Testing and Materials, ASTM International provides standards that are accepted and used in research and development, product testing, quality systems, and commercial transactions around the globe. Over 30,000 individuals from 100 nations are members of ASTM International; they are producers, users, consumers, and representatives of government and academia. In over 130 varied industry areas, ASTM standards serve as the basis for manufacturing, procurement, and regulatory activities. Each year, ASTM publishes the Annual Book of ASTM Standards, which consists of approximately 70 volumes. Most of the NDT related documents can be found in Volume 03.03, Nondestructive Testing, which is under the jurisdiction of ASTM Committee E-7. Each standard practice or guide is the direct responsibility of a subcommittee; for example, document E-94 is the responsibility of Subcommittee E07.01 on Radiology (X and Gamma) Methods.

This committee, composed of technical experts from many different industries, must review each document every five years, and if it is not revised, it must be reapproved or withdrawn.

The Society of Automotive Engineers (SAE)
The Society of Automotive Engineers is a professional society that serves as a resource for technical information and expertise used in designing, building, maintaining, and operating self-propelled vehicles for use on land or sea, in air or space. Over 83,000 engineers, business executives, educators, and students from more than 97 countries form the membership, who share information and exchange ideas for advancing the engineering of mobility systems. SAE is responsible for developing several different documents for the aerospace community, including Aerospace Standards (AS), Aerospace Material Specifications (AMS), Aerospace Recommended Practices (ARP), Aerospace Information Reports (AIR) and Ground Vehicle Standards (J-Standards). The documents are developed by SAE Committee K members, who are technical experts from the aerospace community.

ASME International
ASME International was founded in 1880 as the American Society of Mechanical Engineers. It is a nonprofit educational and technical organization serving a worldwide membership of 125,000. ASME maintains and distributes 600 codes and standards used around the world for the design, manufacturing and installation of mechanical devices. One of these codes is the Boiler and Pressure Vessel Code, which controls the design, inspection, and repair of pressure vessels. Inspection plays a big part in keeping these components operating safely.

The American Welding Society (AWS)
The American Welding Society was founded in 1919 as a multifaceted, nonprofit organization with a goal to advance the science, technology and application of welding and related joining disciplines. AWS serves 50,000 members worldwide. Membership consists of engineers, scientists, educators, researchers, welders, inspectors, welding foremen, company executives and officers, and sales associates.

The International Organization for Standardization (ISO)


The International Organization for Standardization (ISO) was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The ISO is headquartered in Geneva, Switzerland. The United States is represented by ANSI.

The Air Transport Association (ATA)
Founded by a group of 14 airlines in 1936, the ATA was the first, and today remains the only, trade organization for the principal US airlines. The purpose of the ATA is to support and assist its members by promoting the air transport industry and the safety, cost effectiveness, and technological advancement of its operations; advocating common industry positions before state and local governments; conducting designated industry-wide programs; and assuring governmental and public understanding of all aspects of air transport. Two ATA documents serve as guidelines for the training of inspection personnel.

ATA Specification 105, Guidelines for Training and Qualifying Personnel in Non-Destructive Testing Methods. This document serves as a guideline for the development of a training program for personnel who accomplish nondestructive testing tasks. While partially derived from more universal training standards such as ASNT SNT-TC-1A and NAS 410, this document is dedicated to preparing a curriculum for an airline's maintenance training program and qualifying individuals to conduct aircraft inspections.

ATA Specification 107, Visual Inspection Personnel Training and Qualification Guide for FAR Part 121 Air Carriers. This document addresses the training and qualification needs of the aircraft inspection technician and recommends a minimum list of required inspection items.

The Aerospace Industries Association

The Aerospace Industries Association represents the nation's major manufacturers of commercial, military and business aircraft, helicopters, aircraft engines, missiles, spacecraft, materials, and related components and equipment. The AIA has been an aerospace industry trade association since 1919; it was originally known as the Aeronautical Chamber of Commerce (ACCA). The AIA is responsible for two NDT related documents:

NAS 410, Certification & Qualification of Nondestructive Test Personnel. This document is widely used in the aerospace industry, as it replaces MIL-STD-410E, Military Standard, Nondestructive Testing Personnel Qualification and Certification.

NAS 999, Nondestructive Inspection of Advanced Composite Structure.

The American National Standards Institute (ANSI)
ANSI is a private, nonprofit organization that administers and coordinates the US voluntary standardization and conformity assessment system. The Institute's mission is to enhance both the global competitiveness of US business and the US quality of life by promoting and facilitating voluntary consensus standards and conformity assessment systems, and safeguarding their integrity.

US Department of Defense Specifications
A list of DOD specifications (Mil Specs, NAV, etc.) was not prepared, since the trend is to move away from their use and more documents are being canceled or made inactive every day. Information on DOD specifications can be found through the Department of Defense Single Stock Point for Military Specifications, Standards and Related Publications.

Greek Letters

Greek letters were introduced into mathematics long ago to provide a collection of useful symbols to stand for abstract objects, such as numbers, sets, functions, and spaces. At the time they were introduced, most scholars had been taught some Greek during their education, so the letters were familiar. Today, outside of the Greek community, these symbols may seem quite foreign, and difficult to remember. The table below lists all of the letters in the Greek alphabet, upper-case and lower-case, with their names and pronunciations. The lower-case letters are most often used for variables, such as angles and complex numbers, and for functions and formulas, while the upper-case letters more commonly stand for sets and spaces.

CAP / lower    Name & Pronunciation         CAP / lower    Name & Pronunciation
Α α            ALPHA (AL-fuh)               Ν ν            NU (NOO)
Β β            BETA (BAY-tuh)               Ξ ξ            XI (KS-EYE)
Γ γ            GAMMA (GAM-uh)               Ο ο            OMICRON (OM-i-KRON)
Δ δ            DELTA (DEL-tuh)              Π π            PI (PIE)
Ε ε            EPSILON (EP-sil-on) *        Ρ ρ            RHO (ROW)
Ζ ζ            ZETA (ZAY-tuh)               Σ σ            SIGMA (SIG-muh)
Η η            ETA (AY-tuh)                 Τ τ            TAU (TAU)
Θ θ            THETA (THAY-tuh)             Υ υ            UPSILON (OOP-si-LON)
Ι ι            IOTA (eye-OH-tuh)            Φ φ            PHI (FEE) *
Κ κ            KAPPA (KAP-uh)               Χ χ            CHI (K-EYE)
Λ λ            LAMBDA (LAM-duh)             Ψ ψ            PSI (SIGH)
Μ μ            MU (MYOO)                    Ω ω            OMEGA (oh-MAY-guh)

* The two lower-case versions (ε / ϵ for epsilon, φ / ϕ for phi) are used interchangeably.

The Decibel

The equation used to describe the difference in intensity between two ultrasonic or other sound measurements is:

ΔI (dB) = 20 log10 (P2 / P1)

where: ΔI is the difference in sound intensity expressed in decibels (dB), P1 and P2 are two different sound pressure measurements, and the log is to base 10.

What exactly is a decibel? The decibel (dB) is one tenth of a bel, which is a unit of measure that was developed by engineers at Bell Telephone Laboratories and named for Alexander Graham Bell. The dB is a logarithmic unit that describes a ratio of two measurements. The basic equation that describes the difference in decibels between two measurements is:

ΔX (dB) = 10 log10 (X2 / X1)

where: delta X is the difference in some quantity expressed in decibels, X1 and X2 are two different measured values of X, and the log is to base 10. (Note the factor of two difference between this basic equation for the dB and the one used when making sound measurements. This difference will be explained in the next section.)

Why is the dB unit used? Use of dB units allows ratios of various sizes to be described using easy to work with numbers. For example, consider the information in the table.

Ratio between Measurement 1 and 2    Equation                      dB
1/2                                  dB = 10 log (1/2)             -3 dB
1                                    dB = 10 log (1)               0 dB
2                                    dB = 10 log (2)               3 dB
10                                   dB = 10 log (10)              10 dB
100                                  dB = 10 log (100)             20 dB
1,000                                dB = 10 log (1000)            30 dB
10,000                               dB = 10 log (10000)           40 dB
100,000                              dB = 10 log (100000)          50 dB
1,000,000                            dB = 10 log (1000000)         60 dB
10,000,000                           dB = 10 log (10000000)        70 dB
100,000,000                          dB = 10 log (100000000)       80 dB
1,000,000,000                        dB = 10 log (1000000000)      90 dB

From this table it can be seen that ratios from one up to ten billion can be represented with a single- or double-digit number. Ease of working with numbers was particularly important in the days before the advent of the calculator or computer. The focus of this discussion is on using the dB in measuring sound levels, but it is also widely used when measuring power, pressure, voltage and a number of other things.

Use of the dB in Sound Measurements
Sound intensity is defined as the sound power per unit area perpendicular to the wave. Units are typically watts/m² or watts/cm². For sound intensity, the dB equation becomes:

ΔI (dB) = 10 log10 (I2 / I1)

where I1 and I2 are two different sound intensity measurements.

However, the power or intensity of sound is generally not measured directly. Since sound consists of pressure waves, one of the easiest ways to quantify sound is to measure variations in pressure (i.e. the amplitude of the pressure wave). When making ultrasound measurements, a transducer is used, which is basically a small microphone. Transducers, like most other microphones, produce a voltage that is approximately proportional to the sound pressure (P). The power carried by a traveling wave is proportional to the square of its amplitude. Therefore, the equation used to quantify a difference in sound intensity based on a measured difference in sound pressure becomes:

ΔI (dB) = 20 log10 (P2 / P1)

(The factor of 2 is added to the equation because the logarithm of the square of a quantity is equal to 2 times the logarithm of the quantity.) Since transducers and microphones produce a voltage that is proportional to the sound pressure, the equation could also be written as:

ΔI (dB) = 20 log10 (V2 / V1)

where: delta I is the change in sound intensity incident on the transducer and V1 and V2 are two different transducer output voltages. Revising the table to reflect the relationship between the ratio of the measured sound pressure and the change in intensity expressed in dB produces:

Ratio between Measurement 1 and 2    Equation                      dB
1/2                                  dB = 20 log (1/2)             -6 dB
1                                    dB = 20 log (1)               0 dB
2                                    dB = 20 log (2)               6 dB
10                                   dB = 20 log (10)              20 dB
100                                  dB = 20 log (100)             40 dB
1,000                                dB = 20 log (1000)            60 dB
10,000                               dB = 20 log (10000)           80 dB
100,000                              dB = 20 log (100000)          100 dB
1,000,000                            dB = 20 log (1000000)         120 dB
10,000,000                           dB = 20 log (10000000)        140 dB
100,000,000                          dB = 20 log (100000000)       160 dB
1,000,000,000                        dB = 20 log (1000000000)      180 dB

From the table it can be seen that 6 dB equates to a doubling of the sound pressure. Alternately, reducing the sound pressure by a factor of two results in a -6 dB change in intensity.

"Absolute" Sound Levels
Whenever the decibel unit is used, it always represents the ratio of two values. Therefore, in order to relate different sound intensities it is necessary to choose a standard reference level. The reference sound pressure (corresponding to a sound pressure level of 0 dB) commonly used is that at the threshold of human hearing, which is conventionally taken to be 2 × 10⁻⁵ newtons per square meter, or 20 micropascals (20 µPa). To avoid confusion with other decibel measures, the term dB(SPL) is used.
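A brief sketch (Python) of the decibel relations in this section, including the 20 µPa reference used for dB(SPL); the function names are illustrative:

```python
# Sketch: 10*log10 for power or intensity ratios, 20*log10 for pressure or
# voltage ratios, and sound pressure level relative to the hearing threshold.
import math

def db_from_intensity_ratio(i1, i2):
    return 10.0 * math.log10(i2 / i1)

def db_from_pressure_ratio(p1, p2):
    return 20.0 * math.log10(p2 / p1)

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level relative to the 20 micropascal hearing threshold."""
    return 20.0 * math.log10(pressure_pa / reference_pa)

print(db_from_intensity_ratio(1, 2))    # ~3 dB  (doubling the intensity)
print(db_from_pressure_ratio(1, 2))     # ~6 dB  (doubling the pressure)
print(spl_db(1.0))                      # ~94 dB(SPL) for a 1 Pa sound wave
```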

Accuracy, Error, Precision, and Uncertainty


Introduction
All measurements of physical quantities are subject to uncertainties. Variability in the results of repeated measurements arises because variables that can affect the measurement result are impossible to hold constant. Even if the "circumstances" could be precisely controlled, the result would still have an error associated with it, because the scale was manufactured to a certain level of quality, it is often difficult to read the scale perfectly, fractional estimates between scale markings must be made, and so on. Of course, steps can be taken to limit the amount of uncertainty, but it is always there. In order to interpret data correctly and draw valid conclusions, the uncertainty must be indicated and dealt with properly.

For the result of a measurement to have clear meaning, the value cannot consist of the measured value alone. An indication of how precise and accurate the result is must also be included. Thus, the result of any physical measurement has two essential components: (1) a numerical value (in a specified system of units) giving the best estimate possible of the quantity measured, and (2) the degree of uncertainty associated with this estimated value. Uncertainty is a parameter characterizing the range of values within which the value of the measurand can be said to lie within a specified level of confidence. For example, a measurement of the width of a table might yield a result such as 95.3 +/- 0.1 cm. This result communicates that the person making the measurement believes the value to be closest to 95.3 cm, but it could have been 95.2 or 95.4 cm. The uncertainty is a quantitative indication of the quality of the result. It answers the question, "how well does the result represent the value of the quantity being measured?"

The full formal process of determining the uncertainty of a measurement is an extensive exercise involving identifying all of the major process and environmental variables and evaluating their effect on the measurement. This process is beyond the scope of this material but is detailed in the ISO Guide to the Expression of Uncertainty in Measurement (GUM) and the corresponding American National Standard ANSI/NCSL Z540-2. However, there are measures for estimating uncertainty, such as the standard deviation, that are based entirely on the analysis of experimental data when all of the major sources of variability were sampled in the collection of the data set.

The first step in communicating the results of a measurement or group of measurements is to understand the terminology related to measurement quality. It can be confusing, partly because some of the terminology has subtle differences and partly because the terminology is used wrongly and inconsistently.

For example, the term "accuracy" is often used when "trueness" should be used. Using the proper terminology is key to ensuring that results are properly communicated.

True Value
Since the true value cannot be absolutely determined, in practice an accepted reference value is used. The accepted reference value is usually established by repeatedly measuring some NIST or ISO traceable reference standard. This value is not the reference value that is found published in a reference book. Such published reference values are not "right" answers; they are measurements that have errors associated with them as well and may not be totally representative of the specific sample being measured.

Accuracy and Error
Accuracy is the closeness of agreement between a measured value and the true value. Error is the difference between a measurement and the true value of the measurand (the quantity being measured). Error does not include mistakes. Values that result from reading the wrong value or making some other mistake should be explained and excluded from the data set. Error is what causes values to differ when a measurement is repeated and none of the results can be preferred over the others. Although it is not possible to completely eliminate error in a measurement, it can be controlled and characterized. Often, more effort goes into determining the error or uncertainty in a measurement than into performing the measurement itself.

The total error is usually a combination of systematic error and random error. Many times results are quoted with two errors: the first error quoted is usually the random error, and the second is the systematic error. If only one error is quoted, it is the combined error.

Systematic error tends to shift all measurements in a systematic way, so that in the course of a number of measurements the mean value is constantly displaced or varies in a predictable way. The causes may be known or unknown but should always be corrected for when present. For instance, no instrument can ever be calibrated perfectly, so when a group of measurements systematically differs from the value of a standard reference specimen, an adjustment in the values should be made. Systematic error can be corrected for only when the "true value" (such as the value assigned to a calibration or reference specimen) is known.

Random error is a component of the total error which, in the course of a number of measurements, varies in an unpredictable way. It is not possible to correct for random error. Random errors can occur for a variety of reasons, such as:

Lack of equipment sensitivity. An instrument may not be able to respond to or indicate a change in some quantity that is too small, or the observer may not be able to discern the change.
Noise in the measurement. Noise is extraneous disturbances that are unpredictable or random and cannot be completely accounted for.
Imprecise definition. It is difficult to exactly define the dimensions of an object. For example, it is difficult to determine the ends of a crack when measuring its length; two people may well pick different starting and ending points.
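Where only the random component of error is of interest, the precision of a set of repeated measurements can be estimated from their standard deviation. A minimal sketch in Python, using made-up readings for illustration:

```python
# Sketch: report the mean of repeated readings with an uncertainty based on
# the standard deviation; this reflects only the random error (precision),
# not any systematic error or bias.
import statistics

readings_cm = [95.3, 95.2, 95.4, 95.3, 95.2, 95.4, 95.3]   # made-up data

mean = statistics.mean(readings_cm)
std_dev = statistics.stdev(readings_cm)                 # sample standard deviation
std_uncertainty = std_dev / len(readings_cm) ** 0.5     # standard deviation of the mean

print(f"result = {mean:.2f} +/- {std_uncertainty:.2f} cm (1 standard uncertainty)")
print(f"single-reading scatter (precision): {std_dev:.2f} cm")
```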

Trueness and Bias
Trueness is the closeness of agreement between the average value obtained from a large series of test results and an accepted true value. The terminology is very similar to that used for accuracy, but trueness applies to the average value of a large number of measurements. Bias is the difference between the average value of the large series of measurements and the accepted true value. Bias is equivalent to the total systematic error in the measurement, and a correction to negate the systematic error can be made by adjusting for the bias.

Precision, Repeatability and Reproducibility
Precision is the closeness of agreement between independent measurements of a quantity under the same conditions. It is a measure of how well a measurement can be made without reference to a theoretical or true value. The number of divisions on the scale of the measuring device generally affects the consistency of repeated measurements and, therefore, the precision. Since precision is not based on a true value, there is no bias or systematic error in the value; it depends only on the distribution of random errors. The precision of a measurement is usually indicated by the uncertainty or fractional relative uncertainty of a value.

Repeatability is the precision determined under conditions where the same methods and equipment are used by the same operator to make measurements on identical specimens. Reproducibility is the precision determined under conditions where the same methods but different equipment are used by different operators to make measurements on identical specimens.

Uncertainty
Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, usually is the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines. However, in many measurement situations the systematic error is not addressed and only random error is included in the uncertainty estimate. When only random error is included in the uncertainty estimate, it is a reflection of the precision of the measurement.

Summary
Error is the difference between the true value of the measurand and the measured value. The total error is a combination of both systematic error and random error. Trueness is the closeness of agreement between the average value obtained from a large series of test results and the accepted true value; trueness is largely affected by systematic error. Precision is the closeness of agreement between independent measurements; precision is largely affected by random error. Accuracy is an expression of the lack of error. Uncertainty characterizes the range of values within which the true value is asserted to lie with some level of confidence.

References:
Royal Society of Chemistry, Analytical Methods Committee Technical Brief No. 13, September 2003.

ANSI/NCSL Z540-2-1997, U.S. Guide to the Expression of Uncertainty in Measurement, 1st ed., October 1997.
Eurachem/CITAC, Quantifying Uncertainty in Analytical Measurement, 2nd ed., 2000.
NIST/SEMATECH, e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, 2006.
ISO 5725-1, Accuracy (trueness and precision) of measurement methods and results, Part 1: General principles and definitions.

Technical Reports

NDT personnel write technical reports for two primary purposes. Technical reports are used to communicate information to customers, colleagues and managers, and they are used to document the equipment and procedures used in testing or research and the results obtained, so that the work can be repeated if necessary or built upon. The content and style of technical reports vary widely depending on the primary purpose and the audience. Many companies and organizations have developed their own standard format.

Qualities of Good Technical Reports
Regardless of the specific format used, all quality technical reports will possess the following qualities:

Accuracy
Great care should be taken to ensure that the information is presented accurately. Make sure values are transferred correctly into the report and calculations are done properly. Since many people proofread right over their own typographical errors, it is often best to have another person proofread the report. Mistakes may cause the reader to doubt other points of the report and reflect on the professionalism of the author.

Objectivity
Data must be evaluated honestly and without bias. Conclusions should be drawn solely from the facts presented. Opinions and conjecture should be clearly identified if included at all. Deficiencies in the testing or the results should be noted. Readers should be informed of all assumptions and probable sources of error if encountered.

Clarity
The author should work to convey an exact meaning to the reader. The text must be clear and unambiguous, mathematical symbols must be fully defined, and the figures and tables must be easily understood. Clarity must be judged from the readers' point of view. Don't assume that readers are familiar with previous work or previous reports. When photographs are included in a report, a scale or some object of standard size should be included in the photograph to help readers judge the size of the objects shown. Simply stating the magnification of a photograph can cause uncertainty, since the size of photographs often changes in reproduction.

Conciseness
Most people are fairly busy and will not want to spend any more time than necessary reading a report. Therefore, technical reports should be concisely written. Include all the details needed to fully document and explain the work, but keep it as brief as possible. Conciseness is especially important in the abstract and conclusion sections.

Continuity
Reports should be organized in a logical manner so that they are easy for the reader to follow. It is often helpful to start with an outline of the paper, making good use of headings. The same three-step approach used for developing an effective presentation can be used to develop an effective report: 1) introduce the subject matter (tell readers what they will be reading about), 2) provide the detailed information (tell them what you want them to know), and 3) summarize the results and conclusions (re-tell them the main points).

Make sure that information is included in the appropriate section of the report. For example, don't add new information about the procedure followed in the discussion section; information about the procedure belongs in the procedure section. The discussion section should focus on explaining the results, highlighting significant findings, discussing problems with the data, and noting possible sources of error. Be sure not to introduce any new information in the conclusion section. The conclusion section should simply state the conclusions drawn from the work.

Writing Style
A relatively formal writing style should be used when composing technical reports. The personal style of the writer should be secondary to the clear and objective communication of information. Writers should, however, strive to make their reports interesting and enjoyable to read.
