
Biomedical Engineering and Medical Physics YBB0050

Table of contents
Introduction
1. Professional societies
   1.1 Biomedical Engineering Societies in the World
      1.1.1 American Institute for Medical and Biological Engineering (AIMBE)
      1.1.2 IEEE Engineering in Medicine and Biology Society (EMBS)
      1.1.3 Canadian Medical and Biological Engineering Society
      1.1.4 European Society for Engineering in Medicine (ESEM)
      1.1.5 French Groups for Medical and Biological Engineering
      1.1.6 International Federation for Medical and Biological Engineering (IFMBE)
      1.1.7 International Union for Physics and Engineering Sciences in Medicine (IUPESM)
      1.1.8 International Council of Scientific Unions (ICSU)
2. Biomedical Sensors
   2.1 Variable Resistance Sensor
   2.2 Strain Gauge
   2.3 Inductance Sensors
      2.3.1 Mutual Inductance
      2.3.2 Variable Reluctance
   2.4 Linear Variable Differential Transformer
   2.5 Capacitive Sensors
   2.6 Sonic and Ultrasonic Sensors
      2.6.1 Velocity Measurement
      2.6.2 Magnetic Induction
      2.6.3 Doppler Ultrasound
   2.7 Accelerometers
   2.8 Force
   2.9 Measurement of Fluid Dynamic Variables
   2.10 Pressure Measurement
   2.11 Measurement of Flow
   2.12 Temperature
   2.13 Metallic Resistance Thermometers
   2.14 Thermistors
   2.15 Thermocouples
3. Physiological signals
   3.1 Electrocardiogram (ECG)
      3.1.2 The ambulatory ECG
      3.1.3 Patient Monitoring
      3.1.4 High-Resolution ECG
   3.2 Electromyography (EMG)
      3.2.1 The Origin of Electromyograms
      3.2.2 Electromyographic Recordings
      3.2.2.4 Variation
      3.2.3 Single-Fiber EMG
      3.2.4 Macro EMG
   3.3 Electroencephalography (EEG)
      3.3.1 History
      3.3.2 EEG Recording Techniques
   3.4 Magnetoencephalography (MEG)
      3.4.1 MEG Recording Device
   3.5 Mapping Based on EEG or MEG
   3.6 Digital Biomedical Signal Acquisition and Processing
      3.6.1 Acquisition
      3.6.2 Signal Processing
      3.6.3 Digital Filters
      3.6.4 Signal Averaging
      3.6.5 Spectral Analysis
4. X-Ray Equipment
   4.1 Production of X-Rays
      4.1.1 X-Ray Tube
      4.1.2 Generator
      4.1.3 Image Detection: Screen-Film Combinations
      4.1.4 Image Detection: X-Ray Image Intensifiers with Televisions
      4.1.5 Biomedical Imaging
      4.1.6 Image Detection: Digital Systems
   4.2 Computed Tomography
      4.2.1 Instrumentation
      4.2.2 Data-Acquisition Geometries
      4.2.3 First Generation: Parallel-Beam Geometry
      4.2.4 Second Generation: Fan Beam, Multiple Detectors
      4.2.5 Third Generation: Fan Beam, Rotating Detectors
      4.2.6 Fourth Generation: Fan Beam, Fixed Detectors
      4.2.7 Fifth Generation: Scanning Electron Beam
      4.2.8 Spiral/Helical Scanning
      4.2.9 X-Ray System
      4.2.10 Computer System
   4.3 Magnetic Resonance Imaging (MRI)
      4.3.1 Fundamentals of MRI
      4.3.2 Fundamentals of MRI Instrumentation
      4.3.3 Static Field Magnets
      4.3.4 Gradient Coils
      4.3.5 Radiofrequency Coils
      4.3.6 Functional MRI
   4.4 Positron-Emission Tomography (PET)
      4.4.1 Background
      4.4.2 PET Theory
      4.4.3 Physical Factors Affecting Resolution
5. Ultrasound
   5.1 Transducers
      5.1.1 Transducer Materials
   5.2 Scanning with Array Transducers
   5.3 Ultrasonic Imaging
   5.4 Blood Flow Measurement Using Ultrasound
   5.5 Single Sample Volume Doppler Instruments
   5.6 Color Flow Mapping
6. Lasers in Medical Diagnostics
   6.1 History
   6.2 Wavelengths of different lasers
   6.3 Characteristics of a typical helium-neon laser
   6.4 Absorption characteristics of tissue constituents
   6.5 Ophthalmology
   6.6 Holography
   6.7 Pulse Oximetry
      6.7.1 Limitations
   6.8 Blood flow velocity measurements
      6.8.3 Measuring principle
   6.9 Lasers in Cardiovascular Diagnostics
      6.9.3 Method for optical self-mixing
      6.9.4 Pulse profile and pulse wave velocity
      6.9.5 Pulse wave velocity measurement
      6.9.6 Blood flow measurements
7. Clinical engineer: safety, standards and regulations
   7.1 What Is a Clinical Engineer?
   7.2 Evolution of Clinical Engineering
   7.3 Hospital Organization and the Role of Clinical Engineering
      7.3.1 Governing Board (Trustees)
      7.3.2 Hospital Administration
   7.4 Major Functions of a Clinical Engineering Department
      7.4.1 Technology Management
      7.4.2 Risk Management
      7.4.3 Technology Assessment
      7.4.4 Facilities Design and Project Management
      7.4.5 Training
   7.5 The Health Care Delivery System
      7.5.1 Major Health Care Trends and Directions
   7.6 Technology Assessment
      7.6.1 Technology Assessment Process
   7.7 Risk Management
8. Home care and rehabilitation
   8.1 Introduction
   8.2 Rehabilitation Concepts
   8.3 Engineering Concepts in Sensory Rehabilitation
   8.4 Engineering Concepts in Motor Rehabilitation
   8.5 Engineering Concepts in Communications Disorders
   8.6 Appropriate Technology
   8.7 The Future of Engineering in Rehabilitation
   8.8 Future Developments

Table of figures
Figure 1. Examples of displacement sensors.
Figure 2. Strain gauges on a cantilever structure to provide temperature compensation.
Figure 3. Fundamental structure of an accelerometer.
Figure 4. Structure of an unbonded strain gauge pressure sensor.
Figure 5. Fundamental structure of an electromagnetic flowmeter.
Figure 6. Structure of an ultrasonic Doppler flowmeter with the major blocks of the electronic signal processing system.
Figure 7. Common forms of thermistors.
Figure 8. Circuit arrangement for a thermocouple showing the voltage-measuring device.
Figure 9. The 12-lead ECG.
Figure 10. Simulated currents and extracellular potentials of frog sartorius muscle fiber (radius a = 50 μm).
Figure 11. EMG needle electrodes.
Figure 12. MUP amplitude and duration.
Figure 13. Measurement of interpotential interval (IPI).
Figure 14. EEG measurement.
Figure 15. Schematic diagram of a multisensor MEG system (left) along with a detection coil and SQUID in a single channel (right).
Figure 16. A whole-head MEG system with 148 recording channels operated in a magnetically shielded room.
Figure 17. General block diagram of the acquisition procedure of a digital signal.
Figure 18. General block diagram of a digital filter. The output digital signal y(n) is obtained from the input x(n) by means of a transformation T[·] which identifies the filter.
Figure 19. Equivalent frequency response for the signal-averaging procedure for different values of N.
Figure 20. Enhancement of an evoked potential (EP) by means of the averaging technique. The EEG noise is progressively reduced, and the EP morphology becomes more recognizable as the number of averaged sweeps (N) is increased.
Figure 21. X-ray tube.
Figure 22. X-ray image intensifier.
Figure 23. Schematic drawing of a typical CT scanner installation, consisting of (1) control console, (2) gantry stand, (3) patient table, (4) head holder, and (5) laser imager. (Courtesy of Picker International, Inc.)
Figure 24. Typical CT images of (a) brain, (b) head showing orbits, (c) chest showing lungs, and (d) abdomen.
Figure 25. Four generations of CT scanners illustrating the parallel- and fan-beam geometries [Robb, 1982].
Figure 26. The major internal components of a fourth-generation CT gantry are shown in a photograph with the gantry cover removed (upper) and identified in the line drawing (lower). (Courtesy of Picker International, Inc.)
Figure 27. Photograph of the slip rings used to pass power and control signals to the rotating gantry. (Courtesy of Picker International, Inc.)
Figure 28. Spiral scanning causes the focal spot to follow a spiral path around the patient as indicated. (Courtesy of Picker International, Inc.)
Figure 29. (a) A solid-state detector consists of a scintillating crystal and photodiode combination. (b) Many such detectors are placed side by side to form a detector array that may contain up to 4800 detectors.
Figure 30. Gas ionization detector.
Figure 31. The computer system controls the gantry motions, acquires the x-ray transmission measurements, and reconstructs the final image. The system shown here uses 12 68000-family CPUs. (Courtesy of Picker International, Inc.)
Figure 32. Schematic drawing of a superconducting magnet.
Figure 33. Birdcage resonator.
Figure 34. Functional MR image demonstrating activation of the primary visual cortex.
Figure 35. Functional MRI mapping of motor cortex for preoperative planning.
Figure 36. The MRI image shows the arteriovenous malformation (AVM) as an area of signal loss due to blood flow.
Figure 37. The physical basis of positron-emission tomography.
Figure 38. Most modern PET cameras are multilayered with 15 to 47 levels or transaxial layers to be reconstructed.
Figure 39. The arrangement of scintillators and phototubes.
Figure 40. Factors contributing to the resolution of the PET tomograph. The contribution most accessible to further reduction is the size of the detector crystals.
Figure 41. The evolution of resolution.
Figure 42. Resolution astigmatism in detecting off-center events.
Figure 43. Array-element configurations and the region scanned by the acoustic beam.
Figure 44. Schematic representation of the signal received from along a single line of sight in a tissue.
Figure 45. Completed M-mode display obtained by showing the M-lines side by side.
Figure 46. Schematic representation of a heart and how a 2D image is constructed by scanning the transducer.
Figure 47. Operating environment for the estimation of blood velocity.
Figure 48. Primary components of a laser.
Figure 49. Measuring principle of blood flow velocity.
Figure 50. Method for optical self-mixing.
Figure 51. Method for optical self-mixing.
Figure 52. Pigtail diode laser.
Figure 53. Mixed signal amplitude as a function of laser current.
Figure 54. Measured dependence of a self-mixing interference on the distance between laser and target (first five maximums) and the spectrum of the laser diode.
Figure 55. Minimum registered optical power reflected back to the laser cavity (19 pW, at a self-mixing output signal S/N ratio of 1).
Figure 56. Measured pulse profile at the arm artery.
Figure 57. Frame of recorded pulse profile signal.
Figure 58. Processed pulse profile amplitude at the arm artery and processing algorithm.
Figure 59. Pulse wave velocity measurement.
Figure 60. Recorded ECG, pulse profile, and processed pulse profile signals.
Figure 61. Pulse delay measured from different locations of the human body and processing algorithm.
Figure 62. Block diagram of the equipment for blood flow measurements.
Figure 63. Blood flow measurement signals.
Figure 64. Differences between measured and calculated Doppler frequencies.
Figure 65. Diagram illustrating the range of interactions of a clinical engineer.
Figure 66. Double-edged sword concept of risk management.
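The signal-averaging procedure referred to in Figures 19 and 20 can be sketched in a few lines of code. The following is a minimal Python illustration with synthetic data (the Gaussian "evoked potential" and noise level are invented for the example, not taken from the course material): averaging N time-locked sweeps leaves the repeatable EP unchanged while the uncorrelated EEG noise shrinks roughly as 1/sqrt(N).

```python
import math
import random

def signal_average(sweeps):
    """Average a list of equal-length sweeps sample by sample."""
    n = len(sweeps)
    length = len(sweeps[0])
    return [sum(s[i] for s in sweeps) / n for i in range(length)]

# Synthetic evoked potential: a deterministic bump that repeats in every sweep.
ep = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(100)]

def noisy_sweep():
    """One recorded sweep: the EP buried in zero-mean Gaussian 'EEG' noise."""
    return [v + random.gauss(0.0, 1.0) for v in ep]

def rms_error(estimate):
    """Residual noise: RMS difference between the estimate and the true EP."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimate, ep)) / len(ep))

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 16, 256):
        avg = signal_average([noisy_sweep() for _ in range(n)])
        print(n, round(rms_error(avg), 3))
```

Running the loop shows the residual RMS noise dropping by roughly a factor of 4 for each 16-fold increase in N, which is the behavior Figure 20 depicts as the EP morphology becomes recognizable.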

Introduction
Biomedical Engineering is no longer an emerging discipline; it has become a vital interdisciplinary field. Biomedical engineers are involved in many medical ventures. They are involved in the design, development, and use of materials, devices (such as pacemakers and lithotripters), and techniques (such as signal processing and artificial intelligence) for clinical research and practice, and they serve as members of the health care delivery team (clinical engineering, medical informatics, rehabilitation engineering, etc.) seeking new solutions for the difficult health care problems confronting our society. To meet the needs of this diverse body of biomedical engineers, this handbook provides a central core of knowledge in those fields encompassed by the discipline of biomedical engineering as we enter the 21st century. Before presenting this detailed information, however, it is important to provide a sense of the evolution of the modern health care system and to identify the diverse activities biomedical engineers perform to assist in the diagnosis and treatment of patients.

Evolution of the Modern Health Care System

Before 1900, medicine had little to offer the average citizen, since its resources consisted mainly of the physician, his education, and his little black bag. In general, physicians seemed to be in short supply, but the shortage had rather different causes than the current crisis in the availability of health care professionals. Although the costs of obtaining medical training were relatively low, the demand for doctors' services was also very small, since many of the services provided by the physician could also be obtained from experienced amateurs in the community. The home was typically the site for treatment and recuperation, and relatives and neighbors constituted an able and willing nursing staff. Babies were delivered by midwives, and those illnesses not cured by home remedies were left to run their natural, albeit frequently fatal, course.
The contrast with contemporary health care practices, in which specialized physicians and nurses located within the hospital provide critical diagnostic and treatment services, is dramatic. The changes that have occurred within medical science originated in the rapid developments that took place in the applied sciences (chemistry, physics, engineering, microbiology, physiology, pharmacology, etc.) at the turn of the century. This process of development was characterized by intense interdisciplinary cross-fertilization, which provided an environment in which medical research was able to take giant strides in developing techniques for the diagnosis and treatment of disease. For example, in 1903, Willem Einthoven, the Dutch physiologist, devised the first electrocardiograph to measure the electrical activity of the heart. In applying discoveries in the physical sciences to the analysis of biologic processes, he initiated a new age in both cardiovascular medicine and electrical measurement techniques. New discoveries in medical sciences followed one another like intermediates in a chain reaction.

However, the most significant innovation for clinical medicine was the development of x-rays. These "new kinds of rays," as their discoverer W. K. Roentgen described them in 1895, opened the inner man to medical inspection. Initially, x-rays were used to diagnose bone fractures and dislocations, and in the process, x-ray machines became commonplace in most urban hospitals. Separate departments of radiology were established, and their influence spread to other departments throughout the hospital. By the 1930s, x-ray visualization of practically all organ systems of the body had been made possible through the use of barium salts and a wide variety of radiopaque materials. X-ray technology gave physicians a powerful tool that, for the first time, permitted accurate diagnosis of a wide variety of diseases and injuries. Moreover, since x-ray machines were too cumbersome and expensive for local doctors and clinics, they had to be placed in health care centers or hospitals. Once there, x-ray technology essentially triggered the transformation of the hospital from a passive receptacle for the sick to an active curative institution for all members of society. The many other important technological innovations appearing on the medical scene reinforced this trend, and for economic reasons the centralization of health care services became essential.

However, hospitals remained institutions to dread, and it was not until the introduction of sulfanilamide in the mid-1930s and penicillin in the early 1940s that the main danger of hospitalization, i.e., cross-infection among patients, was significantly reduced. With these new drugs in their arsenals, surgeons were able to perform their operations without prohibitive morbidity and mortality due to infection. Furthermore, even though the different blood groups and their incompatibility were discovered in 1900 and sodium citrate was used in 1913 to prevent clotting, full development of blood banks was not practical until the 1930s, when technology provided adequate refrigeration. Until that time, fresh donors were bled and the blood transfused while it was still warm.

Once modern surgical suites were established, the employment of specifically designed pieces of medical technology assisted in further advancing the development of complex surgical procedures. For example, the Drinker respirator was introduced in 1927 and the first heart-lung bypass in 1939. By the 1940s, medical procedures heavily dependent on medical technology, such as cardiac catheterization and angiography (the use of a cannula threaded through an arm vein and into the heart with the injection of radiopaque dye for the x-ray visualization of lung and heart vessels and valves), were developed.
As a result, accurate diagnosis of congenital and acquired heart disease (mainly valve disorders due to rheumatic fever) became possible, and a new era of cardiac and vascular surgery was established.

Following World War II, technological advances were spurred on by efforts to develop superior weapon systems and establish habitats in space and on the ocean floor. As a byproduct of these efforts, the development of medical devices accelerated, and the medical profession benefited greatly from this rapid surge of technological finds. Consider the following examples:

1. Advances in solid-state electronics made it possible to map the subtle behavior of the fundamental unit of the central nervous system, the neuron, as well as to monitor various physiologic parameters, such as the electrocardiogram, of patients in intensive care units.

2. New prosthetic devices became a goal of engineers involved in providing the disabled with tools to improve their quality of life.

3. Nuclear medicine, an outgrowth of the atomic age, emerged as a powerful and effective approach in detecting and treating specific physiologic abnormalities.

4. Diagnostic ultrasound, based on sonar technology, became so widely accepted that ultrasonic studies are now part of the routine diagnostic workup in many medical specialties.

5. Spare-parts surgery also became commonplace. Technologists were encouraged to provide cardiac assist devices, such as artificial heart valves and artificial blood vessels, and the artificial heart program was launched to develop a replacement for a defective or diseased human heart.

6. Advances in materials have made the development of disposable medical devices, such as needles and thermometers, as well as implantable drug delivery systems, a reality.

7. Computers similar to those developed to control the flight plans of the Apollo capsule were used to store, process, and cross-check medical records, to monitor patient status in intensive care units, and to provide sophisticated statistical diagnoses of potential diseases correlated with specific sets of patient symptoms.

8. Development of the first computer-based medical instrument, the computerized axial tomography scanner, revolutionized clinical approaches to noninvasive diagnostic imaging procedures, which now include magnetic resonance imaging and positron emission tomography as well.

The impact of these discoveries and many others has been profound. The health care system consisting primarily of the horse-and-buggy physician is gone forever, replaced by a technologically sophisticated clinical staff operating primarily in modern hospitals designed to accommodate the new medical technology. This evolutionary process continues, with advances in biotechnology and tissue engineering altering the very nature of the health care delivery system itself.

The Field of Biomedical Engineering

Today, many of the problems confronting health professionals are of extreme interest to engineers because they involve the design and practical application of medical devices and systems, processes that are fundamental to engineering practice. These medically related design problems can range from very complex large-scale constructs, such as the design and implementation of automated clinical laboratories, multiphasic screening facilities (i.e., centers that permit many clinical tests to be conducted), and hospital information systems, to the creation of relatively small and simple devices, such as recording electrodes and biosensors, that may be used to monitor the activity of specific physiologic processes in either a research or clinical setting.
They encompass the many complexities of remote monitoring and telemetry, including the requirements of emergency vehicles, operating rooms, and intensive care units. The American health care system, therefore, encompasses many problems that represent challenges to certain members of the engineering profession called biomedical engineers.

Biomedical Engineering: A Definition

Although what is included in the field of biomedical engineering is considered by many to be quite clear, there are some disagreements about its definition. For example, consider the terms biomedical engineering, bioengineering, and clinical (or medical) engineering, which have been defined in Pacela's Bioengineering Education Directory [Quest Publishing Co., 1990]. While Pacela defines bioengineering as the broad umbrella term used to describe this entire field, bioengineering is usually defined as a basic research-oriented activity closely related to biotechnology and genetic engineering, i.e., the modification of animal or plant cells, or parts of cells, to improve plants or animals or to develop new microorganisms for beneficial ends. In the food industry, for example, this has meant the improvement of strains of yeast for fermentation. In agriculture, bioengineers may be concerned with the improvement of crop yields by treatment of plants with organisms to reduce frost damage. It is clear that bioengineers of the future will have a tremendous impact on the quality of human life. The potential of this specialty is difficult to imagine. Consider the following activities of bioengineers:

Development of improved species of plants and animals for food production
Invention of new medical diagnostic tests for diseases
Production of synthetic vaccines from cloned cells
Bioenvironmental engineering to protect human, animal, and plant life from toxicants and pollutants
Study of protein-surface interactions
Modeling of the growth kinetics of yeast and hybridoma cells
Research in immobilized enzyme technology
Development of therapeutic proteins and monoclonal antibodies

In reviewing the above-mentioned terms, however, biomedical engineering appears to have the most comprehensive meaning. Biomedical engineers apply electrical, mechanical, chemical, optical, and other engineering principles to understand, modify, or control biologic (i.e., human and animal) systems, as well as design and manufacture products that can monitor physiologic functions and assist in the diagnosis and treatment of patients. When biomedical engineers work within a hospital or clinic, they are more properly called clinical engineers.

Activities of Biomedical Engineers

The breadth of activity of biomedical engineers is significant. The field has moved significantly from being concerned primarily with the development of medical devices in the 1950s and 1960s to include a more wide-ranging set of activities. As illustrated below, the field of biomedical engineering now includes many new career areas.
These areas include:

Application of engineering system analysis (physiologic modeling, simulation, and control) to biologic problems
Detection, measurement, and monitoring of physiologic signals (i.e., biosensors and biomedical instrumentation)
Diagnostic interpretation via signal-processing techniques of bioelectric data
Therapeutic and rehabilitation procedures and devices (rehabilitation engineering)
Devices for replacement or augmentation of bodily functions (artificial organs)
Computer analysis of patient-related data and clinical decision making (i.e., medical informatics and artificial intelligence)
Medical imaging, i.e., the graphic display of anatomic detail or physiologic function
The creation of new biologic products (i.e., biotechnology and tissue engineering)

Typical pursuits of biomedical engineers, therefore, include:

Research in new materials for implanted artificial organs
Development of new diagnostic instruments for blood analysis
Computer modeling of the function of the human heart
Writing software for analysis of medical research data
Analysis of medical device hazards for safety and efficacy
Development of new diagnostic imaging systems
Design of telemetry systems for patient monitoring
Design of biomedical sensors for measurement of human physiologic systems variables
Development of expert systems for diagnosis of disease
Design of closed-loop control systems for drug administration
Modeling of the physiologic systems of the human body
Design of instrumentation for sports medicine
Development of new dental materials
Design of communication aids for the handicapped
Study of pulmonary fluid dynamics
Study of the biomechanics of the human body
Development of material to be used as replacement for human skin

Biomedical engineering, then, is an interdisciplinary branch of engineering that ranges from theoretical, nonexperimental undertakings to state-of-the-art applications. It can encompass research, development, implementation, and operation. Accordingly, like medical practice itself, it is unlikely that any single person can acquire expertise that encompasses the entire field. Yet, because of the interdisciplinary nature of this activity, there is considerable interplay and overlapping of interest and effort among these specialties. For example, biomedical engineers engaged in the development of biosensors may interact with those interested in prosthetic devices to develop a means to detect and use the same bioelectric signal to power a prosthetic device. Those engaged in automating the clinical chemistry laboratory may collaborate with those developing expert systems to assist clinicians in making decisions based on specific laboratory data. The possibilities are endless.

Perhaps a greater potential benefit occurring from the use of biomedical engineering is identification of the problems and needs of our present health care system that can be solved using existing engineering technology and systems methodology. Consequently, the field of biomedical engineering offers hope in the continuing battle to provide high-quality health care at a reasonable cost; if properly directed toward solving problems related to preventive medical approaches, ambulatory care services, and the like, biomedical engineers can provide the tools and techniques to make our health care system more effective and efficient.


1. Professional societies
The field of biomedical engineering, which originated as a professional group on medical electronics in the late 1950s, has grown from a few scattered individuals to a very well-established organization. There are approximately 50 national societies throughout the world serving a growing community of biomedical engineers. The scope of biomedical engineering today is enormously diverse. Over the years, many new disciplines, such as molecular biology, genetic engineering, computer-aided drug design, and nanotechnology, which were once considered alien to the field, have become new challenges a biomedical engineer faces. Professional societies play a major role in bringing together members of this diverse community in pursuit of technology applications for improving the health and quality of life of human beings. Intersocietal cooperations and collaborations, at both national and international levels, are more actively fostered today through professional organizations such as the IFMBE, AIMBE, CORAL, and the IEEE. These developments are strategic to the advancement of the professional status of biomedical engineers. Some of the self-imposed mandates the professional societies should continue to pursue include promoting public awareness, addressing public policy issues that impact research and development of biologic and medical products, establishing close liaisons with developing countries, encouraging educational programs for developing scientific and technical expertise in medical and biologic engineering, providing a management paradigm that ensures efficiency and economy of health care technology [Wald, 1993], and participating in the development of new job opportunities for biomedical engineers.

1.1 Biomedical Engineering Societies in the World

Globalization of biomedical engineering (BME) activities is underscored by the fact that several major professional BME societies are currently operational throughout the world. Concerted action groups in biomedical engineering have been formed in Europe, the Americas, and the Far East, including Japan and Australia. While all these organizations share the common pursuit of promoting biomedical engineering, each national society is geared to serving the needs of its local membership. The activities of some of the major professional organizations are described below.

1.1.1 American Institute for Medical and Biological Engineering (AIMBE)

The United States has the largest biomedical engineering community in the world. Major professional organizations that address various cross sections of the field and serve over 20,000 biomedical engineers include (1) the American College of Clinical Engineering, (2) the American Institute of Chemical Engineers, (3) the American Medical Informatics Association, (4) the American Society of Agricultural Engineers, (5) the American Society for Artificial Internal Organs, (6) the American Society of Mechanical Engineers, (7) the Association for the Advancement of Medical Instrumentation, (8) the Biomedical Engineering Society, (9) the IEEE Engineering in Medicine and Biology Society, (10) an interdisciplinary Association for the Advancement of Rehabilitation and Assistive Technologies, (11) the Society for Biomaterials, (12) the Orthopedic Research Society, (13) the American Society of Biomechanics, and (14) the American Association of Physicists in Medicine.

In an effort to unify all the disparate components of the biomedical engineering community in the United States as represented by these various societies, the American Institute for Medical and Biological Engineering (AIMBE) was created in 1992. The AIMBE is the result of a 3-year effort funded by the National Science Foundation and led by a joint steering committee established by the Alliance of Engineering in Medicine and Biology and the U.S. National Committee on Biomechanics. The primary goal of AIMBE is to serve as an umbrella organization for the purpose of unifying the bioengineering community, addressing public policy issues, identifying common themes of reflection and proposals for action, and promoting the engineering approach in society's effort to enhance health and quality of life through the judicious use of technology [Galletti, 1994]. AIMBE serves its role through four working divisions: (1) the Council of Societies, consisting of the constituent organizations mentioned above, (2) the Academic Programs Council, currently consisting of 46 institutional charter members, (3) the Industry Council, and (4) the College of Fellows. In addition to these councils, there are four commissions: Education, Public Awareness, Public Policy, and Liaisons.

With its inception in 1992, AIMBE is a relatively young institution trying to establish its identity as an umbrella organization for medical and biologic engineering in the United States. As summarized by two of the founding officials of the AIMBE, Profs. Nerem and Galletti: What we are all doing, collectively, is defining a focus for biological and medical engineering. In a society often confused by technophobic tendencies, we will try to assert what engineering can do for biology, for medicine, for health care, and for industrial development. We should be neither shy, nor arrogant, nor self-centered. The public has great expectations from engineering and technology in terms of their own health and welfare.
They are also concerned about side effects, unpredictable consequences, and the economic costs. Many object to science for the sake of science, resent exaggerated or empty promises of benefit to society, and are shocked by sluggish or misdirected flow from basic research to useful applications. These issues must be addressed by the engineering and medical communities.

1.1.2 IEEE Engineering in Medicine and Biology Society (EMBS)

The Institute of Electrical and Electronics Engineers (IEEE) is the largest international professional organization in the world and accommodates 37 different societies under its umbrella structure. Of these 37, the Engineering in Medicine and Biology Society (EMBS) represents the foremost international organization serving the needs of nearly 8000 biomedical engineering members around the world. The field of interest of the EMB Society is the application of the concepts and methods of the physical and engineering sciences in biology and medicine. Each year, the society sponsors a major international conference while cosponsoring a number of theme-oriented regional conferences throughout the world. A growing number of EMBS chapters and student clubs across the major cities of the world have provided a forum for enhancing local activities through special seminars, symposia, and summer schools on biomedical engineering topics. These are supplemented by EMBS's special initiatives that provide faculty and financial subsidies to such programs through the society's distinguished lecturer program as well as the society's Regional Conference Committee. Other notable achievements of the society include its premier publications in the form of three monthly journals (Transactions on Biomedical Engineering, Transactions on Rehabilitation Engineering, and Transactions on Information Technology in Biomedicine) and a bimonthly magazine (the IEEE Engineering in Medicine and Biology Magazine).


EMBS is a transnational voting member society of the International Federation for Medical and Biological Engineering.

1.1.3 Canadian Medical and Biological Engineering Society

The Canadian Medical and Biological Engineering Society (CMBES) is an association covering the fields of biomedical engineering, clinical engineering, rehabilitation engineering, and biomechanics and biomaterials applications. CMBES is affiliated with the International Federation for Medical and Biological Engineering and currently has 272 full members. The society organizes national medical and biological engineering conferences annually in various cities across Canada. In addition, CMBES has sponsored seminars and symposia on specialized topics such as communication aids, computers and the handicapped, as well as instructional courses on topics of interest to the membership. To promote the professional development of its members, the society has drafted guidelines on education and certification for clinical engineers and biomedical engineering technologists and technicians. CMBES is committed to bringing together all individuals in Canada who are engaged in interdisciplinary work involving engineering, the life sciences, and medicine. The society communicates with its membership through a newsletter as well as a recently launched academic series intended to help nonengineering hospital personnel gain a better understanding of biomedical technology.

1.1.4 European Society for Engineering in Medicine (ESEM)

Most European countries have organizations affiliated with the International Federation for Medical and Biological Engineering (IFMBE). The IFMBE activities are described in another section of this chapter.
In 1992, a separate organization called the European Society for Engineering in Medicine (ESEM) was created with the objective of providing opportunities for academic centers, research institutes, industry, hospitals and other health care organizations, and various national and international societies to interact and jointly explore BME issues of European significance. These include (1) research and development, (2) education and training, (3) communication between and among industry, health care providers, and policymakers, (4) European policy on technology and health care, and (5) collaboration between eastern European countries in transition and the western European countries on health care technology, delivery, and management. To reflect this goal, the ESEM membership represents all relevant disciplines from all European countries, while the society maintains active relations with the Commission of the European Community and other supranational bodies and organizations.

The major promotional outlets for the ESEM's scientific contributions include its quarterly journal Technology and Health Care, ESEM News (the society's newsletter), a biennial European Conference on Engineering and Medicine, and various topic-oriented workshops and courses. ESEM offers two classes of membership: the regular individual (active or student) membership and an associate grade. The latter is granted to those scientific and industrial organizations that satisfy the society's guidelines, subject to approval by the Membership and Industrial Committees. The society is administered by an Administrative Council consisting of 13 members elected by the general membership.

1.1.5 French Groups for Medical and Biological Engineering

The French National Federation of Bioengineering (Genie Biologique et Medical, GBM) is a multidisciplinary body aimed at developing methods, processes, and new biomedical materials in various fields covering prognosis, diagnosis, therapeutics, and rehabilitation. These goals are achieved through the creation of 10 regional centers of bioengineering, called poles. The poles are directly involved at all levels, from applied research through industrialization to the marketing of the product. Some of the actions pursued by these poles include providing financial seed support for innovative biomedical engineering projects; providing technological help, advice, and assistance; developing partnerships among universities and industries; and organizing special seminars and conferences. Scientific progress is disseminated through the Journal of Innovation and Technology in Biology and Medicine.

1.1.6 International Federation for Medical and Biological Engineering (IFMBE)

Established in 1959, the International Federation for Medical and Biological Engineering (IFMBE) is an organization made up of affiliated national societies and transnational organization members. The current national affiliates are Argentina, Australia, Austria, Belgium, Brazil, Bulgaria, Canada, China, Cuba, Cyprus, Slovakia, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, Japan, Mexico, Netherlands, Norway, Poland, South Africa, South Korea, Spain, Sweden, Thailand, the United Kingdom, and the United States. The first transnational organization to become a member of the federation was the IEEE Engineering in Medicine and Biology Society. At present, the federation has an estimated 25,000 members across all of its constituent societies. The primary goal of the IFMBE is to recognize the interests and initiatives of its affiliated member organizations and to provide an international forum for the exchange of ideas and dissemination of information.
The major IFMBE activities include the publication of the federation's bimonthly journal, Medical and Biological Engineering and Computing, and the MBEC News; the establishment of close liaisons with developing countries to encourage and promote BME activities; and the organization of a major world conference every 3 years in collaboration with the International Organization for Medical Physics and the International Union for Physical and Engineering Sciences in Medicine. The IFMBE also serves as a consultant to the United Nations Industrial Development Organization and has nongovernmental organization status with the World Health Organization, the United Nations, and the Economic Commission for Europe.

1.1.7 International Union for Physics and Engineering Sciences in Medicine (IUPESM)

The IUPESM resulted from the IFMBE's collaboration with the International Organization for Medical Physics (IOMP), culminating in the joint organization of the triennial World Congress on Medical Physics and Biomedical Engineering. Traditionally, these two organizations had held their conferences back to back for a number of years. Since both organizations were involved in the research, development, and utilization of medical devices, they were combined to form the IUPESM. Consequently, all members of the IFMBE's national and transnational societies are also automatically members of the IUPESM. The statutes of the IUPESM have recently been changed to allow other organizations to become members in addition to the founding members, the IOMP and the IFMBE.

1.1.8 International Council of Scientific Unions (ICSU)

The International Council of Scientific Unions is a nongovernmental organization created to promote international scientific activity in the various scientific branches and their applications for the benefit of humanity.

ICSU has two categories of membership: scientific academies or research councils, which are national, multidisciplinary bodies, and scientific unions, which are international disciplinary organizations. Currently, there are 92 members in the first category and 23 in the second. ICSU maintains close working relations with a number of intergovernmental and nongovernmental organizations, in particular with UNESCO. In the past, a number of international programs have been launched and are being run in cooperation with UNESCO. ICSU is particularly involved in serving the interests of developing countries. Membership in the ICSU implies recognition of the particular field of activity as a field of science. Although ICSU is heralded as a body of pure scientific unions, to the exclusion of cross- and multidisciplinary organizations and those of an engineering nature, the IUPESM attained associate membership in the ICSU in the mid-1980s. The various other international scientific unions that are members of the ICSU include the International Union of Biochemistry and Molecular Biology (IUBMB), the International Union of Biological Sciences (IUBS), the International Brain Research Organization (IBRO), and the International Union of Pure and Applied Biophysics (IUPAB). The IEEE is an affiliated commission of the IUPAB and is represented through the Engineering in Medicine and Biology Society [ICSU Year Book, 1994].


2. Biomedical Sensors

Physical variables associated with biomedical systems are measured by a group of sensors known as physical sensors. Sensors for these variables, whether they are measuring biomedical systems or other systems, are essentially the same. There is, however, one notable exception: the packaging of the sensor and its attachment to the system being measured. Living tissue has inherent mechanisms for trying to eliminate the sensor as a foreign body, and as a result, sensors used for fluidic measurements such as pressure and flow are quite different from systems for measuring pressure and flow in nonbiologic environments.

Comparison of Displacement Sensors

Sensor                    | Electrical Variable | Measurement Circuit                               | Sensitivity      | Precision       | Range
Variable resistor         | Resistance          | Voltage divider, ohmmeter, bridge, current source | High             | Moderate        | Large
Foil strain gauge         | Resistance          | Bridge                                            | Low              | Moderate        | Small
Liquid metal strain gauge | Resistance          | Ohmmeter, bridge                                  | Moderate         | Moderate        | Large
Silicon strain gauge      | Resistance          | Bridge                                            | High             | Moderate        | Small
Mutual inductance coils   | Inductance          | Impedance bridge, inductance meter                | Moderate to high | Moderate to low | Moderate to large
Variable reluctance       | Inductance          | Impedance bridge, inductance meter                | High             | Moderate        | Large
LVDT                      | Inductance          | Voltmeter                                         | High             | High            | High
Parallel plate capacitor  | Capacitance         | Impedance bridge, capacitance meter               | Moderate to high | Moderate        | Moderate to large
Sonic/ultrasonic          | Time                | Timer circuit                                     | High             | High            | Large

Table 1

2.1 Variable Resistance Sensor

One of the simplest sensors for measuring displacement is a variable resistor. The resistance between two terminals on this device is related to the linear or angular displacement of a sliding tap along a resistance element. Precision devices are available that have a reproducible, linear relationship between resistance and displacement. These devices can be connected in circuits that measure resistance, such as an ohmmeter or bridge, or they can be used as part of a circuit that provides a voltage proportional to the displacement. Such circuits include the voltage divider, or driving a known constant current through the resistance and measuring the resulting voltage across it. This sensor is simple and inexpensive and can be used for measuring relatively large displacements.
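As a concrete sketch of the constant-current readout described above, the displacement of an ideal linear potentiometer can be recovered from the measured voltage. The component values here (a 10 kOhm element over 50 mm of travel, driven by a 1 mA source) are illustrative assumptions, not values from the text:

```python
# Hypothetical linear potentiometer used as a displacement sensor, read
# out by driving a known constant current through it and measuring the
# resulting voltage. Assumed values (not from the text): 10 kOhm element
# over 50 mm of travel, excited by a 1 mA constant current source.

R_TOTAL = 10_000.0  # ohms, full-scale resistance of the element
TRAVEL = 0.050      # metres, full-scale mechanical travel
I_SRC = 1.0e-3      # amperes, constant excitation current

def displacement_from_voltage(v_measured: float) -> float:
    """Convert the voltage across the tapped resistance to displacement.

    With constant current I, V = I * R(x), and an ideal linear
    potentiometer has R(x) = R_TOTAL * x / TRAVEL.
    """
    resistance = v_measured / I_SRC
    return TRAVEL * resistance / R_TOTAL

# Half the full-scale voltage (5 V here) maps to half the travel:
print(f"{displacement_from_voltage(5.0):.3f} m")  # 0.025 m
```

The constant-current approach makes the output voltage directly proportional to displacement, which is why it is listed alongside the voltage divider in Table 1.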

Figure 1. Examples of displacement sensors. (a) Variable resistance sensor, (b) foil strain gauge, (c) linear variable differential transformer (LVDT), (d) parallel plate capacitive sensor, and (e) ultrasonic transit time displacement sensor.

2.2 Strain Gauge

Another displacement sensor based on an electrical resistance change is the strain gauge. If a long, narrow electrical conductor such as a piece of metal foil or a fine gauge wire is stretched within its elastic limit, it will increase in length and decrease in cross-sectional area. Because the electric resistance between the two ends of this foil or wire is given by

R = ρ l / A,

where ρ is the electrical resistivity of the foil or wire material, l is its length, and A is its cross-sectional area, this stretching will result in an increase in resistance. The change in length must remain very small for the foil to stay within its elastic limit, so the change in electric resistance will also be small. The relative sensitivity of this device is given by its gauge factor G, which is defined as

G = (ΔR / R) / (Δl / l),

where ΔR is the change in resistance when the structure is stretched by an amount Δl.

Foil strain gauges are the most frequently applied and consist of a structure such as shown in Figure 1b. A piece of metal foil that is attached to an insulating polymeric film, such as polyimide, that has a much greater compliance than the foil itself is chemically etched into the pattern shown in Figure 1b. When a strain is applied in the sensitive direction, the direction of the individual elements of the strain gauge, the length of the gauge will be slightly increased, and this will result in an increase in the electrical resistance seen between the terminals. Since the displacement or strain that this structure can measure must be quite small for it to remain within its elastic limit, it can only be used to measure small displacements directly. If, however, the strain gauge is attached to one surface of a cantilever beam as shown in Figure 2, a fairly large displacement at the unsupported end of the beam is translated to a relatively small strain on the beam's surface, making it possible to use this structure to measure larger displacements.
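To make the gauge-factor relation concrete, this sketch computes the resistance change of a hypothetical metal foil gauge; the nominal resistance (120 ohms) and gauge factor (2.0) are assumed typical values, not taken from the text:

```python
# Resistance change of a strain gauge from its gauge factor:
#   G = (dR/R) / (dL/L)  =>  dR = G * R * strain
# Assumed values (typical for metal foil gauges, not from the text):
# 120-ohm nominal resistance, gauge factor 2.0, 1000 microstrain applied.

R_NOMINAL = 120.0   # ohms, unstrained gauge resistance
GAUGE_FACTOR = 2.0  # dimensionless
strain = 1000e-6    # dL / L, i.e. 1000 microstrain

delta_r = GAUGE_FACTOR * R_NOMINAL * strain
print(f"{delta_r:.2f} ohm")  # 0.24 ohm
```

A change of 0.24 ohm on a 120-ohm gauge is a fractional change of only 0.2%, which is why Table 1 lists a bridge circuit as the usual readout for strain gauges.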

Figure 2. Strain gauges on a cantilever structure to provide temperature compensation. (a) Cross-sectional view of the cantilever and (b) placement of the strain gauges in a half bridge or full bridge for temperature compensation and enhanced sensitivity.


A more compliant structure that has found application in biomedical instrumentation is the liquid metal strain gauge. Instead of using a solid electric conductor such as wire or metal foil, mercury confined to a compliant, thin-walled, narrow-bore elastomeric tube is used. The compliance of this strain gauge is determined by the elastic properties of the tube. Since only the elastic limit of the tube is of concern, this sensor can be used to detect much larger displacements than conventional strain gauges. Its sensitivity is roughly the same as that of a foil or wire strain gauge, but it is not as reliable. The mercury can easily become oxidized, or small air gaps can occur in the mercury column. These effects make the sensor's characteristics noisy and sometimes result in complete failure.

Another variation on the strain gauge is the semiconductor strain gauge. These devices are frequently made out of pieces of silicon with strain gauge patterns formed using semiconductor microelectronic technology. The principal advantage of these devices is that their gauge factors can be more than 50 times greater than those of the solid and liquid metal devices. They are available commercially, but they are a bit more difficult to handle and attach to the structures being measured because of their small size and brittleness.
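The half-bridge arrangement of Figure 2 can be sketched numerically. The excitation voltage, gauge factor, and strain below are illustrative assumptions; the point of the sketch is that a temperature-induced resistance change common to both gauges cancels in the bridge output:

```python
# Half-bridge readout of two strain gauges mounted on opposite faces of
# a cantilever (as in Figure 2): the top gauge is stretched (+strain)
# while the bottom gauge is compressed (-strain). A temperature change
# shifts both resistances equally and cancels in the bridge output.
# All numeric values are illustrative assumptions, not from the text.

V_EX = 5.0  # bridge excitation voltage, volts
G = 2.0     # gauge factor, typical for metal foil gauges

def half_bridge_output(strain: float, temp_drift: float) -> float:
    """Approximate output of a half bridge with the two gauges in
    adjacent arms: Vo ~= (V_EX / 4) * (dR1/R - dR2/R) for small changes.
    temp_drift is the fractional resistance change common to both gauges.
    """
    dr1_over_r = G * strain + temp_drift   # stretched gauge
    dr2_over_r = -G * strain + temp_drift  # compressed gauge
    return V_EX / 4.0 * (dr1_over_r - dr2_over_r)

# Output tracks strain but is unchanged by the common temperature drift:
print(f"{half_bridge_output(500e-6, 0.00):.4f} V")  # 0.0025 V
print(f"{half_bridge_output(500e-6, 0.01):.4f} V")  # 0.0025 V
```

Using both gauges also doubles the output compared with a single active gauge, which is the enhanced sensitivity mentioned in the Figure 2 caption.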

2.3 Inductance Sensors

2.3.1 Mutual Inductance

The mutual inductance between two coils depends on several geometric factors, one of which is the separation of the coils. Thus, one can create a very simple displacement sensor from two coaxial coils whose separation varies. By driving one coil with an ac signal and measuring the voltage induced in the second coil, one obtains a voltage related to how far apart the coils are. When the coils are close together, the mutual inductance is high, and a higher voltage is induced in the second coil; when the coils are more widely separated, the mutual inductance is lower, as is the induced voltage. The relationship between voltage and separation is determined by the specific geometry of the coils and in general is not linear unless the change in displacement is relatively small. Nevertheless, this is a simple method of measuring separation that works reasonably well provided the coils remain coaxial. If the coils move transverse to their axes, it is difficult to separate the effects of transverse displacement from those of displacement along the axis.
2.3.2 Variable Reluctance

A variation on this sensor is the variable reluctance sensor, in which a single coil or a pair of coils remains fixed on a form while a high-permeability core moves into or out of the coil or coils along their axis. Since the position of this core determines the number of flux linkages through the coils, it affects their self-inductance or mutual inductance. Mutual inductance changes can be measured using the technique described in the previous paragraph, whereas self-inductance changes can be measured using various instrumentation circuits for measuring inductance. This method is also a simple


method for measuring displacements, but the characteristics are generally nonlinear, and the sensor offers only moderate precision.

2.4 Linear Variable Differential Transformer

By far the most frequently applied displacement transducer based upon inductance is the linear variable differential transformer (LVDT). This device is essentially a three-coil variable reluctance transducer. The two secondary coils are situated symmetrically about the primary coil and connected such that the voltages induced in each secondary oppose each other. When the core is located at the center of the structure, equidistant from each secondary coil, the voltage induced in each secondary is the same. Since these voltages oppose one another, the output voltage from the device is zero. As the core moves closer to one or the other secondary coil, the voltages are no longer equal, and there is an output voltage proportional to the displacement of the core from the central, zero-voltage position. Because of the symmetry of the structure, this voltage is linearly related to the core displacement. When the core passes through the central zero point, the phase of the output voltage changes by 180 degrees. Thus, by measuring the phase angle as well as the voltage, one can determine on which side of center the core lies. The circuitry associated with the LVDT therefore often measures the phase angle as well as the voltage. LVDTs are available commercially in many sizes and shapes and, depending on the configuration of the coils, can measure displacements ranging from tens of micrometers through centimeters.
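The amplitude-plus-phase readout described above can be sketched as follows. This is an idealized model; the linear sensitivity value is an assumption for illustration, not a device specification:

```python
def lvdt_output(core_displacement_mm, sensitivity_mv_per_mm=100.0):
    """Idealized LVDT: output amplitude is proportional to |displacement|,
    and the phase flips by 180 degrees as the core crosses the null point."""
    amplitude_mv = sensitivity_mv_per_mm * abs(core_displacement_mm)
    phase_deg = 0.0 if core_displacement_mm >= 0 else 180.0
    return amplitude_mv, phase_deg

# Equal displacements on either side of center give equal amplitudes but
# opposite phases, which is how the sign of the position is recovered.
print(lvdt_output(0.5))   # (50.0, 0.0)
print(lvdt_output(-0.5))  # (50.0, 180.0)
```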
2.5 Capacitive Sensors

Displacement sensors can also be based upon measurements of capacitance. The fundamental principle of operation is the capacitance of a parallel-plate capacitor,

C = εA/d,

where
- ε is the dielectric constant (permittivity) of the medium between the plates,
- d is the separation between the plates,
- A is the cross-sectional area of the plates.

Each of the quantities in this equation can be varied to form a displacement transducer. By moving one of the plates with respect to the other, the capacitance varies inversely with the plate separation, giving a hyperbolic capacitance-displacement characteristic. However, if the plate separation is held constant and the plates are displaced laterally with respect to one another so that the area of overlap changes, the resulting capacitance-displacement characteristic can be linear, depending on the shape of the plates. The third way a variable capacitance transducer can measure displacement is with a fixed parallel-plate capacitor and a slab of dielectric material, having a dielectric constant different from that of air, that slides between the plates. The effective dielectric constant of the capacitor depends on how much of the slab is between the plates and how much of the region between the plates is occupied only by air. This, too, can yield a transducer with linear characteristics. The electronic circuitry used with variable capacitance transducers is essentially the same as any other circuitry used to measure capacitance. As with the inductance transducers, this circuit can take the form of a bridge circuit or of specific circuits that measure capacitive reactance.
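The first two operating modes can be sketched numerically from the parallel-plate formula; all dimensions below are illustrative assumptions:

```python
EPS_0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """C = (relative permittivity * eps_0) * A / d."""
    return eps_r * EPS_0 * area_m2 / gap_m

# Mode 1: varying the gap gives a hyperbolic (1/d) characteristic:
c1 = parallel_plate_capacitance(1.0, 1e-4, 1.0e-3)  # 1 cm^2 plates, 1 mm gap
c2 = parallel_plate_capacitance(1.0, 1e-4, 0.5e-3)  # halving the gap doubles C

# Mode 2: varying the overlap area at a fixed gap is linear in area:
c3 = parallel_plate_capacitance(1.0, 0.5e-4, 1.0e-3)  # half the overlap, half of c1
```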
2.6 Sonic and Ultrasonic Sensors

If the velocity of sound in a medium is constant, the time it takes a short burst of sound energy to propagate from a source to a receiver is proportional to the distance between the two transducers. This is given by

D = cT,

where
- c is the velocity of sound in the medium,
- T is the transit time,
- D is the displacement.

A brief sonic or ultrasonic pulse is generated at the transmitting transducer and propagates through the medium. It is detected by the receiving transducer at time T after the burst was initiated, and the displacement D can then be determined. In practice, this method is best used with ultrasound, since the wavelength is shorter and the device will neither produce annoying sounds nor respond to extraneous sounds in the environment. Small piezoelectric transducers to generate and receive ultrasonic pulses are readily available. The electronic circuit used with this instrument carries out three functions: (1) generation of the sonic or ultrasonic burst, (2) detection of the received burst, and (3) measurement of the time of propagation of the ultrasound. An advantage of this system is that the two transducers are coupled to one another only sonically; there is no physical connection, as was the case for the other sensors described in this section.
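The transit-time relation is simple enough to sketch directly. The sound velocity of about 1540 m/s for soft tissue and the transit time are assumed values for illustration:

```python
def displacement_from_transit_time(c_m_per_s, t_s):
    """D = c * T: distance covered by a burst travelling at velocity c for time T."""
    return c_m_per_s * t_s

# ~1540 m/s in soft tissue (assumed); a 65-microsecond transit implies ~10 cm.
d = displacement_from_transit_time(1540.0, 65e-6)
print(round(d * 100, 2))  # distance in cm
```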
2.6.1 Velocity Measurement

Velocity is the time derivative of displacement, and so all the displacement transducers mentioned above can be used to measure velocity if their signals are processed by passing them through a differentiator circuit. There are, however, two additional methods that can be applied to measure velocity directly.
2.6.2 Magnetic Induction


If the magnetic flux passing through a conducting coil varies with time, a voltage proportional to the rate of change of the flux is induced in the coil. This relationship is given by

v = N dΦ/dt,

where
- v is the voltage induced in the coil,
- N is the number of turns in the coil,
- Φ is the total magnetic flux passing through the coil (the product of the flux density and the area within the coil).

Thus a simple way to apply this principle is to attach a small permanent magnet to the object whose velocity is to be determined and to attach a coil to a nearby structure that serves as the reference against which the velocity is measured. A voltage is induced in the coil whenever the structure containing the permanent magnet moves, and this voltage is related to the velocity of that movement. The exact relationship is determined by the field distribution of the particular magnet and the orientation of the magnet with respect to the coil.
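The induction relationship can be sketched numerically over a finite interval; the turn count and flux change below are illustrative assumptions:

```python
def induced_voltage(n_turns, delta_flux_wb, delta_t_s):
    """Magnitude of v = N * dPhi/dt, approximated over a finite interval."""
    return n_turns * delta_flux_wb / delta_t_s

# 200 turns, 50 microweber flux change over 10 ms -> 1 V
v = induced_voltage(200, 50e-6, 10e-3)
print(v)  # 1.0
```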
2.6.3 Doppler Ultrasound

When the receiver of a wave, such as electromagnetic radiation or sound, moves at a nonzero velocity with respect to the emitter of that wave, the frequency perceived by the receiver differs from the frequency transmitted. This frequency difference, known as the Doppler shift, is determined by the relative velocity of the receiver with respect to the emitter and is given by

f_d = f_0 v / c,

where
- f_d is the Doppler frequency shift,
- f_0 is the frequency of the transmitted wave,
- v is the relative velocity between the transmitter and receiver,
- c is the velocity of sound in the medium.

This principle can be applied in biomedical applications as a Doppler velocimeter. A piezoelectric transducer can be used as the ultrasound source, with a similar transducer as the receiver. When there is no relative movement between the two transducers, the frequency of the signal at the receiver is the same as that at the emitter, but when there is relative motion, the frequency at the receiver is shifted according to the equation above. The ultrasonic velocimeter can be applied in the same way as the ultrasonic displacement sensor. In this case the electronic circuit produces a continuous ultrasonic wave and, instead of detecting the transit time of the signal, detects the frequency difference between the transmitted and received signals. This frequency difference can then be converted into a signal proportional to the relative velocity between the two transducers.
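The shift equation can be inverted to recover velocity from a measured frequency difference. A sketch with assumed values (5 MHz carrier, ~1540 m/s sound speed in tissue); note that this follows the simplified one-way form given above, without the factor of 2 and angle term used for reflection geometries:

```python
def doppler_shift_hz(f0_hz, v_m_per_s, c_m_per_s):
    """f_d = f_0 * v / c (simplified one-way form used in the text)."""
    return f0_hz * v_m_per_s / c_m_per_s

def velocity_from_shift(fd_hz, f0_hz, c_m_per_s):
    """Invert the shift equation: v = f_d * c / f_0."""
    return fd_hz * c_m_per_s / f0_hz

fd = doppler_shift_hz(5e6, 0.5, 1540.0)  # 0.5 m/s scatterer -> ~1623 Hz shift
v = velocity_from_shift(fd, 5e6, 1540.0)
print(round(v, 3))  # 0.5
```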


2.7 Accelerometers

Acceleration is the time derivative of velocity and the second time derivative of displacement. Thus, sensors of displacement and velocity can be used to determine acceleration when their signals are appropriately processed through differentiator circuits. In addition, there are direct sensors of acceleration based upon Newton's second law and Hooke's law. A known seismic mass is attached to the housing by an elastic element. As the structure is accelerated in the sensitive direction of the elastic element, a force is applied to that element according to Newton's second law. This force distorts the elastic element according to Hooke's law, resulting in a displacement of the mass with respect to the accelerometer housing. This displacement is measured by a displacement sensor. The relationship between the displacement and the acceleration is found by combining Newton's second law and Hooke's law:

a = (k/m) x,

where
- x is the measured displacement,
- m is the known seismic mass,
- k is the spring constant of the elastic element,
- a is the acceleration.

Any of the displacement sensors described above can be used in an accelerometer. The most frequently used are strain gauges and the LVDT.
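The seismic-mass relationship can be sketched directly; the spring constant, mass, and displacement below are assumed values:

```python
def acceleration_from_displacement(k_n_per_m, m_kg, x_m):
    """a = (k/m) * x for a seismic-mass accelerometer."""
    return (k_n_per_m / m_kg) * x_m

# 500 N/m spring, 5 g seismic mass, 0.1 mm measured deflection -> 10 m/s^2 (~1 g)
a = acceleration_from_displacement(500.0, 0.005, 1e-4)
print(a)  # 10.0
```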

Figure 3. Fundamental structure of an accelerometer.


One type of accelerometer uses a piezoelectric sensor as both the displacement sensor and the elastic element. A piezoelectric sensor generates an electric signal related to the dynamic change in shape of the piezoelectric material as force is applied. Thus, piezoelectric materials can only directly measure time-varying forces, and a piezoelectric accelerometer is therefore better for measuring changes in acceleration than constant accelerations. A principal advantage of piezoelectric accelerometers is that they can be made very small, which is useful in many biomedical applications.

2.8 Force

Force is measured by converting the force to a displacement and measuring the displacement with a displacement sensor. The conversion takes place through the elastic properties of a material: applying a force distorts the material's shape, and this distortion can be determined by a displacement sensor. For example, the cantilever structure shown in Fig. 2 could serve as a force sensor. Applying a vertical force at the tip of the beam causes the beam to deflect according to its elastic properties, and this deflection can be detected using a displacement sensor such as a strain gauge, as described previously. A common form of force sensor is the load cell. This consists of a block of material with known elastic properties to which strain gauges are attached. Applying a force to the load cell stresses the material, resulting in a strain that can be measured by the strain gauges. Applying Hooke's law, one finds that the strain is proportional to the applied force. The strain gauges on a load cell are usually in a half-bridge or full-bridge configuration to minimize the temperature sensitivity of the device. Load cells come in various sizes and configurations and can measure a wide range of forces.
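For a simple column-type load cell, Hooke's law links the measured strain back to force through the elastic modulus and cross-section. A sketch with assumed material values (steel, E ~ 200 GPa):

```python
def force_from_strain(youngs_modulus_pa, area_m2, strain):
    """F = E * A * strain, from stress = F/A = E * strain (Hooke's law)."""
    return youngs_modulus_pa * area_m2 * strain

# Steel column (E ~ 200 GPa, assumed), 1 cm^2 cross-section, 50 microstrain:
f = force_from_strain(200e9, 1e-4, 50e-6)
print(f)  # 1000.0 N (roughly a 100 kg load)
```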

2.9 Measurement of Fluid Dynamic Variables

The measurement of the fluid pressure and flow in both liquids and gases is important in many biomedical applications. These two variables, however, often are the most difficult variables to measure in biologic applications because of interactions with the biologic system and stability problems.
2.10 Pressure Measurement

Sensors of pressure for biomedical measurements, such as blood pressure, consist of a structure such as that shown in Fig. 4. A fluid coupled to the fluid being measured is housed in a chamber with a flexible diaphragm making up a portion of the wall, with the other side of the diaphragm at atmospheric pressure. When a pressure difference exists across the diaphragm, the diaphragm deflects, and this deflection is measured by a displacement sensor. In the unbonded strain gauge design, the displacement transducer consists of four fine-gauge wires drawn between a structure attached to the diaphragm and the housing of the sensor, so that these wires serve as strain gauges. When pressure causes the diaphragm to deflect, two of the fine-wire strain gauges are extended by a small amount, and the other two contract by the same amount. By connecting these wires into a bridge circuit, a


voltage proportional to the deflection of the diaphragm, and hence the pressure, can be obtained. Semiconductor technology has been applied to the design of pressure transducers such that the entire structure can be fabricated from silicon. A portion of a silicon chip can be formed into a diaphragm with semiconductor strain gauges incorporated directly into it to produce a small, inexpensive, and sensitive pressure sensor. Such sensors can be used as disposable, single-use devices for measuring blood pressure, eliminating the need to sterilize the transducer before it is used on the next patient. This minimizes the risk of transmitting blood-borne infections in cases where the transducer is coupled directly to the patient's blood for direct blood pressure measurement.
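The benefit of the four-active-arm arrangement can be sketched numerically. For an ideal full bridge with two stretching and two contracting gauges, the output is simply the excitation voltage times the fractional resistance change; the values below are illustrative:

```python
def full_bridge_output(v_excitation, delta_r_over_r):
    """Ideal four-active-arm Wheatstone bridge: V_out = V_ex * (dR/R)
    when two opposite arms increase by dR and the other two decrease by dR."""
    return v_excitation * delta_r_over_r

# 5 V excitation and a 0.1% resistance change give a 5 mV output:
v_out = full_bridge_output(5.0, 1e-3)
print(v_out)  # 0.005
```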

Figure 4. Structure of an unbonded strain gauge pressure sensor.

In using this type of sensor to measure blood pressure, it is necessary to couple the chamber containing the diaphragm to the blood or other fluid being measured. This is usually done using a small, flexible plastic tube known as a catheter, which can have one end placed in an artery of the subject while the other is connected to the pressure sensor. The catheter is filled with a physiologic saline solution so that the arterial blood pressure is coupled to the diaphragm. This external blood-pressure-measurement method is used quite frequently in the clinic and research laboratory, but it has the limitation that the properties of the fluid in the catheter, and of the catheter itself, can affect the measurement. For example, both ends of the catheter must be at the same vertical level to avoid a pressure offset due to hydrostatic effects. Also, the compliance of the tube affects the frequency response of the pressure measurement, and air bubbles in the catheter or obstructions due to clotted blood or other materials can distort the waveform through resonance and damping. These problems can be minimized by using a miniature semiconductor pressure transducer located at the tip of the catheter and placed in the blood vessel rather than external to the body. Such internal pressure sensors are available commercially and offer a much broader frequency response, no hydrostatic pressure error, and generally clearer signals than the external system. Although it is possible to measure blood pressure using the techniques described above, doing so reliably remains one of the major problems in biomedical sensor technology.


Long-term stability of pressure transducers is not very good, especially for measurements of venous blood, cerebrospinal fluid, or fluids in the gastrointestinal tract, where pressures are relatively low. Long-term changes in baseline pressure for most pressure sensors require that they be frequently rezeroed. Although this can be done relatively easily when the transducer is located external to the body, it is a major problem for indwelling transducers, which must therefore be extremely stable and have low baseline drift to be useful in long-term applications. The packaging of the pressure transducer must also be addressed, especially when the transducer is in contact with blood for long periods. Not only must the package be biocompatible, but it must also allow the appropriate pressure to be transmitted from the biologic fluid to the diaphragm. Thus, a material that is mechanically stable in the corrosive, aqueous environment of the body is needed.

2.11 Measurement of Flow

The measurement of true volumetric flow in the body represents one of the most difficult problems in biomedical sensing. The sensors that have been developed measure velocity rather than volume flow, and they can only be used to determine flow if the velocity is measured in a tube of known cross-section. Thus, most flow sensors constrain the vessel to a specific cross-sectional area. The most frequently used flow sensor in biomedical systems is the electromagnetic flowmeter.

Figure 5. Fundamental structure of an electromagnetic flowmeter.

This device consists of a means of generating a magnetic field transverse to the flow vector in a vessel. A pair of very small biopotential electrodes is attached to the wall of the vessel


such that the vessel diameter between them is at right angles to the direction of the magnetic field. As the blood flows in the vessel, ions in the blood are deflected toward one or the other electrode by the magnetic field, and the voltage across the electrodes is given by

u = Blv,

where
- B is the magnetic flux density,
- l is the distance between the electrodes,
- v is the average instantaneous velocity of the fluid across the vessel.

If the sensor constrains the blood vessel to a specific diameter, then its cross-sectional area is known, and multiplying this area by the velocity gives the volume flow. Although dc flow sensors have been developed and are available commercially, the preferred method is ac excitation of the magnetic field, so that offset potentials from the biopotential electrodes do not introduce errors into the measurement. Small ultrasonic transducers can also be attached to a blood vessel to measure flow. In this case the transducers are oriented such that one transmits a continuous ultrasound signal that illuminates the blood. Cells within the blood diffusely reflect this signal toward the second transducer, so that the received signal undergoes a Doppler shift in frequency proportional to the velocity of the blood. By measuring the frequency shift and knowing the cross-sectional area of the vessel, it is possible to determine the flow. The oscillator generates a signal that, after amplification, drives the transmitting transducer; the oscillator frequency is usually in the range of 1 to 10 MHz. The ultrasound reflected from the blood is sensed by the receiving transducer and amplified before being processed by a detector circuit, which generates the frequency difference between the transmitted and received ultrasonic signals. This difference frequency can be converted into a voltage proportional to frequency, and hence to flow velocity, by a frequency-to-voltage converter circuit.
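Combining u = Blv with a known cross-section gives volume flow. A sketch with assumed magnet, electrode, and vessel dimensions:

```python
import math

def velocity_from_emf(u_volts, b_tesla, l_m):
    """Invert u = B * l * v to get the average velocity."""
    return u_volts / (b_tesla * l_m)

def volume_flow(v_m_per_s, vessel_diameter_m):
    """Q = v * A for a circular cross-section."""
    return v_m_per_s * math.pi * (vessel_diameter_m / 2.0) ** 2

# 2 uV across electrodes 4 mm apart in a 20 mT field -> 25 mm/s average velocity
v = velocity_from_emf(2e-6, 0.02, 0.004)
q = volume_flow(v, 0.004)  # volume flow in a 4-mm vessel, m^3/s
print(round(v, 4))  # 0.025
```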


Figure 6. Structure of an ultrasonic Doppler flowmeter with the major blocks of the electronic signal processing system.

Another method of measuring flow that has had biomedical application is the measurement of the convective cooling of a heated object. The object is usually a thermistor placed either in a blood vessel or in tissue, serving as both the heating element and the temperature sensor. In one mode of operation, the power required to maintain the thermistor at a temperature slightly above that of the upstream blood is measured. As the flow around the thermistor increases, more heat is removed from it by convection, and more power is required to keep it at a constant temperature. Relative flow is thus measured by determining the amount of power supplied to the thermistor.
2.12 Temperature

There are many different temperature sensors, but three find particularly wide application to biomedical problems: metallic resistance thermometers, thermistors, and thermocouples. Table 2 summarizes the properties of various temperature sensors, and these three are described below.
Properties of Temperature Sensors

Sensor                         Form                              Sensitivity  Stability  Range
Metal resistance thermometer   Coil of fine platinum wire        Low          High       -100 to 700 °C
Thermistor                     Bead, disk, or rod                High         Moderate   -50 to 100 °C
Thermocouple                   Pair of wires                     Low          High       -100 to >1000 °C
Mercury in glass thermometer   Column of Hg in glass capillary   Moderate     High       -50 to 400 °C
Silicon p-n diode              Electronic component              Moderate     High       -50 to 150 °C

Table 2

2.13 Metallic Resistance Thermometers

The electric resistance of a piece of metal or wire generally increases as the temperature of the conductor increases. A linear approximation to this relationship is given by

R = R0 [1 + α(T - T0)],

where
- R0 is the resistance at temperature T0,
- α is the temperature coefficient of resistance,
- T is the temperature at which the resistance is being measured.

Most metals have temperature coefficients of resistance on the order of 0.1 to 0.4 %/°C, as indicated in Table 3. The noble metals are preferred for resistance thermometers, since they do not corrode easily and, when drawn into fine wires, their cross-section remains constant, avoiding drift in the resistance over time that could make the sensor unstable. Table 3 also shows that the noble metals gold and platinum have some of the highest temperature coefficients of resistance of the common metals.
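The linear model and its inversion for readout can be sketched as follows. The Pt100 values (100 Ω at 0 °C, α ≈ 0.00385 per °C) are common industrial figures, assumed here for illustration:

```python
def rtd_resistance(r0_ohm, alpha_per_c, t_c, t0_c=0.0):
    """R = R0 * (1 + alpha * (T - T0))."""
    return r0_ohm * (1.0 + alpha_per_c * (t_c - t0_c))

def rtd_temperature(r_ohm, r0_ohm, alpha_per_c, t0_c=0.0):
    """Invert the linear model to recover temperature from resistance."""
    return t0_c + (r_ohm / r0_ohm - 1.0) / alpha_per_c

r_body = rtd_resistance(100.0, 0.00385, 37.0)   # ~114.2 ohm at body temperature
print(round(rtd_temperature(r_body, 100.0, 0.00385), 6))  # 37.0
```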

Metal or Alloy                Resistivity at 20 °C (microhm-cm)   Temperature Coefficient of Resistance (%/°C)
Platinum                      9.83                                0.3
Gold                          2.22                                0.368
Silver                        1.629                               0.38
Copper                        1.724                               0.393
Constantan (60% Cu, 40% Ni)   49.0                                0.0002
Nichrome (80% Ni, 20% Cr)     108.0                               0.013

Table 3. Temperature Coefficients of Resistance for Common Metals and Alloys

2.14 Thermistors

Unlike metals, semiconductor materials have an inverse relationship between resistance and temperature. This characteristic is very nonlinear and cannot be described by a linear equation such as that used for metals. The thermistor is a semiconductor temperature sensor whose resistance as a function of temperature is given by

R = R0 e^(β(1/T - 1/T0)),

where
- β is a constant determined by the materials that make up the thermistor (temperatures are in kelvin).

Thermistors can take a variety of forms and cover a large range of resistances. The most common forms used in biomedical applications are the bead, disk, or rod forms of the sensor. These structures can be formed from a variety of different semiconductors, ranging from elements such as silicon and germanium to mixtures of various semiconducting metallic oxides. Thermistors are generally not as stable as metallic resistance thermometers; however, they are close to an order of magnitude more sensitive.
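The β model and its inversion can be sketched as follows. The 10 kΩ nominal resistance and β = 3500 K are typical catalog values, assumed here for illustration (temperatures in kelvin):

```python
import math

def thermistor_resistance(r0_ohm, beta_k, t_k, t0_k=298.15):
    """R = R0 * exp(beta * (1/T - 1/T0)), temperatures in kelvin."""
    return r0_ohm * math.exp(beta_k * (1.0 / t_k - 1.0 / t0_k))

def thermistor_temperature(r_ohm, r0_ohm, beta_k, t0_k=298.15):
    """Invert the beta model: 1/T = 1/T0 + ln(R/R0) / beta."""
    return 1.0 / (1.0 / t0_k + math.log(r_ohm / r0_ohm) / beta_k)

# Resistance drops as temperature rises (negative temperature coefficient):
r_body = thermistor_resistance(10_000.0, 3500.0, 310.15)  # ~6.4 kohm at 37 C
print(round(thermistor_temperature(r_body, 10_000.0, 3500.0), 3))  # 310.15
```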

Figure 7. Common forms of thermistors.


2.15 Thermocouples

When different regions of an electric conductor or semiconductor are at different temperatures, there is an electric potential between those regions that is directly related to the temperature difference. This phenomenon, known as the Seebeck effect, can be used to produce a temperature sensor, the thermocouple, by joining a wire of metal or alloy A and a wire of metal or alloy B at two junctions. One junction is known as the sensing junction and the other as the reference junction. When these junctions are at different temperatures, a voltage proportional to the temperature difference is seen at the voltmeter, provided metals A and B have different Seebeck coefficients. Over the relatively small temperature differences encountered in biomedical applications, this voltage can be represented by the linear equation

V = S_AB (Ts - Tr),

where
- S_AB is the Seebeck coefficient for the thermocouple made up of metals A and B,
- Ts and Tr are the temperatures of the sensing and reference junctions.

Although this equation is a reasonable approximation, more accurate data are usually found in tables of actual voltages as a function of temperature difference.
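With the Seebeck coefficients from Table 4, the linear approximation gives the expected (small) output directly. A sketch for a type T (copper/constantan) couple; the junction temperatures are illustrative:

```python
def thermocouple_voltage_uv(seebeck_uv_per_c, t_sense_c, t_ref_c):
    """Linear approximation V = S_AB * (Ts - Tr), result in microvolts."""
    return seebeck_uv_per_c * (t_sense_c - t_ref_c)

# Type T (50 uV/C from Table 4), sensing junction at body temperature,
# reference junction at 25 C room temperature:
v_uv = thermocouple_voltage_uv(50.0, 37.0, 25.0)
print(v_uv)  # 600.0 microvolts -- hence the need for sensitive amplifiers
```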


Figure 8. Circuit arrangement for a thermocouple showing the voltage-measuring device: (a) with the voltmeter interrupting one of the thermocouple wires and (b) with the voltmeter at the cold junction.

The voltages generated by thermocouples used for temperature measurement are generally quite small, on the order of tens of microvolts per degree Celsius. Thus, for most biomedical measurements, where there is only a small temperature difference between the sensing and reference junctions, very sensitive amplifiers must be used. Higher-output devices, known as thermopiles, can be produced by connecting several thermocouples in series. Thermocouples can be made from very fine wires that can be implanted in biologic tissues for temperature measurement, and it is also possible to place these fine-wire thermocouples within the lumen of a hypodermic needle for short-term temperature measurements in tissue.

Type   Materials                        Seebeck Coefficient (µV/°C)   Temperature Range
S      Platinum/platinum 10% rhodium    6                             0 to 1700 °C
T      Copper/constantan                50                            -190 to 400 °C
K      Chromel/alumel                   41                            -200 to 1370 °C
J      Iron/constantan                  53                            -200 to 760 °C
E      Chromel/constantan               78                            -200 to 970 °C

Table 4. Common Thermocouples


3. Physiological signals
3.1 Electrocardiogram (ECG)

The electrocardiogram (ECG) is the recording on the body surface of the electrical activity generated by the heart. It was originally observed by Waller in 1889, using his pet bulldog as the signal source and the capillary electrometer as the recording device. In 1903, Einthoven enhanced the technology by employing the string galvanometer as the recording device and using human subjects with a variety of cardiac abnormalities. Einthoven is chiefly responsible for introducing several concepts still in use today, including the labeling of the various waves, the definition of some of the standard recording sites on the arms and legs, and the first theoretical construct in which the heart is modeled as a single time-varying dipole. To record an ECG waveform, a differential recording between two points on the body is made. Traditionally, each differential recording is referred to as a lead. Einthoven defined three leads numbered with the Roman numerals I, II, and III:

I = VLA - VRA
II = VLL - VRA
III = VLL - VLA

where RA = right arm, LA = left arm, and LL = left leg. Because the body is assumed to be purely resistive at ECG frequencies, the four limbs can be thought of as wires attached to the torso. Hence lead I could be recorded from the respective shoulders without a loss of cardiac information. Note that these are not independent, and the following relationship holds: II = I + III.
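The lead definitions and the dependence II = I + III can be checked directly; the electrode potentials below are arbitrary illustrative values:

```python
def bipolar_limb_leads(v_ra, v_la, v_ll):
    """Einthoven's three bipolar leads from the limb electrode potentials."""
    lead_i = v_la - v_ra
    lead_ii = v_ll - v_ra
    lead_iii = v_ll - v_la
    return lead_i, lead_ii, lead_iii

i, ii, iii = bipolar_limb_leads(v_ra=-0.2, v_la=0.3, v_ll=0.8)
# The three leads are not independent: II = I + III for any electrode potentials.
print(abs(ii - (i + iii)) < 1e-12)  # True
```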

Figure 9. The 12-lead ECG. The 12-lead ECG is formed by the 3 bipolar surface leads (I, II, and III), the augmented Wilson-terminal-referenced limb leads (aVR, aVL, and aVF), and the Wilson-terminal-referenced chest leads (V1 through V6).

Through a complex change of ionic concentrations across the cell membranes (the current source), an extracellular potential field is established which then excites neighboring cells, and a cell-to-cell propagation of electrical events occurs. Because the body acts as a purely resistive medium, these potential fields extend to the body surface. The character of the body surface waves depends on the amount of tissue activating at one time and the relative speed and direction of the activation wavefront. The pacemaker potentials, which are generated by a small tissue mass, are therefore not seen on the ECG. As the activation wavefront encounters the increased mass of atrial muscle, the initiation of electrical activity is observed on the body surface, and the first ECG wave of the cardiac cycle is seen. This is the P wave, and it represents activation of the atria. Conduction of the cardiac impulse proceeds from the atria through a series of specialized cardiac cells (the A-V node and the His-Purkinje system) which again are too small in total

mass to generate a signal large enough to be seen on the standard ECG. There is a short, relatively isoelectric segment following the P wave. Once the large muscle mass of the ventricles is excited, a rapid and large deflection is seen on the body surface. The excitation of the ventricles causes them to contract and provides the main force for circulating blood to the organs of the body. This large wave appears to have several components: the initial downward deflection is the Q wave, the initial upward deflection is the R wave, and the terminal downward deflection is the S wave. The polarity and actual presence of these three components depend on the position of the leads on the body as well as on the many abnormalities that may exist. In general, the large ventricular waveform is called the QRS complex regardless of its makeup. Following the QRS complex is another short, relatively isoelectric segment, after which the ventricles return to their electrical resting state, and a wave of repolarization is seen as a low-frequency signal known as the T wave. In some individuals, a small peak occurs at the end of or after the T wave; this is the U wave. Its origin has never been fully established, but it is believed to be a repolarization potential.

The general instrumentation requirements for the ECG have been addressed by professional societies through the years. Briefly, they recommend a system bandwidth of 0.05 to 150 Hz. Of great importance in ECG diagnosis is the low-frequency response of the system, because shifts in some of the low-frequency regions, e.g., the ST segment, have critical diagnostic value. While the heart rate may have only a 1-Hz fundamental frequency, the phase responses of typical analog high-pass filters are such that the system corner frequency must be much smaller than the 3-dB corner frequency where only the amplitude response is considered. The system gain depends on the total system design.
The typical ECG amplitude is 2 mV, and if A/D conversion is used in a digital system, enough gain to span the full range of the A/D converter is appropriate. Each functional block has its own controller, and the system requires a real-time, multitasking operating system to coordinate all system functions. Concomitant with the data acquisition is the automatic interpretation of the ECG. These programs are quite sophisticated and are continually evolving, yet it remains a medical/legal requirement that these ECGs be over-read by a physician.

To obtain an ECG, the patient must first be physically connected to the amplifier front end. The patient-amplifier interface is formed by a special bioelectrode that converts the ionic current flow of the body to the electron flow of the metallic wire. These electrodes typically rely on a chemical paste or gel with a high ionic concentration, which acts as the transducer at the tissue-electrode interface. For short-term applications, silver-coated suction electrodes or sticky metallic foil electrodes are used. Long-term recordings, such as for the monitored patient, require a stable electrode-tissue interface, so a special adhesive tape material surrounds the gel and an Ag/AgCl electrode.

Each potential signal is digitally converted, and all the ECG leads can be formed mathematically in software. Recording every electrode potential directly would necessitate a 9-amplifier system; by performing some of the lead calculations with the analog differential amplifiers, this can be reduced to an 8-channel system. Thus only the individual chest leads V1 through V6 and any 2 of the limb leads, e.g., I and III, are needed to calculate the full 12-lead ECG.
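The software derivation of the full lead set can be sketched as follows. Here leads I and II are taken as the two recorded limb leads (the text's example uses I and III; any two suffice) and the standard augmented-lead formulas are applied; the numeric inputs are illustrative:

```python
def derive_12_lead(lead_i, lead_ii, chest_leads):
    """Compute the remaining limb leads from two recorded ones; the six chest
    leads V1-V6 are measured directly and passed through unchanged."""
    lead_iii = lead_ii - lead_i
    avr = -(lead_i + lead_ii) / 2.0   # augmented leads (Goldberger)
    avl = lead_i - lead_ii / 2.0
    avf = lead_ii - lead_i / 2.0
    leads = {"I": lead_i, "II": lead_ii, "III": lead_iii,
             "aVR": avr, "aVL": avl, "aVF": avf}
    leads.update({f"V{n}": v for n, v in enumerate(chest_leads, start=1)})
    return leads

ecg = derive_12_lead(0.5, 1.0, [0.1, 0.2, 0.9, 1.2, 1.0, 0.7])
print(len(ecg), ecg["III"], ecg["aVR"])  # 12 0.5 -0.75
```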


3.1.2 The ambulatory ECG

Besides the standard 12-lead ECG, there are several other uses of ECG recording technology that rely on only a few leads. These applications have had a significant clinical and commercial impact. The following brief descriptions of several ECG applications introduce the reader to some of the many uses of the ECG. The ambulatory or Holter ECG has an interesting history, and its evolution closely followed both technical and clinical progress. The original, analog, tape-based, portable ECG resembled a fully loaded backpack and was developed by Dr. Holter in the early 1960s. It was soon followed by more compact devices that could be worn on the belt. The original large-scale clinical use of this technology was to identify patients who developed heart block transiently and could be treated by implanting a cardiac pacemaker. This required the secondary development of a device that could rapidly play back the 24 hours of tape-recorded ECG signals and present to the technician or physician a means of identifying periods of time where the patient's heart rate became abnormally low. The scanners had the circuitry not only to play back the ECG at speeds 30 to 60 times real time but also to detect the beats and display them in a superimposed mode on a CRT screen. In addition, an audible tachometer could be used to identify the periods of low heart rate. With this playback capability came numerous other observations, such as the identification of premature ventricular complexes (PVCs), which led to the development of techniques to identify and quantify their number. Very sophisticated algorithms were developed based on pattern-recognition techniques and were sometimes implemented with high-speed specialized numerical processors as the tape playback speeds became several hundred times real time.
The ambulatory ECG is still a widely used diagnostic tool, and modern units often have built-in microprocessors with considerable amounts of random access memory and even small disk drives with capacities greater than 400 Mbytes. Here the data can be analyzed on-line, with large segments of data selected for storage and later analysis with personal computer-based programs.
3.1.3 Patient Monitoring

The techniques for monitoring the ECG in real time were developed in conjunction with the concept of the coronary care unit (CCU). Patients were placed in these specialized hospital units to carefully observe their progress during an acute illness such as a myocardial infarction or after complex surgical procedures. As the number of beds increased in these units, it became clear that the highly trained medical staff could not continually watch a monitor screen, and computerized techniques were added that monitored each patient's rhythm. The typical CCU would have 8 to 16 beds, and hence the computing power was taken to its limit by monitoring multiple beds. In modern units, the CPU is distributed within the ECG module at the bedside, along with modules for measuring many other physiologic parameters. Each bedside monitor is interconnected via a high-speed digital line, e.g., Ethernet, to a centralized computer used primarily to control communications and maintain a patient database.
3.1.4 High-Resolution ECG

High-resolution (HR) capability is now a standard feature on most digitally based ECG systems, or it is available as a stand-alone microprocessor-based unit. The most common application of the HRECG is to record very low-level (~1.0-µV) signals that occur after the QRS complex but are not evident on the standard ECG. These late potentials are generated from abnormal regions of the ventricles and have been strongly associated with the substrate responsible for a life-threatening rapid heart rate (ventricular tachycardia). The typical HRECG is derived from 3 bipolar leads configured in an anatomic xyz coordinate system. These 3 ECG signals are then digitized at a rate of 1000 to 2000 Hz per channel, time-aligned via a real-time QRS correlator, and summated in the form of a signal average. Signal averaging will theoretically improve the signal-to-noise ratio by the square root of the number of beats averaged. The underlying assumptions are that the signals of interest do not vary from beat to beat and that the noise is random.

3.2 Electromyography EMG

Movement and position of limbs are controlled by electrical signals traveling back and forth between the muscles and the peripheral and central nervous system. When pathologic conditions arise in the motor system, whether in the spinal cord, the motor neurons, the muscle, or the neuromuscular junctions, the characteristics of the electrical signals in the muscle change. Careful registration and study of electrical signals in muscle (electromyograms) can thus be a valuable aid in discovering and diagnosing abnormalities not only in the muscles but also in the motor system as a whole. Electromyography (EMG) is the registration and interpretation of these muscle action potentials. Until recently, electromyograms were recorded primarily for exploratory or diagnostic purposes; however, with the advancement of bioelectric technology, electromyograms also have become a fundamental tool in achieving artificial control of limb movement, i.e., functional electrical stimulation (FES) and rehabilitation. Since the rise of modern clinical EMG, the technical procedures used in recording and analyzing electromyograms have been dictated by the available technology. The concentric needle electrode introduced by Adrian and Bronk in 1929 provided an easy-to-use electrode with high mechanical qualities and stable, reproducible measurements. Replacement of galvanometers with high-gain amplifiers allowed smaller electrodes with higher impedances to be used and potentials of smaller amplitudes to be recorded. With these technical achievements, clinical EMG soon evolved into a highly specialized field where electromyographists with many years of experience read and interpreted long paper EMG records based on the visual appearance of the electromyograms. 
Slowly, a more quantitative approach emerged, where features such as potential duration, peak-to-peak amplitude, and number of phases were measured on the paper records and compared with a set of normal data gathered from healthy subjects of all ages. In the last decade, the general-purpose rack-mounted equipment of the past has been replaced by ergonomically designed EMG units with integrated computers. Electromyograms are digitized, processed, stored on removable media, and displayed on computer monitors with screen layouts that change in accordance with the type of recording and analysis chosen by the investigator.
3.2.1 The Origin of Electromyograms


Unlike the myocardium, skeletal muscles do not contain pacemaker cells from which excitations arise and spread. Electrical excitation of skeletal muscle is initiated and regulated by the central and peripheral nervous systems. Motor neurons carry nerve impulses from the anterior horn cells of the spinal cord to the nerve endings.

Figure 10. Simulated currents and extracellular potentials of a frog sartorius muscle fiber (radius a = 50 µm). (a) The net fiber current density is the summation of the current density through the sarcolemma and that passing the tubular mouth. (b)
3.2.2 Electromyographic Recordings

A considerable amount of information regarding the bioelectrical state of a muscle is hidden in the time-varying spatial distribution of potentials in the muscle. Unfortunately, it is not clinically feasible to obtain high-resolution three-dimensional samples of the spatial potential distribution, since this would require the insertion of hundreds of electrodes into the muscle. In order to minimize the discomfort of the patient, routine EMG procedures usually employ only a single electrode that is inserted into different regions of the muscle. As the SFAPs (single-fiber action potentials) of an active motor unit pass by the electrode, only their summation, i.e., the MUP (motor unit potential), is registered by the electrode. In effect, the electrode integrates out the spatial information hidden in the passing potential complex, leaving only a time-variant potential waveform to be recorded and interpreted. To increase the amount of diagnostic information, several sets of EMG investigations may be performed using electrodes with different recording characteristics. Figure 11 illustrates three of the most popular EMG needle electrodes. The concentric and monopolar electrodes have an intermediate pickup range and are used in conventional recordings. The single-fiber electrode is a more recent innovation; it has a very small pickup range and is used to obtain recordings from only one or two muscle fibers. The macro electrode, which is the cannula of either the concentric or single-fiber electrode in combination with a remote reference electrode, picks up potentials throughout the motor unit territory.


Figure 11. EMG needle electrodes. The concentric electrode is connected to a differential amplifier; thus common-mode signals are effectively rejected, and a relatively stable baseline is achieved.

3.2.2.1 Amplitude

Amplitude is determined by the presence of active fibers within the immediate vicinity of the electrode tip. Low-pass filtering by the volume conductor attenuates the high-frequency spikes of remote SFAPs; hence the MUP amplitude does not increase for a larger motor unit.

Figure 12. MUP amplitude and duration.

However, MUP amplitude will increase if the tip of the electrode is located near a cluster of reinnervated fibers. Large MUP amplitudes are frequently observed in neurogenic diseases.

3.2.2.2 Rise time

Rise time is an increasing function of the distance between the electrode and the closest active muscle fiber. A short rise time in combination with a small MUP amplitude might therefore indicate that the amplitude is reduced due to fiber atrophy rather than to a large distance between the electrode and the closest fiber.


3.2.2.3 Number of phases

The number of phases indicates the complexity of the MUP and the degree of misalignment between SFAPs. In neurogenic diseases, polyphasic MUPs arise due to slow conduction velocity in immature nerve sprouts or slow conduction velocity in reinnervated but still atrophied muscle fibers.

3.2.2.4 Variation

Variation in muscle fiber size also causes polyphasic MUPs in myopathic diseases. To prevent noisy baseline fluctuations from affecting the count of MUP phases, a valid baseline crossing must exceed a minimum absolute amplitude criterion.

3.2.2.5 Duration

Duration is the time interval between the first and last occurrences of the waveform exceeding a predefined amplitude threshold, e.g., 5 µV. The MUP onset and end are the summation of low-frequency components of SFAPs scattered over the entire pickup range of the electrode. As a result, the MUP duration provides information about the number of active fibers within the pickup range. However, since the motor unit territory can be larger than the pickup range of the electrode, MUP duration does not provide information about the total size of a large motor unit. MUP duration will increase if a motor unit has an increased number of fibers due to reinnervation. MUP duration is affected to a lesser degree by SFAP misalignment.

3.2.2.6 Area

Area indicates the number of fibers adjacent to the electrode; however, unlike MUP amplitude, MUP area depends on MUP duration and is therefore influenced by fibers in a larger region than that affecting MUP amplitude.

3.2.2.7 Turns

Turns is a measure of the complexity of the MUP, much like the number of phases; however, since a valid turn does not require a baseline crossing like a valid phase, the number of turns is more sensitive to changes in the MUP waveshape. In order to distinguish valid turns from signal noise, successive turns must be offset by a minimum amplitude difference. Based on the complementary information contained in the MUP features defined above, it is possible to infer the number and density of fibers in a motor unit as well as the synchronicity of the SFAPs.
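The turn-counting rule just described (count direction changes, but only when they are offset from the previously counted turn by a minimum amplitude difference) can be sketched as follows. The waveform samples and the thresholds are hypothetical, and amplitude units are arbitrary; this is an illustrative sketch, not a clinical algorithm.

```python
def count_turns(signal, min_delta=0.1):
    """Count MUP 'turns': direction changes of the waveform that are
    offset from the previous counted turning point by at least min_delta.
    Unlike a phase, a turn does not require a baseline crossing."""
    # Collect turning points (local maxima and minima of the waveform).
    extrema = []
    for i in range(1, len(signal) - 1):
        if (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0:
            extrema.append(signal[i])
    # A turn is valid only if it differs from the previous valid turn by
    # at least min_delta, which rejects small noise wiggles.
    turns = 0
    last = signal[0]
    for value in extrema:
        if abs(value - last) >= min_delta:
            turns += 1
            last = value
    return turns
```

For a triphasic waveform such as [0, 1, -1, 0.5, 0], a small threshold counts all three direction changes, while a threshold larger than the waveform excursions counts none.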
3.2.3 Single-Fiber EMG

The positive lead of the single-fiber electrode is the end cap of a 25-µm wire exposed through a side port on the cannula of a steel needle. Due to the small size of the positive lead, bioelectric sources located more than about 300 µm from the side port will appear as common-mode signals and be suppressed by the differential amplifier. To further enhance the selectivity, the recorded signal is high-pass filtered at 500 Hz to remove low-frequency background activity from distant fibers.

Figure 13. Measurement of the interpotential interval (IPI) between single-fiber potentials recorded simultaneously from two fibers of the same motor unit. The mean IPI is normally 5 to 50 µs but increases when neuromuscular transmission is disturbed.
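Variability of such IPI measurements (jitter) is commonly summarized as the mean consecutive difference (MCD) of successive intervals. A small sketch, with entirely hypothetical IPI values in microseconds:

```python
def mean_consecutive_difference(ipis_us):
    """Jitter expressed as the mean consecutive difference (MCD) of
    successive interpotential intervals (IPIs), in microseconds."""
    diffs = [abs(b - a) for a, b in zip(ipis_us, ipis_us[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical IPIs (µs) from repeated discharges of one fiber pair
ipis = [520, 545, 510, 530, 525]
mcd = mean_consecutive_difference(ipis)
```

The MCD is preferred over the standard deviation because it is less sensitive to slow trends in the IPI over the recording.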
3.2.4 Macro EMG

For this electrode, the cannula of a single-fiber or concentric electrode is used as the positive lead, while the reference electrode can be either a remote subcutaneous or remote surface electrode. Due to the large lead surface, this electrode picks up both near- and far-field activity. However, the signal has very small amplitude, and the macro electrode must therefore be coupled to an electronic averager. Quantitative features of the macro MUP include the peak-to-peak amplitude, the rectified area under the curve, and the number of phases.

3.3 Electroencephalography EEG

3.3.1 History

In 1875, Richard Caton published the first account documenting the recording of spontaneous brain electrical activity from the cerebral cortex of an experimental animal. The amplitude of these electrical oscillations was so low (i.e., in the microvolt range) that Caton's discovery is all the more amazing because it was made 50 years before suitable electronic amplifiers became available. In 1924, Hans Berger, of the University of Jena in Germany, carried out the first human EEG recordings using metal strips pasted to the scalps of his subjects as electrodes and a sensitive galvanometer as the recording instrument. Berger was able to measure the irregular, relatively small electrical potentials (i.e., 50 to 100 µV) coming from the brain. From 1924 to 1938, Berger laid the foundation for many of the present applications of electroencephalography. He was the first to use the word electroencephalogram in describing these brain potentials in humans. Berger also noted that these brain waves were not entirely random but instead displayed certain periodicities and regularities. For example, he observed that although these brain waves were slow (i.e., exhibited a synchronized pattern of high amplitude and low frequency, <3 Hz) during sleep, they were faster (i.e., exhibited a desynchronized pattern of low amplitude and higher frequency, 15 to 25 Hz) during waking behaviors. He suggested, quite correctly, that the brain's activity changed in a consistent and recognizable fashion when the general status of the subject changed, as from relaxation to alertness. Berger also concluded that these brain waves could be greatly affected by certain pathologic conditions, after noting a marked increase in the amplitude of these brain waves recorded during convulsive seizures.
3.3.2 EEG Recording Techniques

Electroencephalography or EEG is the measurement of neural activity within the brain. Each neurone in the brain receives and transmits information through the depolarisation of its cell body, sending an action potential along the nerve fibre. Within the human brain there is continuous activity, and therefore continuous depolarisation and re-polarisation of neurones. This results in the continuous generation of electrical signals, which can be detected by electrodes placed on the scalp. The measurement can be performed with instrumentation similar to that used for ECG measurement but with higher gain and common-mode rejection. The signals are much smaller than ECG signals, with an approximate amplitude of 2 µV. Usually surface electrodes of similar construction to ECG electrodes are used, with the metal-electrolyte interface being silver/silver chloride. During operations, needle electrodes may be implanted into various parts of the brain. This has allowed researchers to quantify which parts of the brain control which bodily functions.

Figure 14. EEG measurement.


We are continuously thinking, and therefore EEG signals appear much like electrical noise. In certain circumstances, however, the brain waves or EEG signal may have identifiable characteristics. If a person is relaxed with the eyes closed, lying in a prone position, the brain waves form into a regular, low-frequency, amplitude-modulated waveform characterised as an alpha wave. This occurs more readily in men than in women. The bandwidth of these signals is approximately 10 Hz. Beta waves are classified as waveforms between 18 and 30 Hz. EEG is used in studying patterns of brain action during sleep to enable the quantification of a patient's sleep into classes such as rapid eye movement (REM, approximately 30-second bursts of 8-12 Hz) and deep sleep (frequencies of less than 4 Hz). During operations, brain activity measured by EEG has been used to detect low oxygen and high carbon dioxide levels. Signals may also be recorded from electrodes implanted in the brain to assess the level of damage to a particular region.

Active electrodes, electrode cap, and amplifiers of a portable dEEG system with 256 channels.
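Rhythms such as the alpha and beta waves described above are usually identified by comparing band powers computed from an FFT. The sketch below uses a synthetic 10-Hz sine as a stand-in "alpha rhythm"; the sampling rate, duration, and band edges are illustrative assumptions (the 18-30 Hz beta range follows the text).

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz from an FFT-based
    periodogram. A simple sketch: no windowing or averaging is applied."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / n   # per-bin power
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

# Synthetic "alpha rhythm": a 10-Hz sine sampled at 256 Hz for 4 s
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)

alpha = band_power(eeg, fs, 8, 12)    # alpha band
beta = band_power(eeg, fs, 18, 30)    # beta band (18-30 Hz per the text)
```

For this pure 10-Hz input, virtually all power falls in the alpha band and essentially none in the beta band.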

3.4 Magnetoencephalography MEG

The MEG measures the extracranial magnetic fields produced by intracranial electrical currents. Neuromagnetic signals are many orders of magnitude weaker than the ambient magnetic noise, which is due to the earth's field and to the presence of ferromagnetic objects and electrical instrumentation. Typical scalp-recorded magnetic fields have a peak amplitude of about 100 fT (Gomez-Tortosa et al., 1994; Lewine, 1990), whereas environmental electromagnetic noise in a hospital (power lines, elevators, MRI magnets, etc.) may be as high as 1 T (10^15 fT) in extreme cases (Lewine, 1990). Therefore, to detect this kind of biological activity, it is necessary to use highly sensitive instrumentation and, at the same time, attempt to eliminate extraneous magnetic fields.
3.4.1 MEG Recording Device

MEG measurements were practically impossible before the introduction of superconductive instrumentation. The latest generation of biomagnetometers is composed of large arrays of pick-up coils, each of which is connected to a SQUID (superconducting quantum interference device) that acts as a very low noise, ultrahigh-gain, current-to-voltage converter. The SQUIDs and induction coils are immersed in liquid helium to maintain a superconducting state. This type of device can detect even very small changes in magnetic flux, such as those resulting from neurophysiological activity. The cost and complexity of the instrumentation led to the initial development of MEG systems containing only a few channels. Recently, however, whole-head systems with large arrays comprising 248 magnetometers have been developed (model 360WH, 4D Neuroimaging, San Diego, CA). An example of a 148-channel system is shown in Figure 16. The need for cryogenics, a shielded room, and a rigid helmet-type sensor limits the scope of applications of MEG-based mapping procedures.


Figure 15. Schematic diagram of a multisensor MEG system (left) along with a detection coil and SQUID in a single channel (right).

Figure 16. A whole-head MEG system with 148 recording channels operated in a magnetically shielded room.

3.5 Mapping Based on EEG or MEG

Both theory and experiment suggest that the MEG offers no significant advantage over the EEG (Cohen and Cuffin, 1991). MEG systems, however, are very expensive (total cost about $3 million); they require special cryogenic equipment, a magnetically shielded room, and daily monitoring and maintenance, and they are available only in a handful of places around the world. Thus, the clinical usefulness of MEG can be very limited. In contrast, in addition to the unique features mentioned earlier, the latest EEG systems incorporate several advantages: they are readily available in practically all clinical settings, and even the most sophisticated systems are much less expensive than MEG (total cost about $150,000). Therefore, successful brain mapping based on EEG can have a significant impact on patient care.
3.6 Digital Biomedical Signal Acquisition and Processing

3.6.1 Acquisition

A schematic representation of a general acquisition system is shown in Fig. 17. Several physical quantities are usually measured from biologic systems. They include electromagnetic quantities (currents, potential differences, field strengths, etc.) as well as mechanical, chemical, or generally nonelectrical variables (pressure, temperature, movements, etc.). Electric signals are detected by sensors (mainly electrodes), while nonelectric quantities are first converted by transducers into electric signals that can be easily treated, transmitted, and stored. Several books on biomedical instrumentation give detailed descriptions of the various transducers and the hardware requirements associated with the acquisition of the different biologic signals [Cobbold, 1988; Tompkins & Webster, 1981; Webster, 1992].

An analog preprocessing block is usually required to amplify and filter the signal (in order to make it satisfy the requirements of the hardware, such as the dynamic range of the analog-to-digital converter), to compensate for some unwanted sensor characteristics, or to reduce the portion of undesired noise. Moreover, the continuous-time signal should be bandlimited before analog-to-digital (A/D) conversion. Such an operation is needed to reduce the effect of aliasing induced by sampling, as will be described in the next section. Here it is important to remember that the acquisition procedure should preserve the information contained in the original signal waveform. This is a crucial point when recording biologic signals, whose characteristics often may be considered by physicians as indices of some underlying pathology (e.g., the ST-segment displacement on an ECG signal can be considered a marker of ischemia, and the peak-and-wave pattern on an EEG tracing can be a sign of epilepsy). Thus the acquisition system should not introduce any form of distortion that could be misleading or could destroy real pathologic alterations.
For this reason, the analog prefiltering block should be designed with constant modulus and linear-phase (or zero-phase) frequency response, at least in the passband, over the frequencies of interest. These requirements ensure that the signal arrives undistorted at the A/D converter. The analog waveform is then A/D converted into a digital signal; i.e., it is transformed into a series of numbers, discretized both in time and in amplitude, that can be easily managed by digital processors. The A/D conversion ideally can be divided into two steps, as shown in Fig. 17: the sampling process, which converts the continuous signal into a discrete-time series whose elements are called samples, and a quantization procedure, which assigns each sample an amplitude value from a set of predetermined discrete values.
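The two steps of A/D conversion can be sketched in code. The signal, sampling rate, input range, and 8-bit resolution below are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def quantize(samples, n_bits, v_min=-1.0, v_max=1.0):
    """Uniform quantizer: map each sample to the nearest of 2**n_bits
    levels spanning [v_min, v_max], as an ideal A/D converter would.
    Returns the integer codes and the reconstructed amplitudes."""
    levels = 2 ** n_bits
    step = (v_max - v_min) / (levels - 1)
    codes = np.round((np.clip(samples, v_min, v_max) - v_min) / step)
    return codes.astype(int), codes * step + v_min

# Sampling: discretize time by taking values of a 5-Hz sine at fs = 100 Hz
fs = 100
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)

# Quantization: discretize amplitude to 8 bits (256 levels)
codes, x_q = quantize(x, n_bits=8)
```

With round-to-nearest quantization, the error on each sample is bounded by half a quantization step, which is the usual figure of merit for an ideal converter.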


Figure 17. General block diagram of the acquisition procedure of a digital signal.
3.6.2 Signal Processing

A brief review of different signal-processing techniques is given in this section. They include traditional filtering, averaging techniques, and spectral estimators. Only the main concepts of the analysis and design of digital filters are presented, and a few examples are illustrated in the processing of the ECG signal. Averaging techniques are then described briefly, and their usefulness is demonstrated when noise and signal have similar frequency contents but different statistical properties; an example of evoked-potential enhancement from EEG background noise is illustrated. Finally, different spectral estimators are considered and some applications shown in the analysis of RR fluctuations [i.e., the heart rate variability (HRV) signal].

3.6.3 Digital Filters

A digital filter is a discrete-time system that operates some transformation on a digital input signal x(n), generating an output sequence y(n), as schematically shown by the block diagram in Fig. 18. The characteristics of the transformation T[·] identify the filter. The filter is time-variant if T[·] is a function of time, or time-invariant otherwise; it is said to be linear if and only if, with x1(n) and x2(n) as inputs producing y1(n) and y2(n), respectively, we have:
T[a·x1(n) + b·x2(n)] = a·T[x1(n)] + b·T[x2(n)] = a·y1(n) + b·y2(n)

Figure 18. General block diagram of a digital filter. The output digital signal y(n) is obtained from the input x(n) by means of a transformation T[·] which identifies the filter.

In the following, only linear, time-invariant filters will be considered, even if several interesting applications of nonlinear [Glaser & Ruchkin, 1976; Tompkins, 1993] or time-variant [Cohen, 1983; Hut & Webster, 1973; Thakor, 1987; Widrow et al., 1975] filters have been proposed in the literature for the analysis of biologic signals. The behavior of a filter is usually described in terms of input-output relationships. These are usually assessed by exciting the filter with different inputs and evaluating the response (output) of the system. In particular, if the input is the impulse sequence δ(n), the resulting output, the impulse response, plays a central role in describing the characteristics of the filter. Such a response can be used to determine the response to more complicated input sequences. In fact, let us consider a generic input sequence x(n) as a sum of weighted and delayed impulses:

x(n) = Σ_{k=−∞}^{+∞} x(k) δ(n − k)    (1)

and let us identify the response to δ(n − k) as h(n − k). If the filter is time-invariant, each delayed impulse will produce the same response, but time-shifted; due to the linearity property, such responses will be summed at the output:

y(n) = Σ_{k=−∞}^{+∞} x(k) h(n − k)    (2)

This convolution product links input and output and defines the properties of the filter. Two of these properties should be recalled: stability and causality. The former ensures that bounded (finite) inputs produce bounded outputs. This property can be deduced from the impulse response; it can be proved that the filter is stable if and only if

Σ_{k=−∞}^{+∞} |h(k)| < ∞    (3)

Causality means that the filter will not respond to an input before the input is applied. This is in agreement with our physical concept of a system, but it is not strictly required for a digital filter, which can be implemented in a noncausal form. A filter is causal if and only if h(k) = 0 for k < 0. Even if relation (2) completely describes the properties of the filter, it is most often necessary to express the input-output relationships of linear discrete-time systems in terms of the z-transform operator, which allows relation (2) to be expressed in a more useful, operative, and simpler form.
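The convolution sum of relation (2) and the stability condition of relation (3) can be illustrated with a short sketch. The 3-tap smoothing filter h below is a hypothetical example.

```python
import numpy as np

def filter_output(x, h):
    """Direct evaluation of the convolution sum of relation (2):
    y(n) = sum_k x(k) h(n - k), for finite-length sequences."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def is_stable(h):
    """BIBO stability test of relation (3): sum |h(k)| must be finite.
    Any finite impulse response trivially satisfies it."""
    return np.sum(np.abs(h)) < np.inf

h = np.array([0.25, 0.5, 0.25])     # a hypothetical 3-tap smoothing filter
x = np.array([0.0, 1.0, 0.0, 0.0])  # a delayed impulse as input
y = filter_output(x, h)
```

Because the input is an impulse delayed by one sample, the output is simply the impulse response shifted by one sample, which demonstrates the time-invariance property stated in the text.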
3.6.4 Signal Averaging

Traditional filtering performs very well when the frequency contents of signal and noise do not overlap. When the noise bandwidth is completely separated from the signal bandwidth, the noise can easily be decreased by means of a linear filter according to the procedures described earlier. On the other hand, when the signal and noise bandwidths overlap and the noise amplitude is large enough to seriously corrupt the signal, a traditional filter designed to cancel the noise will also introduce signal cancellation or, at least, distortion. As an example, let us consider the brain potentials evoked by a sensory stimulation (visual, acoustic, or somatosensory), generally called evoked potentials (EP). Such a response is very difficult to determine because its amplitude is generally much lower than the background EEG activity. Both EP and EEG signals contain information in the same frequency range; thus the problem of separating the desired response cannot be approached via traditional digital filtering [Aunon et al., 1981]. Another typical example is the detection of ventricular late potentials (VLP) in the ECG signal. These potentials are very small in amplitude, comparable with the noise superimposed on the signal, and similar to it in frequency content [Simson, 1981]. In such cases, an increase in the SNR may be achieved on the basis of the different statistical properties of signal and noise.

When the desired signal repeats identically at each iteration (i.e., the EP at each sensory stimulus, the VLP at each cardiac cycle), the averaging technique can satisfactorily solve the problem of separating signal from noise. This technique sums a set of temporal epochs of the signal together with the superimposed noise. If the time epochs are properly aligned, through efficient trigger-point recognition, the signal waveforms sum directly. Assume that the signal and the noise are characterized by the following statistical properties:

1. All the signal epochs contain a deterministic signal component x(n) that does not vary across epochs.
2. The superimposed noise w(n) is a broadband stationary process with zero mean and variance σ², so that

   E[w(n)] = 0,    E[w²(n)] = σ²    (4)

3. Signal x(n) and noise w(n) are uncorrelated, so that the recorded signal y(n) at the ith iteration can be expressed as

   y_i(n) = x(n) + w_i(n)    (5)

Then the averaging process yields:
y_t(n) = (1/N) Σ_{i=1}^{N} y_i(n) = x(n) + (1/N) Σ_{i=1}^{N} w_i(n)    (6)

The noise term is an estimate of the mean obtained by averaging N realizations. Such an average is a new random variable that has the same mean as the summed terms (zero in this case) and a variance of σ²/N. The effect of the coherent averaging procedure is thus to maintain the amplitude of the signal while reducing the variance of the noise by a factor of N. The improvement in SNR (in rms value) with respect to the SNR at the generic ith sweep is:

SNR = SNR_i · √N    (7)

Thus signal averaging improves the SNR by a factor of √N in rms value. A coherent averaging procedure can be viewed as a digital filtering process, and its frequency characteristics can be investigated. An alternative expression for H(z) is

H(z) = (1/N) · (1 − z^(−Nh)) / (1 − z^(−h))    (8)


This is a moving-average low-pass filter as discussed earlier, where the output is a function of the preceding value with a lag of h samples; in practice, the filter operates not on the time sequence but on corresponding samples of the sweep sequence. The frequency response of the filter is shown in Fig. 19 for different values of the parameter N. In this case, the sampling frequency fs is the repetition frequency of the sweeps, and we may assume it to be 1 without loss of generality. The frequency response is characterized by a main lobe with the first zero corresponding to f = 1/N and by successive secondary lobes separated by zeroes at intervals of 1/N. Both the width of each lobe and the amplitude of the secondary lobes decrease as the number N of sweeps increases. The desired signal is sweep-invariant and will be unaffected by the filter, while the broadband noise is decreased. Some leakage of noise energy takes place at the center of the sidelobes and, of course, at zero frequency. Under the hypothesis of zero-mean noise, the dc component has no effect, and the diminishing sidelobe amplitude implies that the leakage is not relevant at high frequencies. It is important to recall that average filtering is based on the hypothesis of a broadband distribution of the noise and a lack of correlation between signal and noise. Unfortunately, these assumptions are not always verified in biologic signals. For example, the assumption of independence between the background EEG and the evoked potential may not be completely realistic [Gevins & Remond, 1987]. In addition, much attention must be paid to the alignment of the sweeps; in fact, slight misalignments (fiducial point jitter) will lead to a low-pass filtering effect on the final result.
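Under the stated assumptions (a repeatable signal plus zero-mean broadband noise), the coherent averaging procedure can be sketched numerically. The waveform shape, noise level, and sweep count below are entirely synthetic choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deterministic "evoked potential" buried in zero-mean broadband noise.
n_samples, n_sweeps, sigma = 200, 400, 1.0
t = np.linspace(0, 1, n_samples)
x = 0.2 * np.exp(-((t - 0.5) ** 2) / 0.005)   # repeatable signal component

# Each sweep is y_i(n) = x(n) + w_i(n), as in relation (5)
sweeps = x + sigma * rng.standard_normal((n_sweeps, n_samples))

# Coherent (signal) average over properly aligned sweeps, relation (6)
average = sweeps.mean(axis=0)

noise_before = np.std(sweeps[0] - x)   # residual noise in one sweep, ~ sigma
noise_after = np.std(average - x)      # residual noise after averaging
```

The residual noise after averaging should shrink roughly as σ/√N, here by a factor of about 20 for 400 sweeps, consistent with relation (7).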

Figure 19. Equivalent frequency response for the signal-averaging procedure for different values of N.

3.6.5 Spectral Analysis

The various methods to estimate the power spectral density (PSD) of a signal may be classified as nonparametric and parametric.

3.6.5.1 Nonparametric Estimators of PSD

This is a traditional method of frequency analysis based on the Fourier transform, which can be evaluated easily through the fast Fourier transform (FFT) algorithm [Marple, 1987]. The expression of the PSD as a function of frequency, P(f), can be obtained directly from the time series y(n) by using the periodogram expression

$P(f) = \frac{1}{N T_s}\left| T_s \sum_{k=0}^{N-1} y(k)\, e^{-j 2\pi k T_s f} \right|^2 = \frac{1}{N T_s}\,|Y(f)|^2$   (9)

where Ts is the sampling period, N is the number of samples, and Y(f) is the discrete-time Fourier transform of y(n).
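A minimal numerical sketch of Eq. (9), using a synthetic test signal and sampling parameters chosen only for illustration, evaluates the periodogram via the FFT and locates a known spectral peak:

```python
import numpy as np

fs = 100.0                 # sampling frequency (Hz), Ts = 1/fs
Ts = 1.0 / fs
N = 1024
n = np.arange(N)
y = np.sin(2 * np.pi * 10 * n * Ts)      # 10-Hz test sinusoid

# Periodogram per Eq. (9): P(f) = |Y(f)|^2 / (N*Ts), with Y(f) = Ts * DFT{y}
Y = Ts * np.fft.rfft(y)
P = np.abs(Y) ** 2 / (N * Ts)
freqs = np.fft.rfftfreq(N, d=Ts)

peak = freqs[np.argmax(P)]
print(peak)                # expect a peak near 10 Hz
```

The frequency resolution is limited to fs/N, so the peak lands on the FFT bin nearest 10 Hz.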

Figure 20. Enhancement of evoked potential (EP) by means of averaging technique. The EEG noise is progressively reduced, and the EP morphology becomes more recognizable as the number of averaged sweeps (N) is increased.


FFT-based methods are widely used because of their easy applicability, computational speed, and direct interpretation of the results. Quantitative parameters are obtained by evaluating the power contribution in different frequency bands. This is achieved by dividing the frequency axis into ranges of interest and by integrating the PSD over such intervals. The area under each portion of the spectrum is the fraction of the total signal variance due to the specific frequencies. However, the autocorrelation function and the Fourier transform are theoretically defined on infinite data sequences. Thus errors are introduced by the need to operate on finite data records in order to obtain estimators of the true functions. In addition, for a finite data set it is necessary to make assumptions, sometimes not realistic, about the data outside the recording window; commonly they are assumed to be zero. This implicit rectangular windowing of the data results in spectral leakage in the PSD. Windows that smoothly taper the side samples to zero are most often used to mitigate this problem, even though they may introduce a reduction in frequency resolution [Harris, 1978]. Furthermore, the estimators of the signal PSD are not statistically consistent, and various techniques are needed to improve their statistical performance. Several methods are described in the literature; those of Daniell [1946], Bartlett [1948], and Welch [1970] are the most widely used. Of course, all these procedures cause a further reduction in frequency resolution.
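The leakage caused by implicit rectangular windowing, and its mitigation by a tapered window, can be sketched with a toy example (an off-bin sinusoid of my choosing; the Hann window stands in for the smooth windows surveyed by Harris [1978]):

```python
import numpy as np

fs, N = 1000.0, 512
t = np.arange(N) / fs
y = np.sin(2 * np.pi * 102.3 * t)      # tone not on an FFT bin -> leakage

def psd(x, w):
    """Windowed periodogram, normalized by the window energy."""
    X = np.fft.rfft(x * w)
    return np.abs(X) ** 2 / np.sum(w ** 2)

freqs = np.fft.rfftfreq(N, d=1 / fs)
far = freqs > 300                      # region well away from the tone

P_rect = psd(y, np.ones(N))            # implicit rectangular window
P_hann = psd(y, np.hanning(N))         # smooth taper
leak_rect = P_rect[far].max() / P_rect.max()
leak_hann = P_hann[far].max() / P_hann.max()
print(leak_rect, leak_hann)            # Hann leakage is orders of magnitude lower
```

The price of the taper is a wider main lobe, i.e., the loss of frequency resolution mentioned above.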


4. X-Ray Equipment
Conventional x-ray radiography produces images of anatomy that are shadowgrams based on x-ray absorption. The x-rays are produced in a region that is nearly a point source and then are directed on the anatomy to be imaged. The x-rays emerging from the anatomy are detected to form a two-dimensional image, where each point in the image has a brightness related to the intensity of the x-rays at that point. Image production relies on the fact that significant numbers of x-rays penetrate through the anatomy and that different parts of the anatomy absorb different amounts of x-rays. In cases where the anatomy of interest does not absorb x-rays differently from surrounding regions, contrast may be increased by introducing strong x-ray absorbers. For example, barium is often used to image the gastrointestinal tract.

X-rays are electromagnetic waves (like light) having an energy in the general range of approximately 1 to several hundred kiloelectronvolts (keV). In medical x-ray imaging, the x-ray energy typically lies between 5 and 150 keV, with the energy adjusted to the anatomic thickness and the type of study being performed. X-rays striking an object may either pass through unaffected or may undergo an interaction. These interactions usually involve either the photoelectric effect (where the x-ray is absorbed) or scattering (where the x-ray is deflected to the side with a loss of some energy). X-rays that have been scattered may undergo deflection through a small angle and still reach the image detector; in this case they reduce image contrast and thus degrade the image. This degradation can be reduced by the use of an air gap between the anatomy and the image receptor or by use of an antiscatter grid. Because of health effects, the doses in radiography are kept as low as possible. However, x-ray quantum noise becomes more apparent in the image as the dose is lowered.
This noise is due to the fact that there is an unavoidable random variation in the number of x-rays reaching a point on an image detector. The quantum noise depends on the average number of x-rays striking the image detector and is a fundamental limit to radiographic image quality. The equipment of conventional x-ray radiography mostly deals with the creation of a desirable beam of x-rays and with the detection of a high-quality image of the transmitted x-rays. These are discussed in the following sections.
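A small simulation (with hypothetical photon counts, not values from the text) illustrates why quantum noise grows as dose falls: photon counts follow Poisson statistics, so the relative fluctuation scales as 1/√(mean count):

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_noise(mean_photons, n_pixels=100_000):
    """Relative (rms/mean) fluctuation of Poisson-distributed photon counts."""
    counts = rng.poisson(mean_photons, size=n_pixels)
    return counts.std() / counts.mean()

# Quantum noise falls as 1/sqrt(mean count): a 100x dose reduction
# makes the relative noise roughly 10x worse.
for mean_photons in (100, 10_000):
    print(mean_photons, relative_noise(mean_photons))
```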
4.1 Production of X-Rays

4.1.1 X-Ray Tube

The standard device for production of x-rays is the rotating anode x-ray tube, as illustrated in Fig. 21. The x-rays are produced from electrons that have been accelerated in vacuum from the cathode to the anode. The electrons are emitted from a filament mounted within a groove in the cathode. Emission occurs when the filament is heated by passing a current through it. When the filament is hot enough, some electrons obtain a thermal energy sufficient to overcome the energy binding the electron to the metal of the filament. Once the electrons have boiled off from the filament, they are accelerated by a voltage difference applied from the cathode to the anode. This voltage is supplied by a generator (see below). After the electrons have been accelerated to the anode, they will be stopped in a short distance. Most of the electrons' energy is converted into heating of the anode, but a small percentage is converted to x-rays by two main methods. One method of x-ray production relies on the fact that deceleration of a charged particle results in emission of electromagnetic radiation, called bremsstrahlung radiation.

These x-rays will have a wide, continuous distribution of energies, with the maximum being the total energy the electron had when reaching the anode. The number of x-rays is relatively small at higher energies and increases for lower energies. A second method of x-ray production occurs when an accelerated electron strikes an atom in the anode and removes an inner electron from this atom. The vacant electron orbital will be filled by a neighboring electron, and an x-ray may be emitted whose energy matches the energy change of the electron. The result is production of large numbers of x-rays at a few discrete energies. Since the energy of these characteristic x-rays depends on the material on the surface of the anode, materials are chosen partially to produce x-rays with desired energies. For example, molybdenum is frequently used in anodes of mammography x-ray tubes because of its 20-keV characteristic x-rays.

Figure 21. X-ray tube.

Low-energy x-rays are undesirable because they increase the dose to the patient but do not contribute to the final image because they are almost totally absorbed. Therefore, the number of low-energy x-rays is usually reduced by use of a layer of absorber that preferentially absorbs them. The extent to which low-energy x-rays have been removed can be quantified by the half-value layer of the x-ray beam. It is ideal to create x-rays from a point source because any increase in source size will result in blurring of the final image. Quantitatively, the effects of the blurring are described by the focal spot's contribution to the system modulation transfer function (MTF). The blurring has its main effect on edges and small objects, which correspond to the higher frequencies. The effect of this blurring depends on the geometry of the imaging and is worse for larger distances between the object and the image receptor (which correspond to larger geometric magnifications). To avoid this blurring, the electrons must be focused to strike a small spot of the anode. The focusing is achieved by electric fields determined by the exact shape of the cathode. However, there is a limit to the size of this focal spot because the anode material will melt if too much power is deposited into too small an area. This limit is improved by use of a rotating anode, where the anode target material is rotated about a central axis and new (cooler) anode material is constantly being rotated into place at the focal spot. To further increase the power limit, the anode is made with an angled surface. This allows the heat to be deposited in a relatively large spot while the apparent spot size at the detector will be smaller by a factor of the sine of the anode angle. Unfortunately, this angle cannot be made too small because it limits the area that can be covered with x-rays. In practice, tubes are usually supplied with two (or more) focal spots of differing sizes, allowing choice of a smaller (sharper, lower-power) spot or a larger (blurrier, higher-power) spot. The x-ray tube also limits the total number of x-rays that can be used in an exposure because the anode will melt if too much total energy is deposited in it. This limit can be increased by using a more massive anode.
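The two geometric relations in this passage, the apparent focal spot set by the anode angle and the blur growing with magnification, can be sketched with illustrative numbers (the specific dimensions below are assumptions, not values from the text):

```python
import math

# Apparent focal spot: actual electron-beam footprint on the anode times
# the sine of the anode angle (illustrative dimensions).
track_length_mm = 7.0
anode_angle_deg = 12.0
effective_spot = track_length_mm * math.sin(math.radians(anode_angle_deg))

def geometric_blur(focal_spot_mm, source_object_mm, object_detector_mm):
    """Geometric unsharpness: blur = focal_spot * (magnification - 1)."""
    magnification = (source_object_mm + object_detector_mm) / source_object_mm
    return focal_spot_mm * (magnification - 1)

print(round(effective_spot, 2))
print(geometric_blur(1.0, 800.0, 200.0))   # magnification 1.25 -> 0.25-mm blur
```

This is why larger object-to-receptor distances (larger magnification) demand the smaller focal spot.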
4.1.2 Generator

The voltages and currents in an x-ray tube are supplied by an x-ray generator. This controls the cathode-anode voltage, which partially defines the number of x-rays made because the number of x-rays produced increases with voltage. The voltage is also chosen to produce x-rays with desired energies: higher voltages make x-rays that generally are more penetrating but give a lower-contrast image. The generator also determines the number of x-rays created by controlling the amount of current flowing from the cathode to the anode and by controlling the length of time this current flows. This points out the two major parameters that describe an x-ray exposure: the peak kilovolts (peak kilovolts from the anode to the cathode during the exposure) and the milliampere-seconds (the product of the current in milliamperes and the exposure time in seconds). The peak kilovolts and milliampere-seconds for an exposure may be set manually by an operator based on estimates of the anatomy. Some generators use manual entry of kilovolts and milliamperes but determine the exposure time automatically. This involves sampling the radiation either before or after the image sensor and is referred to as phototiming. The anode-cathode voltage (often 15 to 150 kV) can be produced by a transformer that converts 120 or 220 V ac to higher voltages. This output is then rectified and filtered. Use of three-phase transformers gives voltages that are nearly constant versus those from single-phase transformers, thus avoiding low kilovoltages that produce undesired low-energy x-rays. In a variation of this method, the transformer output can be controlled at a constant voltage by electron tubes. This gives practically constant voltages and, further, allows the voltage to be turned on and off so quickly that millisecond exposure times can be achieved.
In a third approach, an ac input can be rectified and filtered to produce a nearly dc voltage, which is then sent to a solid-state inverter that can turn on and off thousands of times a second. This higher-frequency ac voltage can be converted more easily to a high voltage by a transformer. Equipment operating on this principle is referred to as midfrequency or high-frequency generators.
4.1.3 Image Detection: Screen Film Combinations

Special properties are needed for image detection in radiographic applications, where a few high-quality images are made in a study. Because decisions are not immediately made from the images, it is not necessary to display them instantly (although it may be desirable). The most commonly used method of detecting such a radiographic x-ray image uses light-sensitive negative film as a medium. Because high-quality film has a poor response to x-rays, it must be used together with x-ray-sensitive screens. Such screens are usually made with CaWO4 or phosphors using rare earth elements such as doped Gd2O2S. The film is enclosed in a light-tight cassette in contact with an x-ray screen or between two x-ray screens. When an x-ray image strikes the cassette, the x-rays are absorbed by the screens with high efficiency, and their energy is converted to visible light. The light then exposes a negative image on the film, which is in close contact with the screen.

Several properties have been found to be important in describing the relative performance of different films. One critical property is the contrast, which describes the amount of additional darkening caused by an additional amount of light when working near the center of a film's exposure range. Another property, the latitude of a film, describes the film's ability to create a usable image with a wide range of input light levels. Generally, latitude and contrast are competing properties, and a film with a large latitude will have a low contrast. Additionally, the modulation transfer function (MTF) of a film is an important property. MTF is most degraded at higher frequencies; this high-frequency MTF is also described by the film's resolution, its ability to image small objects.

X-ray screens also have several key performance parameters. It is essential that screens detect and use a large percentage of the x-rays striking them, which is measured as the screen's quantum detection efficiency. Currently used screens may detect 30% of x-rays for images at higher peak kilovolts and as much as 60% for lower peak kilovolt images. Such efficiencies lead to the use of two screens (one on each side of the film) for improved x-ray utilization. As with films, a good high-frequency MTF is needed to give good visibility of small structures and edges. Some MTF degradation is associated with blurring that occurs when light spreads as it travels through the screen and to the film. This leads to a compromise on thickness; screens must be thick enough for good quantum detection efficiency but thin enough to avoid excess blurring. For a film/screen system, a certain amount of radiation will be required to produce a usable amount of film darkening.
The ability of the film/screen system to make an image with a small amount of radiation is referred to as its speed. The speed depends on a number of parameters: the quantum detection efficiency of the screen, the efficiency with which the screen converts x-ray energy to light, the match between the color emitted by the screen and the colors to which the film is sensitive, and the amount of film darkening for a given amount of light. The number of x-rays used in producing a radiographic image will be chosen to give a viewable amount of exposure to the film. Therefore, patient dose will be reduced by the use of a high-speed screen/film system. However, high-speed film/screen combinations give a noisier image because of the smaller number of x-rays detected in its creation.
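The contrast-latitude trade-off can be illustrated with a toy characteristic (H&D) curve; the logistic shape and all parameter values below are modeling assumptions, not data from the text:

```python
import numpy as np

# Toy characteristic curve: optical density vs. log exposure, modeled as a
# logistic. "gamma" (the slope at the curve's center) plays the role of
# film contrast.
def density(log_exposure, gamma, d_min=0.2, d_max=3.0):
    return d_min + (d_max - d_min) / (1 + np.exp(-gamma * log_exposure))

def latitude(gamma, lo=0.5, hi=2.5):
    """Usable log-exposure range where density stays between lo and hi."""
    x = np.linspace(-5, 5, 10001)
    d = density(x, gamma)
    usable = x[(d > lo) & (d < hi)]
    return usable[-1] - usable[0]

# Higher contrast (larger gamma) -> narrower latitude, and vice versa.
print(latitude(gamma=1.0), latitude(gamma=3.0))
```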
4.1.4 Image Detection: X-Ray Image Intensifiers with Televisions

Although screen-film systems are excellent for radiography, they are not usable for fluoroscopy, where lower x-ray levels are produced continuously and many images must be presented almost immediately. Fluoroscopic images are not used for diagnosis but rather as an aid in performing tasks such as placement of catheters in blood vessels during angiography. For fluoroscopy, x-ray image intensifiers are used in conjunction with television cameras. An x-ray image intensifier detects the x-ray image and converts it to a small, bright image of visible light. This visible image is then transferred by lenses to a television camera for final display on a monitor.

The basic structure of an x-ray image intensifier is shown in Fig. 22. The components are held in a vacuum by an enclosure made of glass and/or metal. The x-rays enter through a low-absorption window and then strike an input phosphor usually made of doped CsI. As in the x-ray screens described above, the x-rays are converted to light in the CsI. On top of the CsI layer is a photoemitter, which absorbs the light and emits a number of low-energy electrons that initially spread in various directions. The photoelectrons are accelerated and steered by a set of grids that have voltages applied to them. The electrons strike an output phosphor structure that converts their energy to the final output image made of light. This light then travels through an output window to a lens system. The grid voltages serve to add energy to the electrons so that the output image is brighter. Grid voltages and shapes are also chosen so that the x-ray image is converted to a light image with minimal distortion. Further, the grids must be designed to take photoelectrons that are spreading from a point on the photoemitter and focus them back together at a point on the output phosphor.

It is possible to adjust grid voltages on an image intensifier so that it has different fields of coverage. Either the whole input area can be imaged on the output phosphor, or smaller parts of the input can be imaged on the whole output. Use of smaller parts of the input is advantageous when only smaller parts of anatomy need to be imaged with maximum resolution and a large display. For example, an image intensifier that could cover a 12-in.-diameter input also might be operated so that a 9-in.-diameter or 6-in.-diameter input covers all the output phosphor.

X-ray image intensifiers can be described by a set of performance parameters not unlike those of screen/film combinations. It is important that x-rays be detected and used with a high efficiency; current image intensifiers have quantum detection efficiencies of 60% to 70% for 59-keV x-rays. As with film/screens, a good high-frequency MTF is needed to image small objects and sharp edges without blurring. However, low-frequency MTF also must be controlled carefully in image intensifiers, since it can be degraded by internal scattering of x-rays, photoelectrons, and light over relatively large distances. The amount of intensification depends on the brightness and size of the output image for a given x-ray input.
This is described either by the gain (specified relative to a standard x-ray screen) or by the conversion efficiency [a light output per radiation input measured in (cd/m²)/(mR/min)]. Note that producing a smaller output image is as important as making a light image with more photons because the small image can be handled more efficiently by the lenses that follow. Especially when imaging the full input area, image intensifiers introduce a pincushion distortion into the output image. Thus a square object placed off-center will produce an image that is stretched in the direction away from the center.
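One common way to see why a smaller output image matters is to factor the overall brightness gain (approximately) into a flux gain from electron acceleration and a minification gain from squeezing the input area onto the smaller output phosphor; the numbers below are illustrative assumptions, not specifications of any device:

```python
# Brightness gain ~ flux gain * minification gain, where the minification
# gain is the ratio of input to output areas (diameters squared).
def brightness_gain(input_diam_mm, output_diam_mm, flux_gain):
    minification_gain = (input_diam_mm / output_diam_mm) ** 2
    return flux_gain * minification_gain

# 300-mm (12-in.) input imaged onto a 25-mm output with a flux gain of 50:
print(brightness_gain(300, 25, 50))   # (300/25)^2 * 50 = 7200
```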

Figure 22. X-ray image intensifier.


4.1.5 Biomedical Imaging

Although an image intensifier output could be viewed directly with a lens system, there is more flexibility when the image intensifier is viewed with a television camera and the output is displayed on a monitor. Televisions are currently used with pickup tubes and with CCD sensors. When a television tube is used, the image is focused on a charged photoconducting material at the tube's input. A number of materials are used, including Sb2S3, PbO, and SeTeAs. The light image discharges regions of the photoconductor, converting the image to a charge distribution on the back of the photoconducting layer. Next, the charge distribution is read by scanning a small beam of electrons across the surface, which recharges the photoconductor. The recharging current is proportional to the light intensity at the point being scanned; this current is amplified and then used to produce an image on a monitor. The tube target is generally scanned in an interlaced mode in order to be consistent with broadcast television and allow use of standard equipment.

In fluoroscopy, it is desirable to use the same detected dose for all studies so that the image noise is approximately constant. This is usually achieved by monitoring the image brightness in a central part of the image intensifier's output, since brightness generally increases with dose. The brightness may be monitored by a photomultiplier tube that samples it directly or by analyzing signal levels in the television. However, maintaining a constant detected dose would lead to high patient doses in the case of very absorptive anatomy. To avoid problems here, systems are generally required by federal regulations to have a limit on the maximum patient dose. In those cases where the dose limit prevents the image intensifier from receiving the usual dose, the output image becomes darker.
To compensate for this, television systems are often operated with automatic gain control that gives an image on the monitor of a constant brightness no matter what the brightness from the image intensifier.
4.1.6 Image Detection: Digital Systems

In both radiography and fluoroscopy, there are advantages to the use of digital images. This allows image processing for better displayed images, use of lower doses in some cases, and opens the possibility for digital storage with a PACS system or remote image viewing via teleradiology. Additionally, some digital systems provide better image quality because of fewer processing steps, lack of distortion, or improved uniformity. A common method of digitizing medical x-ray images uses the voltage output from an image-intensifier/TV system. This voltage can be digitized by an analog-to-digital converter at rates fast enough to be used with fluoroscopy as well as radiography.

Another technology for obtaining digital radiographs involves use of photostimulable phosphors. Here the x-rays strike an enclosed sheet of phosphor that stores the x-ray energy. This phosphor can then be taken to a read-out unit, where the phosphor surface is scanned by a small light beam of proper wavelength. As a point on the surface is read, the stored energy is emitted as visible light, which is then detected, amplified, and digitized. Such systems have the advantage that they can be used with existing systems designed for screen/film detection because the phosphor sheet package is the same size as that for screen films.

A newer method for digital detection involves use of active-matrix thin-film-transistor technology, in which an array of small sensors is grown in hydrogenated amorphous silicon. Each sensor element includes an electrode for storing charge that is proportional to its x-ray signal. Each electrode is coupled to a transistor that either isolates it during acquisition or couples it to digitization circuitry during readout. There are two common methods for introducing the charge signal on each electrode. In one method, a layer of x-ray absorber (typically selenium) is deposited on the array of sensors; when this layer is biased and x-rays are absorbed there, their energy is converted to electron-hole pairs and the resulting charge is collected on the electrode. In the second method, each electrode is part of a photodiode that makes electron-hole pairs when exposed to light; the light is produced from x-rays by a layer of scintillator (such as CsI) that is deposited on the array.

Use of a digital system provides several advantages in fluoroscopy. The digital image can be processed in real time with edge enhancement, smoothing, or application of a median filter. Also, frame-to-frame averaging can be used to decrease image noise, at the expense of blurring the image of moving objects. Further, digital fluoroscopy with a TV system allows the TV tube to be scanned in formats that are optimized for read-out; the image can still be shown in a different format that is optimized for display. Another advantage is that the displayed image need not go blank when x-ray exposure is ended; instead, a repeated display of the last image is shown. This last-image-hold significantly reduces doses in those cases where the radiologist needs to see an image for evaluation but does not necessarily need a continuously updated image. The processing of some digital systems also allows the use of pulsed fluoroscopy, where the x-rays are produced in a short, intense burst instead of continuously. In this method the pulses of x-rays are made either by biasing the x-ray tube filament or by quickly turning on and off the anode-cathode voltage. This has the advantage of making sharper images of objects that are moving.
Often one x-ray pulse is produced for every display frame, but there is also the ability to obtain dose reduction by leaving the x-rays off for some frames. With such a reduced exposure rate, doses can be reduced by a factor of two or four by only making x-rays every second or fourth frame. For those frames with no x-ray pulse, the system repeats a display of the last frame with x-rays.

4.2 Computed Tomography

4.2.1 Instrumentation

The development of computed tomography (CT) in the early 1970s revolutionized medical radiology. For the first time, physicians were able to obtain high-quality tomographic (cross-sectional) images of internal structures of the body. Over the next 10 years, 18 manufacturers competed for the exploding world CT market. Technical sophistication increased dramatically, and even today, CT continues to mature, with new capabilities being researched and developed.

Computed tomographic images are reconstructed from a large number of measurements of x-ray transmission through the patient (called projection data). The resulting images are tomographic maps of the x-ray linear attenuation coefficient. The mathematical methods used to reconstruct CT images from projection data are discussed in the next section. In this section, the hardware and instrumentation in a modern scanner are described.

The first practical CT instrument was developed in 1971 by Dr. G. N. Hounsfield in England and was used to image the brain [Hounsfield, 1980]. The projection data were acquired in approximately 5 min, and the tomographic image was reconstructed in approximately 20 min. Since then, CT technology has developed dramatically, and CT has become a standard imaging procedure for virtually all parts of the body in thousands of facilities throughout the world. Projection data are typically acquired in approximately 1 s, and the image is reconstructed in 3 to 5 s.
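The transmission measurements described here reduce, for each ray, to a line integral of the linear attenuation coefficient via the Beer-Lambert law; a one-ray sketch with made-up attenuation values:

```python
import numpy as np

# Along one ray, Beer-Lambert gives I = I0 * exp(-sum(mu_i * d)), so the
# projection value used for reconstruction is p = -ln(I / I0),
# i.e., the sum of mu * path length along the ray.
mu = np.array([0.19, 0.21, 0.05, 0.20])   # illustrative mu values (1/cm)
d = 0.5                                    # path length through each voxel (cm)
I0 = 1.0e6                                 # incident intensity

I = I0 * np.exp(-(mu * d).sum())
projection = -np.log(I / I0)
print(round(projection, 4))                # recovers sum(mu * d) = 0.325
```

CT reconstruction then inverts a large set of such line integrals, measured at many angles, to recover the map of mu.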

Figure 23. Schematic drawing of a typical CT scanner installation, consisting of (1) control console, (2) gantry stand, (3) patient table, (4) head holder, and (5) laser imager. (Courtesy of Picker International, Inc.)

One special-purpose scanner described below acquires the projection data for one tomographic image in 50 ms. A typical modern CT scanner is shown in Fig. 23, and typical CT images are shown in Fig. 24.

Figure 24. Typical CT images of (a) brain, (b) head showing orbits, (c) chest showing lungs, and (d) abdomen.

The fundamental task of CT systems is to make an extremely large number (approximately 500,000) of highly accurate measurements of x-ray transmission through the patient in a precisely controlled geometry. A basic system generally consists of a gantry, a patient table, a control console, and a computer. The gantry contains the x-ray source, x-ray detectors, and the data-acquisition system (DAS).
4.2.2 Data-Acquisition Geometries

Projection data may be acquired in one of several possible geometries described below, based on the scanning configuration, scanning motions, and detector arrangement. The evolution of these geometries is described in terms of generations, as illustrated in Fig. 25, and reflects the historical development [Newton and Potts, 1981; Seeram, 1994]. Current CT scanners use either third-, fourth-, or fifth-generation geometries, each having its own pros and cons.

Figure 25. Four generations of CT scanners illustrating the parallel- and fan-beam geometries [Robb, 1982].
4.2.3 First Generation: Parallel-Beam Geometry

Parallel-beam geometry is the simplest technically and the easiest with which to understand the important CT principles. Multiple measurements of x-ray transmission are obtained using a single highly collimated x-ray pencil beam and detector. The beam is translated in a linear motion across the patient to obtain a projection profile. The source and detector are then rotated about the patient isocenter by approximately 1°, and another projection profile is obtained. This translate-rotate scanning motion is repeated until the source and detector have been rotated by 180°. The highly collimated beam provides excellent rejection of radiation scattered in the patient; however, the complex scanning motion results in long (approximately 5-min) scan times. This geometry was used by Hounsfield in his original experiments [Hounsfield, 1980] but is not used in modern scanners.
4.2.4 Second Generation: Fan Beam, Multiple Detectors

Scan times were reduced to approximately 30 s with the use of a fan beam of x-rays and a linear detector array. A translate-rotate scanning motion was still employed; however, a larger rotation increment could be used, which resulted in shorter scan times. The reconstruction algorithms are slightly more complicated than their first-generation counterparts because they must handle fan-beam projection data.
4.2.5 Third Generation: Fan Beam, Rotating Detectors

Third-generation scanners were introduced in 1976. A fan beam of x-rays is rotated 360° around the isocenter. No translation motion is used; however, the fan beam must be wide enough to completely contain the patient. A curved detector array consisting of several hundred independent detectors is mechanically coupled to the x-ray source, and both rotate together. As a result, these rotate-only motions acquire projection data for a single image in as little as 1 s. Third-generation designs have the advantage that thin tungsten septa can be placed between each detector in the array and focused on the x-ray source to reject scattered radiation.

4.2.6 Fourth Generation: Fan Beam, Fixed Detectors

In a fourth-generation scanner, the x-ray source and fan beam rotate about the isocenter, while the detector array remains stationary. The detector array consists of 600 to 4800 (depending on the manufacturer) independent detectors in a circle that completely surrounds the patient. Scan times are similar to those of third-generation scanners. The detectors are no longer coupled to the x-ray source and hence cannot make use of focused septa to reject scattered radiation. However, detectors are calibrated twice during each rotation of the x-ray source, providing a self-calibrating system. Third-generation systems are calibrated only once every few hours. Two detector geometries are currently used for fourth-generation systems: (1) a rotating x-ray source inside a fixed detector array and (2) a rotating x-ray source outside a nutating detector array. Figure 26 shows the major components in the gantry of a typical fourth-generation system using a fixed-detector array. Both third- and fourth-generation systems are commercially available, and both have been highly successful clinically. Neither can be considered an overall superior design.


Figure 26. The major internal components of a fourth-generation CT gantry are shown in a photograph with the gantry cover removed (upper) and identified in the line drawing (lower). (Courtesy of Picker International, Inc.)
4.2.7 Fifth Generation: Scanning Electron Beam

Fifth-generation scanners are unique in that the x-ray source becomes an integral part of the system design. The detector array remains stationary, while a high-energy electron beam is electronically swept along a semicircular tungsten strip anode. X-rays are produced at the point where the electron beam hits the anode, resulting in a source of x-rays that rotates about the patient with no moving parts [Boyd et al., 1979]. Projection data can be acquired in approximately 50 ms, which is fast enough to image the beating heart without significant motion artifacts [Boyd and Lipton, 1983]. An alternative fifth-generation design, called the dynamic spatial reconstructor (DSR) scanner, is in use at the Mayo Clinic [Ritman, 1980, 1990]. This machine is a research prototype and is not available commercially. It consists of 14 x-ray tubes, scintillation screens, and video cameras. Volume CT images can be produced in as little as 10 ms.

4.2.8 Spiral/Helical Scanning

The requirement for faster scan times, and in particular for fast multiple scans for three-dimensional imaging, has resulted in the development of spiral (helical) scanning systems [Kalender et al., 1990]. Both third- and fourth-generation systems achieve this using self-lubricating slip-ring technology (Fig. 27) to make the electrical connections with rotating components. This removes the need for power and signal cables which would otherwise have to be rewound between scans and allows for a continuous rotating motion of the x-ray fan beam. Multiple images are acquired while the patient is translated through the gantry in a smooth continuous motion rather than stopping for each image. Projection data for multiple images covering a volume of the patient can be acquired in a single breath hold at rates of approximately one slice per second. The reconstruction algorithms are more sophisticated because they must accommodate the spiral or helical path traced by the x-ray source around the patient, as illustrated in Fig. 28.
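The helical focal-spot trajectory can be sketched with a simple parametrization: circular gantry rotation in the transaxial plane plus steady table translation along the axis. The radius, rotation period, and table speed below are illustrative values, not those of any particular scanner.

```python
import math

def focal_spot_position(t, radius_m=0.6, rotation_period_s=1.0, table_speed_m_s=0.01):
    """Position of the x-ray focal spot in patient coordinates during a
    spiral scan: circular motion in the x-y plane, while the patient
    translation adds a steady z component, tracing a helix."""
    theta = 2.0 * math.pi * t / rotation_period_s   # gantry angle (rad)
    x = radius_m * math.cos(theta)
    y = radius_m * math.sin(theta)
    z = table_speed_m_s * t                          # table position relative to start
    return x, y, z

# After one full rotation the spot returns to the same (x, y) but has
# advanced along z by one "pitch" (table travel per rotation).
x0, y0, z0 = focal_spot_position(0.0)
x1, y1, z1 = focal_spot_position(1.0)
```

The pitch (table travel per rotation) is the key free parameter: a larger pitch covers more of the patient per breath hold at the cost of wider effective slices.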

Figure 27. Photograph of the slip rings used to pass power and control signals to the rotating gantry. (Courtesy of Picker International, Inc.)

Figure 28. Spiral scanning causes the focal spot to follow a spiral path around the patient as indicated. (Courtesy of Picker International, Inc.)
4.2.9 X-Ray System

The x-ray system consists of the x-ray source, detectors, and a data-acquisition system.

4.2.9.1 X-Ray Source

With the exception of one fifth-generation system described above, all CT scanners use bremsstrahlung x-ray tubes as the source of radiation. These tubes are typical of those used in diagnostic imaging and produce x-rays by accelerating a beam of electrons onto a target anode. The anode area from which x-rays are emitted, projected along the direction of the beam, is called the focal spot. Most systems have two possible focal spot sizes, approximately 0.5 x 1.5 mm and 1.0 x 2.5 mm. A collimator assembly is used to control the width of the fan beam between 1.0 and 10 mm, which in turn controls the width of the imaged slice.

The power requirements of these tubes are typically 120 kV at 200 to 500 mA, producing x-rays with an energy spectrum ranging between approximately 30 and 120 keV. All modern systems use high-frequency generators, typically operating between 5 and 50 kHz [Brunnett et al., 1990]. Some spiral systems use a stationary generator in the gantry, requiring high-voltage (120-kV) slip rings, while others use a rotating generator with lower-voltage (480-V) slip rings.

Production of x-rays in bremsstrahlung tubes is an inefficient process, and hence most of the power delivered to the tubes results in heating of the anode. A heat exchanger on the rotating gantry is used to cool the tube. Spiral scanning, in particular, places heavy demands on the heat-storage capacity and cooling rate of the x-ray tube. The intensity of the x-ray beam is attenuated by absorption and scattering processes as it passes through the patient. The degree of attenuation depends on the energy spectrum of the x-rays as well as on the average atomic number and mass density of the patient tissues.

4.2.9.2 X-Ray Detectors

X-ray detectors used in CT systems must (a) have a high overall efficiency to minimize the patient radiation dose, (b) have a large dynamic range, (c) be very stable with time, and (d) be insensitive to temperature variations within the gantry.
Three important factors contributing to the detector efficiency are geometric efficiency, quantum (also called capture) efficiency, and conversion efficiency [Villafana et al., 1987]. Geometric efficiency refers to the area of the detectors sensitive to radiation as a fraction of the total exposed area. Thin septa between detector elements to remove scattered radiation, or other insensitive regions, will degrade this value. Quantum efficiency refers to the fraction of incident x-rays on the detector that are absorbed and contribute to the measured signal. Conversion efficiency refers to the ability to accurately convert the absorbed x-ray signal into an electrical signal (but is not the same as the energy conversion efficiency). Overall efficiency is the product of the three, and it generally lies between 0.45 and 0.85. A value of less than 1 indicates a nonideal detector system and results in a required increase in patient radiation dose if image quality is to be maintained. The term dose efficiency sometimes has been used to indicate overall efficiency. Modern commercial systems use one of two detector types: solid-state or gas ionization detectors.
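The product relationship can be expressed directly. The specific values in the example are hypothetical, chosen only so that the result falls inside the typical range quoted above.

```python
def overall_detector_efficiency(geometric, quantum, conversion):
    """Overall (dose) efficiency is the product of the geometric,
    quantum (capture), and conversion efficiencies; each factor is a
    fraction between 0 and 1."""
    for eff in (geometric, quantum, conversion):
        if not 0.0 <= eff <= 1.0:
            raise ValueError("each efficiency must lie in [0, 1]")
    return geometric * quantum * conversion

# Hypothetical values: thin anti-scatter septa cost some geometric
# efficiency, while a solid-state scintillator captures and converts
# most of the incident x-rays.
eta = overall_detector_efficiency(geometric=0.80, quantum=0.95, conversion=0.90)
# eta = 0.684, inside the typical 0.45 to 0.85 range
```

Because the factors multiply, a modest loss in any single factor directly inflates the patient dose needed to maintain image quality.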
Solid-State Detectors.

Solid-state detectors consist of an array of scintillating crystals and photodiodes, as illustrated in Fig. 29. The scintillators generally are either cadmium tungstate (CdWO4) or a ceramic material made of rare earth oxides, although earlier scanners used bismuth germanate crystals with photomultiplier tubes. Solid-state detectors generally have very high quantum and conversion efficiencies and a large dynamic range.

Figure 29. (a) A solid-state detector consists of a scintillating crystal and photodiode combination. (b) Many such detectors are placed side by side to form a detector array that may contain up to 4800 detectors.
Gas Ionization Detectors. Gas ionization detectors, as illustrated in Fig. 30, consist of an array of chambers containing compressed gas (usually xenon at up to 30 atm pressure). A high voltage is applied to tungsten septa between chambers to collect ions produced by the radiation. These detectors have excellent stability and a large dynamic range; however, they generally have a lower quantum efficiency than solid-state detectors.

Figure 30. Gas ionization detector. Gas ionization detector arrays consist of high-pressure gas in multiple chambers separated by thin septa. A voltage is applied between alternating septa. The septa also act as electrodes and collect the ions created by the radiation, converting them into an electrical signal.

4.2.10 Computer System

Various computer systems are used by manufacturers to control system hardware, acquire the projection data, and reconstruct, display, and manipulate the tomographic images. A typical system is illustrated in Fig. 31; it uses 12 independent processors connected by a 40-Mbyte/s multibus. Multiple custom array processors are used to achieve a combined computational speed of 200 MFLOPS (million floating-point operations per second) and a reconstruction time of approximately 5 s to produce an image on a 1024 x 1024 pixel display. A simplified UNIX operating system is used to provide a multitasking, multiuser environment to coordinate tasks.

Figure 31. The computer system controls the gantry motions, acquires the x-ray transmission measurements, and reconstructs the final image. The system shown here uses 12 68000-family CPUs. (Courtesy of Picker International, Inc.)
4.2.11 Summary

Computed tomography revolutionized medical radiology in the early 1970s. Since that time, CT technology has developed dramatically, taking advantage of developments in computer hardware and detector technology. Modern systems acquire the projection data required for one tomographic image in approximately 1 s and present the reconstructed image on a 1024 x 1024 matrix display within a few seconds. The images are high-quality tomographic maps of the x-ray linear attenuation coefficient of the patient tissues.

4.3 Magnetic Resonance Imaging (MRI)

Magnetic resonance imaging (MRI) is a clinically important medical imaging modality due to its exceptional soft-tissue contrast. MRI was invented in the early 1970s. The first commercial scanners appeared about 10 years later. Noninvasive MRI studies are now supplanting many conventional invasive procedures. A 1990 study found that the principal applications for MRI are examinations of the head (40%), spine (33%), bone and joints (17%), and the body (10%). The percentage of bone and joint studies was growing in 1990. Although typical imaging studies range from 1 to 10 minutes, new fast imaging techniques acquire images in less than 50 ms. MRI research involves fundamental tradeoffs between resolution, imaging time, and signal-to-noise ratio (SNR).
4.3.1 Fundamentals of MRI

Magnetic resonance imaging exploits the existence of induced nuclear magnetism in the patient. Materials with an odd number of protons or neutrons possess a weak but observable nuclear magnetic moment. Most commonly protons (1H) are imaged, although carbon (13C), phosphorus (31P), sodium (23Na), and fluorine (19F) are also of significant interest. The nuclear moments are normally randomly oriented, but they align when placed in a strong magnetic field. Typical field strengths for imaging range between 0.2 and 1.5 T, although spectroscopic and functional imaging work is often performed at higher field strengths. The nuclear magnetization is very weak; the ratio of the induced magnetization to the applied field is only about 4 × 10⁻⁹.

MRI scanners use the technique of nuclear magnetic resonance (NMR) to induce and detect a very weak radiofrequency signal that is a manifestation of nuclear magnetism. The term nuclear magnetism refers to the weak magnetic properties exhibited by some materials as a consequence of the nuclear spin associated with their atomic nuclei. In particular, the proton, which is the nucleus of the hydrogen atom, possesses a nonzero nuclear spin and is an excellent source of NMR signals. The human body contains enormous numbers of hydrogen atoms, especially in water (H2O) and lipid molecules.

The patient to be imaged must be placed in an environment in which several different magnetic fields can be simultaneously or sequentially applied to elicit the desired NMR signal. Every MRI scanner utilizes a strong static field magnet in conjunction with a sophisticated set of gradient coils and radiofrequency coils. The gradients and the radiofrequency components are switched on and off in a precisely timed pattern, or pulse sequence. Different pulse sequences are used to extract different types of data from the patient. MR images are characterized by excellent contrast between the various forms of soft tissues within the body.
For patients who have no ferromagnetic foreign bodies within them, MRI scanning appears to be perfectly safe and can be repeated as often as necessary without danger. This provides one of the major advantages of MRI over conventional X-ray and computed tomographic (CT) scanners. The NMR signal is not blocked at all by regions of air or bone within the body, which provides a significant advantage over ultrasound imaging. Also, unlike the case of nuclear medicine scanning, it is not necessary to add radioactive tracer materials to the patient.

4.3.2 Fundamentals of MRI Instrumentation

Three types of magnetic fields (main or static fields, gradient fields, and radiofrequency (RF) fields) are required in MRI scanners. Most MRI hardware engineering is concerned with producing and controlling these various forms of magnetic fields. The successful implementation of MRI requires a two-way flow of information between analog and digital formats. The main magnet, the gradient and RF coils, and the gradient and RF power supplies operate in the analog domain. The digital domain is centered on a general-purpose computer that is used to provide control information (signal timing and amplitude) to the gradient and RF amplifiers, to process time-domain MRI signal data returning from the receiver, and to drive image display and storage systems. The computer also provides miscellaneous control functions, such as permitting the operator to control the position of the patient table.
4.3.3 Static Field Magnets

The main field magnet is required to produce an intense and highly uniform, static magnetic field over the entire region to be imaged. To be useful for imaging purposes, this field must be extremely uniform in space and constant in time. In practice, the spatial variation of the main field of a whole-body scanner must be less than about 1 to 10 parts per million (ppm) over a region approximately 40 cm in diameter. Achieving these high levels of homogeneity requires careful attention to magnet design and to manufacturing tolerances. The temporal drift of the field strength is normally required to be less than 0.1 ppm/h.

Two units of magnetic field strength are now in common use. The gauss (G) has a long historical usage and is firmly embedded in the older scientific literature. The tesla (T) is a more recently adopted unit, but it is part of the SI system of units and, for this reason, is generally preferred. The tesla is a much larger unit than the gauss: 1 T corresponds to 10,000 G. The magnitude of the earth's magnetic field is about 0.05 mT. The static magnetic fields of modern MRI scanners are most commonly in the range of 0.5 to 1.5 T; useful scanners, however, have been built using the entire range from 0.02 to 8 T.

The signal-to-noise ratio (SNR) is the ratio of the NMR signal voltage to the ever-present noise voltages that arise within the patient and within the electronic components of the receiving system. The SNR is one of the key parameters that determine the performance capabilities of a scanner. The maximum available SNR increases linearly with field strength. This improvement in SNR with increasing field strength is the major reason that so much effort has gone into producing high-field magnets for MRI systems. To produce the highly uniform field required for MRI, it is necessary to more or less surround the patient with a magnet.
The main field magnet must be large enough, therefore, to effectively surround the patient; in addition, it must meet other stringent performance requirements. For these reasons, the main field magnet is the most important determinant of the cost, performance, and appearance of an MRI scanner. Four different classes of main magnets have been used in MRI scanners: (1) permanent magnets, (2) electromagnets, (3) resistive magnets, and (4) superconducting magnets.

Figure 32. Schematic drawing of a superconducting magnet. The main magnet coils and the superconducting shim coils are maintained at liquid helium temperature. A computer-controlled table is used to advance the patient into the region of imaging.

Six coils of superconducting wire are connected in series and carry an intense current, on the order of 200 A, to produce the 1.5-T magnetic field at the magnet's center. The diameter of the coils is about 1.3 m, and the total length of wire is about 65 km (40 miles). The entire length of this wire must be without any flaws, such as imperfect welds, that would interrupt the superconducting properties. If the magnet wire has no such flaws, the magnet can be operated in the persistent mode; that is, once the current is established, the terminals may be connected together, and a constant persistent current will flow indefinitely so long as the temperature of the coils is maintained below the superconducting transition temperature. This temperature is about 10 K for niobium-titanium wire. The coils are kept at this low temperature by encasing them in a double-walled cryostat (analogous to a Thermos bottle) that permits them to be immersed in liquid helium at a temperature of 4.2 K. The gradual boiling of liquid helium caused by inevitable heat leaks into the cryostat requires that the helium be replaced on a regular schedule.
4.3.4 Gradient Coils

Three gradient fields, one each for the x, y, and z directions, are used to code position information into the MRI signal and to permit the imaging of thin anatomic slices. Each has its own power supply and is under independent computer control. Ordinarily, the most practical method for constructing the gradient coils is to wind them on a cylindrical coil form that surrounds the patient and is located inside the warm bore of the magnet.

The generation of MR images requires that a rapid sequence of time-dependent gradient fields (on all three axes) be applied to the patient. For example, the commonly used technique of spin-warp imaging utilizes a slice-selection gradient pulse to select the spins in a thin (3 to 10 mm) slice of the patient. This, in turn, requires that the currents in the three gradient coils be rapidly switched by computer-controlled power supplies. The rate at which gradient currents can be switched is an important determinant of the imaging capabilities of a scanner. In typical scanners, the gradient coils have an electrical resistance of about 1 Ω and an inductance of about 1 mH, and the gradient field can be switched from 0 to 10 mT/m (1 G/cm) in about 0.5 ms. The current must be switched from 0 to about 100 A. This places very strong demands on the power supplies, and it is often necessary to use water cooling to prevent overheating the gradient coils.
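The demand these figures place on the power supplies can be estimated by treating the coil as a simple series R-L load, a deliberate simplification that ignores eddy currents and amplifier details.

```python
def gradient_drive_voltage(inductance_h, resistance_ohm, delta_i_a, rise_time_s):
    """Rough estimate of the amplifier voltage needed to ramp a gradient
    coil: the inductive term L*dI/dt for a linear ramp, plus the
    resistive drop I*R at the end of the ramp (series R-L model)."""
    inductive = inductance_h * delta_i_a / rise_time_s
    resistive = delta_i_a * resistance_ohm
    return inductive + resistive

# Values quoted in the text: L ~ 1 mH, R ~ 1 ohm, 0 -> 100 A in 0.5 ms.
v = gradient_drive_voltage(1e-3, 1.0, 100.0, 0.5e-3)
# ~300 V: 200 V to drive the inductance plus 100 V across the resistance
```

The 10-kW-scale peak power implied by 300 V at 100 A is why water cooling of the coils is often necessary.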
4.3.5 Radiofrequency Coils

Radiofrequency (RF) coils are components of every scanner and are used for two essential purposes: transmitting and receiving signals at the resonant frequency of the protons within the patient. The precession occurs at the Larmor frequency of the protons, which is proportional to the static magnetic field. At 1 T this frequency is 42.58 MHz. Thus, over the range of field strengths currently used in whole-body scanners, 0.02 to 4 T, the operating frequency ranges from 0.85 to 170.3 MHz. For the commonly used 1.5-T scanners, the operating frequency is 63.86 MHz.
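The operating frequencies quoted above all follow from the same proportionality, f = (gamma/2*pi) * B0, using the 42.58 MHz/T figure for protons:

```python
GAMMA_BAR_PROTON_MHZ_PER_T = 42.58  # proton gyromagnetic ratio / (2*pi), as quoted above

def larmor_frequency_mhz(b0_tesla):
    """Larmor (resonance) frequency of protons in a static field B0:
    f = (gamma / 2*pi) * B0, in MHz for B0 in tesla."""
    return GAMMA_BAR_PROTON_MHZ_PER_T * b0_tesla

# The frequencies quoted in the text, to within rounding:
f_1t = larmor_frequency_mhz(1.0)    # ~42.58 MHz at 1 T
f_15t = larmor_frequency_mhz(1.5)   # ~63.9 MHz for 1.5-T scanners
f_4t = larmor_frequency_mhz(4.0)    # ~170.3 MHz at 4 T
```

The linearity is why every change of main-field strength requires retuning (or replacing) the RF coils.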

Three classes of RF coils (body coils, head coils, and surface coils) are commonly used in MRI scanners. These coils are located in the space between the patient and the gradient coils. Conducting shields just inside the gradient coils are used to prevent electromagnetic coupling between the RF coils and the rest of the scanner. Head and body coils are large enough to surround the region being imaged and are designed to produce an RF magnetic field that is uniform across the region to be imaged. Body coils are usually constructed on cylindrical coil forms and have a large enough diameter (50 to 60 cm) to entirely surround the patient's body.

Figure 33. Birdcage resonator.

This is a head coil designed to operate in a 4-T scanner at 170 MHz. Quadrature excitation and receiver performance are achieved by using two adjacent ports with a 90-degree phase shift between them. A common practice is to use separate coils for the transmitter and receiver functions. This permits the use of a large coil, such as the body coil, with a uniform excitation pattern as the transmitter, and a small surface coil optimized to the anatomic region being imaged, such as the spine, as the receiver. When this two-coil approach is used, it is important to provide for electronic decoupling of the two coils, because they are tuned to the same frequency and will tend to have harmful mutual interactions.
4.3.6 Functional MRI

Functional magnetic resonance imaging (fMRI), a technique that images intrinsic blood signal change with magnetic resonance (MR) imagers, has in the last 3 years become one of the most successful tools used to study blood flow and perfusion in the brain. Since changes in neuronal activity are accompanied by focal changes in cerebral blood flow (CBF), blood volume (CBV), blood oxygenation, and metabolism, these physiologic changes can be used to produce functional maps of mental operations. fMRI results using intrinsic blood contrast were first presented in public at the Tenth Annual Meeting of the Society of Magnetic Resonance in Medicine in August of 1991. The visual cortex activation work was carried out with the injection of the contrast agent gadolinium-DTPA; the use of an external contrast agent allows the study of changes in blood volume. The popularity of fMRI is based on many factors. It is safe and totally noninvasive. It can be acquired in single subjects for a scanning duration of several minutes, and it can be repeated on the same subjects as many times as necessary.

Figure 34. Functional MR image demonstrating activation of the primary visual cortex.

Figure 35. Functional MRI mapping of motor cortex for preoperative planning. This three-dimensional rendering of the brain represents fusion of functional and structural anatomy. Brain is viewed from the top. A tumor is shown in the left hemisphere, near the midline. The other areas depict sites of functional activation during movement of the right hand, right foot, and left foot. The right foot cortical representation is displaced by tumor mass effect from its usual location.
4.4 Positron-Emission Tomography (PET)

4.4.1 Background

The history of positron-emission tomography (PET) can be traced to the early 1950s, when workers in Boston first realized the medical imaging possibilities of a particular class of radioactive substances. It was recognized then that the high-energy photons produced by annihilation of the positron from positron-emitting isotopes could be used to describe, in three dimensions, the physiologic distribution of tagged chemical compounds. After two decades of moderate technological developments by a few research centers, widespread interest and broadly based research activity began in earnest following the development of sophisticated reconstruction algorithms and improvements in detector technology. By the mid-1980s, PET had become a tool for medical diagnosis and for dynamic studies of human metabolism.

Figure 36. The MRI image shows the arteriovenous malformation (AVM) as an area of signal loss due to blood flow. The PET image shows the AVM as a region devoid of glucose metabolism and also shows decreased metabolism in the adjacent frontal cortex. This is a metabolic effect of the AVM on the brain and may explain some of the patient's symptoms.

Today, because of its million-fold sensitivity advantage over magnetic resonance imaging (MRI) in tracer studies and its chemical specificity, PET is used to study neuroreceptors in the brain and other body tissues. In contrast, MRI has exquisite resolution for anatomic (Fig. 36) and flow studies as well as unique attributes of evaluating the chemical composition of tissue, but in the millimolar range rather than the nanomolar range of much of the receptor proteins in the body. Clinical studies include tumors of the brain, breast, lungs, lower gastrointestinal tract, and other sites. Additional clinical uses include Alzheimer's disease, Parkinson's disease, epilepsy, and coronary artery disease affecting heart muscle metabolism and flow. Its use has added immeasurably to our current understanding of flow, oxygen utilization, and the metabolic changes that accompany disease and that change during brain stimulation.
4.4.2 PET Theory

PET imaging begins with the injection of a metabolically active tracer: a biologic molecule that carries with it a positron-emitting isotope (e.g., 11C, 13N, 15O, or 18F). Over a few minutes, the isotope accumulates in an area of the body for which the molecule has an affinity. As an example, glucose labeled with 11C, or a glucose analogue labeled with 18F, accumulates in the brain or in tumors, where glucose is used as the primary source of energy. The radioactive nuclei then decay by positron emission. In positron (positive electron) emission, a nuclear proton changes into a positive electron and a neutron. The atom maintains its atomic mass but decreases its atomic number by 1. The ejected positron combines with an electron almost instantaneously, and these two particles undergo the process of annihilation. The energy associated with the masses of the positron and electron particles is 1.022 MeV, in accordance with the energy-mass equivalence E = mc², where c is the velocity of light.

This energy is divided equally between two photons that fly away from one another at a 180° angle. Each photon has an energy of 511 keV. These high-energy gamma rays emerge from the body in opposite directions, to be detected by an array of detectors that surrounds the patient (Fig. 37). When two photons are recorded simultaneously by a pair of detectors, the annihilation event that gave rise to them must have occurred somewhere along the line connecting the detectors. Of course, if one of the photons is scattered, then the line of coincidence will be incorrect. After 100,000 or more annihilation events are detected, the distribution of the positron-emitting tracer is calculated by tomographic reconstruction procedures. PET reconstructs a two-dimensional (2D) image from the one-dimensional projections seen at different angles. Three-dimensional (3D) reconstructions also can be done using 2D projections from multiple angles.
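The 511-keV figure follows directly from the electron rest mass via E = mc²; a quick check using rounded physical constants:

```python
# Physical constants (CODATA values, rounded)
ELECTRON_MASS_KG = 9.109e-31
SPEED_OF_LIGHT_M_S = 2.998e8
JOULES_PER_KEV = 1.602e-16

def annihilation_photon_energy_kev():
    """Energy of each annihilation photon: the rest energy of one
    electron (or positron), E = m*c**2, since the 1.022-MeV total is
    split equally between the two back-to-back photons."""
    e_joules = ELECTRON_MASS_KG * SPEED_OF_LIGHT_M_S ** 2
    return e_joules / JOULES_PER_KEV

e_kev = annihilation_photon_energy_kev()
# ~511 keV per photon, i.e. 1.022 MeV for the pair
```

This fixed, well-defined energy is what allows PET detectors and their pulse-height discrimination to be optimized for a single photon energy.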

Figure 37. The physical basis of positron-emission tomography. Positrons emitted by tagged metabolically active molecules annihilate nearby electrons and give rise to a pair of high-energy photons. The photons fly off in nearly opposite directions and thus serve to pinpoint their source. The biologic activity of the tagged molecule can be used to investigate a number of physiologic functions, both normal and pathologic.

PET Detectors.

Efficient detection of the annihilation photons from positron emitters is usually provided by the combination of a crystal, which converts the high-energy photons to visible-light photons, and a photomultiplier tube that produces an amplified electric current pulse proportional to the number of light photons interacting with the photocathode. The fact that imaging system sensitivity is proportional to the square of the detector efficiency leads to a very important requirement that the detector be nearly 100% efficient. Thus other detector systems, such as plastic scintillators or gas-filled wire chambers, with typical individual efficiencies of 20% or less, would result in a coincidence efficiency of only 4% or less.
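The squaring argument is simple but worth making explicit: both photons of an annihilation pair must be captured, so the single-detector efficiency enters twice.

```python
def coincidence_efficiency(single_detector_efficiency):
    """Coincidence (pair) detection efficiency: both 511-keV photons
    must be detected, so the single-detector efficiency is squared."""
    return single_detector_efficiency ** 2

# The comparison made in the text: a near-ideal crystal detector versus
# a 20%-efficient plastic scintillator or gas-filled wire chamber.
good = coincidence_efficiency(0.95)   # ~0.90 of pairs recorded
poor = coincidence_efficiency(0.20)   # only 0.04 of pairs recorded
```

This quadratic penalty is why a drop from 95% to 20% single-detector efficiency costs a factor of more than 20 in recorded coincidences, not just a factor of ~5.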

Figure 38. Multilayer PET camera geometry, showing the lead shields, tungsten septa, and detector ring.

Most modern PET cameras are multilayered, with 15 to 47 levels or transaxial layers to be reconstructed (Fig. 38). The lead shields prevent activity from the patient from causing spurious counts in the tomograph ring, while the tungsten septa reject some of the events in which one (or both) of the 511-keV photons suffers a Compton scatter in the patient. The sensitivity of this design is improved by collection of data from cross-planes (Fig. 38). The arrangement of scintillators and phototubes is shown in Fig. 39. The individually coupled design is capable of very high resolution, and because the design is very parallel (all the photomultiplier tubes and scintillator crystals operate independently), it is capable of very high data throughput. The disadvantages of this type of design are the requirement for many expensive photomultiplier tubes and, additionally, that connecting round photomultiplier tubes to rectangular scintillation crystals leads to problems of packing rectangular crystals and circular phototubes of sufficiently small diameter to form a solid ring.

The contemporary method of packing many scintillators for 511 keV around the patient is to use what is called a block detector design. A block detector couples several photomultiplier tubes to a bank of scintillator crystals and uses a coding scheme to determine the crystal of interaction. In the two-layer block (Fig. 39), five photomultiplier tubes are coupled to eight scintillator crystals.
Whenever one of the outside four photomultiplier tubes fires, a 511-keV photon has interacted in one of the two crystals attached to that photomultiplier tube, and the center photomultiplier tube is then used to determine whether it was the inner or outer crystal. This is known as a digital coding scheme, since each photomultiplier tube is either hit or not hit and the crystal of interaction is determined by a digital mapping of the hit pattern. Block detector designs are much less expensive and practical to form into a multilayer camera. However, errors in the decoding scheme reduce the spatial resolution, and since the entire block is dead whenever one of its member crystals is struck by a photon, the dead time is worse than with individual coupling. The electronics necessary to decode the output of the block are straightforward but more complex than that needed for the individually coupled design.
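The digital coding scheme can be sketched as a small decoder. The crystal numbering and the convention that center-tube light indicates the inner crystal are assumptions made for illustration; they are not the actual mapping of any commercial block.

```python
def decode_two_layer_block(outer_pmt_hits, center_pmt_hit):
    """Digital decoding for the two-layer block described above
    (hypothetical indexing): four outer photomultiplier tubes each view
    a pair of crystals, and the center tube disambiguates between the
    inner and outer crystal of that pair.  Returns a crystal index 0..7,
    or None if the hit pattern is invalid (e.g., two outer tubes fired)."""
    fired = [i for i, hit in enumerate(outer_pmt_hits) if hit]
    if len(fired) != 1:
        return None                      # ambiguous or empty pattern
    pair = fired[0]                      # which of the four crystal pairs
    layer = 1 if center_pmt_hit else 0   # center light => inner crystal (assumed)
    return 2 * pair + layer

# Example: outer tube 2 fired together with the center tube.
crystal = decode_two_layer_block([False, False, True, False], True)
```

The all-or-nothing nature of each tube's contribution is what makes this coding "digital"; the analog scheme discussed next replaces the hit pattern with light-sharing ratios.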

Most block detector coding schemes use an analog coding scheme, where the ratio of light output is used to determine the crystal of interaction. In the example above, four photomultiplier tubes are coupled to a block of BGO that has been partially sawed through to form 64 individual crystals. The depth of the cuts is critical; that is, deep cuts tend to focus the scintillation light onto the face of a single photomultiplier tube, while shallow cuts tend to spread the light over all four photomultiplier tubes. This type of coding scheme is more difficult to implement than digital coding, since analog light ratios place more stringent requirements on photomultiplier tube linearity and uniformity as well as scintillator crystal uniformity. However, most commercial PET cameras use an analog coding scheme because it is much less expensive, due to the lower number of photomultiplier tubes required.

Figure 39. The arrangement of scintillators and phototubes: an individually coupled design, in which each scintillator crystal has its own photomultiplier tube, and a two-layer block detector, in which five photomultiplier tubes are coupled to eight scintillator crystals.

4.4.3 Physical Factors Affecting Resolution

The factors that affect the spatial resolution of PET tomographs are shown in Fig. 40. The size of the detector is critical in determining the system's geometric resolution. If the block design is used, there is a degradation in this geometric resolution by 2.2 mm for BGO. The degradation is probably due to the limited light output of BGO and the ratio of crystals (cuts) per phototube. The angle between the paths of the annihilation photons can deviate from 180° as a result of some residual kinetic motion (Fermi motion) at the time of annihilation. The effect on resolution of this deviation increases as the detector ring diameter increases, so that eventually this factor can have a significant effect.

Figure 40. Factors contributing to the resolution of the PET tomograph. The contribution most accessible to further reduction is the size of the detector crystals.

The distance the positron travels after being emitted from the nucleus and before annihilation causes a deterioration in spatial resolution. This distance depends on the particular nuclide. For example, the range of blurring for 18F, the isotope used for many of the current PET studies, is quite small compared with that of the other isotopes. Combining values for these factors for the PET-600 tomograph, we can estimate a detector-pair spatial resolution of 2.0 mm and a reconstructed image resolution of 2.6 mm. The measured resolution of this system is 2.6 mm, but most commercially available tomographs use a block detector design (Fig. 39), and the resolution of these systems is above 5 mm. The evolution of resolution improvement is shown in Fig. 41.

Figure 41. The evolution of resolution. Over the past decade, the resolving power of PET has improved from about 9 to 2.6 mm. This improvement is graphically illustrated by the increasing success with which one is able to resolve hot spots of an artificial sample that are detected and imaged by the tomographs.

The resolution improvements discussed above pertain to results for the center, or axis, of the tomograph. The resolution at the edge of the object (e.g., the patient) will be worse by a significant amount due to two factors. First, the path of a photon from an off-center annihilation event typically traverses more than one detector crystal, as shown in Fig. 42. This results in an elongation of the resolution spread function along the radius of the transaxial plane. The loss of resolution depends on the crystal density and the diameter of the tomograph detector ring. For a 60-cm-diameter system, the resolution can deteriorate by a factor of 2 from the axis to 10 cm off-axis.

Figure 42. Resolution astigmatism in detecting off-center events.


Because annihilation photons can penetrate crystals to different depths, the resolution is not equal in all directions, particularly at the edge of the imaging field. This problem of astigmatism will be taken into account in future PET instrumentation. The coincidence circuitry must be able to determine coincident events with 10- to 20-ns resolution for each crystal-crystal combination (i.e., chord). The timing requirement is set jointly by the time of flight across the detector ring (4 ns) and the crystal-to-crystal resolving time (typically 3 ns). The most stringent requirement, however, is the vast number of chords in which coincidences must be determined (over 1.5 million in a 24-layer camera with septa in place and 18 million with the septa removed). It is obviously impractical to have an individual coincidence circuit for each chord, so tomograph builders use a parallel organization to solve this problem. A typical method is to use a high-speed clock (typically 200 MHz) to mark the arrival time of each 511-keV photon and a digital coincidence processor to search for coincident pairs of detected photons based on this time marker. This search can be done extremely quickly by having multiple sorters working in parallel. The maximum event rate is also quite important, especially in septaless systems. The maximum rate in a single detector crystal is limited by the dead time due to the scintillator fluorescent lifetime (typically about 1 μs per event), but as the remaining scintillator crystals are still available, the instrument as a whole supports much higher event rates (roughly the number of crystals divided by 1 μs). Combining crystals into contiguous blocks reduces the maximum event rate because the fluorescent lifetime applies to the entire block, and a fraction of the tomograph is dead after each event.
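The timestamp-based coincidence search described above can be sketched as follows; the event format and the 12-ns window are assumptions for illustration only:

```python
def find_coincidences(events, window_ns=12.0):
    """events: list of (timestamp_ns, crystal_id), assumed sorted by time.
    Returns crystal pairs whose arrival times differ by less than the window."""
    pairs = []
    for i, (t1, c1) in enumerate(events):
        j = i + 1
        # Only neighbors in time can be coincident, so the scan is short
        while j < len(events) and events[j][0] - t1 < window_ns:
            pairs.append((c1, events[j][1]))
            j += 1
    return pairs

events = [(0.0, 17), (5.0, 402), (100.0, 250), (103.0, 91)]
print(find_coincidences(events))  # two coincident pairs
```

In a real tomograph this sorting runs in parallel hardware, but the logic per sorter is essentially this windowed scan over time-ordered events.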


5. Ultrasound
An ultrasound transducer generates acoustic waves by converting magnetic, thermal, and electrical energy into mechanical energy. The most efficient technique for medical ultrasound uses the piezoelectric effect, which was first demonstrated in 1880 by Jacques and Pierre Curie. They applied a stress to a quartz crystal and detected an electrical potential across opposite faces of the material. The Curies also discovered the inverse piezoelectric effect by applying an electric field across the crystal to induce a mechanical deformation. In this manner, a piezoelectric transducer converts an oscillating electric signal into an acoustic wave, and vice versa. Many significant advances in ultrasound imaging have resulted from innovation in transducer technology. One such instance was the development of linear-array transducers. Previously, ultrasound systems had made an image by manually moving the transducer across the region of interest. Even the faster scanners had required several seconds to generate an ultrasound image, and as a result, only static targets could be scanned. On the other hand, if the acoustic beam could be scanned rapidly, clinicians could visualize moving targets such as a beating heart.
5.1 Transducers
To implement real-time imaging, researchers developed new types of transducers that rapidly steer the acoustic beam:
1. Piston-shaped transducers were designed to wobble or rotate about a fixed axis to mechanically steer the beam through a sector-shaped region.
2. Linear sequential arrays were designed to electronically focus the beam in a rectangular image region.
3. Linear phased-array transducers were designed to electronically steer and focus the beam at high speed in a sector image format.
5.1.1 Transducer Materials

Ferroelectric materials strongly exhibit the piezoelectric effect, and they are ideal materials for medical ultrasound. For many years, the ferroelectric ceramic lead-zirconate-titanate (PZT) has been the standard transducer material for medical ultrasound, in part because of its high electromechanical conversion efficiency and low intrinsic losses. The properties of PZT can be adjusted by modifying the ratio of zirconium to titanium and introducing small amounts of other substances. The disadvantages of PZT include its high acoustic impedance (Z = 30 MRayl) compared with body tissue (Z = 1.5 MRayl) and the presence of lateral modes in array elements. One or more acoustic matching layers can largely compensate for the acoustic impedance mismatch. Other piezoelectric materials are used for various applications. Polyvinylidene difluoride (PVDF) is a ferroelectric polymer that has been used effectively in high-frequency transducers. The copolymer of PVDF with trifluoroethylene has an improved electromechanical conversion efficiency.
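The role of the matching layer can be illustrated with the classical quarter-wave rule, Zm = sqrt(Z1 * Z2), using the impedance values quoted above (units assumed to be MRayl); this is a standard textbook relation, not a value from the text:

```python
import math

z_pzt, z_tissue = 30.0, 1.5   # acoustic impedances from the text, in MRayl
z_match = math.sqrt(z_pzt * z_tissue)   # quarter-wave matching-layer impedance
# Fraction of incident energy reflected at a direct PZT/tissue boundary
r_direct = ((z_pzt - z_tissue) / (z_pzt + z_tissue)) ** 2
print(round(z_match, 2), round(r_direct, 2))
```

Without a matching layer roughly 82% of the energy is reflected at the boundary, which is why one or more intermediate layers near 6.7 MRayl are used.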

Lead-magnesium-niobate (PMN) becomes piezoelectric when a large direct-current (dc) bias voltage is applied. It has a very large dielectric constant (> 20,000), resulting in a higher transducer capacitance and a lower electrical impedance.
5.2 Scanning with Array Transducers
Array transducers use the same principles as acoustic lenses to focus an acoustic beam. In both cases, variable delays are applied across the transducer aperture. With a sequential or phased array, however, the delays are electronically controlled and can be changed instantaneously to focus the beam in different regions. Linear arrays were first developed for radar, sonar, and radio astronomy, and they were implemented in a medical ultrasound system in 1968. Linear-array transducers have increased versatility over piston transducers. Electronic scanning involves no moving parts, and the focal point can be changed dynamically to any location in the scanning plane. The system can generate a wide variety of scan formats. The disadvantages of linear arrays are due to the increased complexity and higher cost of the transducers and scanners. For high-quality ultrasound images, many identical array elements are required (currently 128 and rising). The array elements are typically less than a millimeter on one side, and each has a separate connection to its own transmitter and receiver electronics.

Figure 43. Array-element configurations and the region scanned by the acoustic beam.


(a) A sequential linear array scans a rectangular region; (b) a curvilinear array scans a sector-shaped region; (c) a linear phased array scans a sector-shaped region; (d) a 1.5D array scans a sector-shaped region; (e) a 2D array scans a pyramidal-shaped region.
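The electronically controlled delays described above can be sketched for on-axis focusing; the element count, pitch, and focal depth below are illustrative assumptions, not parameters from the text:

```python
import math

def focus_delays(n_elements, pitch_m, focal_depth_m, c=1540.0):
    """Transmit delays (s) that focus a linear array on-axis at focal_depth_m.
    c is the assumed speed of sound in tissue (m/s)."""
    center = (n_elements - 1) / 2.0
    # Geometric path from each element to the focal point
    paths = [math.hypot((i - center) * pitch_m, focal_depth_m) for i in range(n_elements)]
    longest = max(paths)
    # Outer elements (longest path) fire first; shorter paths wait the difference
    return [(longest - p) / c for p in paths]

d = focus_delays(n_elements=5, pitch_m=0.3e-3, focal_depth_m=30e-3)
print(d.index(max(d)))  # the center element has the largest delay
```

Steering to an off-axis focus uses the same idea with the focal point moved laterally; changing the delay profile between transmits is what lets the array re-focus instantaneously.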
5.3 Ultrasonic Imaging
Ultrasonic imaging of the soft tissues of the body really began in the early 1970s. The development followed much the same sequence (and borrowed much of the terminology) as did radar and sonar, from initial crude single-line-of-sight displays (A-mode), to recording these side by side to build up recordings over time to show motion (M-mode), to finally sweeping the transducer either mechanically or electronically over many directions and building up two-dimensional views (B-mode or 2D).

For some heart valvular diseases, the preferred display format for diagnosis is still the M-mode, on which the speed of valve motions can be measured and the relations of valve motions to the electrocardiogram (ECG) are easily seen. Later, as 2D displays became available, ultrasound was applied more and more to imaging of the soft abdominal organs and in obstetrics. In this format, organ dimensions and structural relations are seen more easily, and since the images are now made in real time, motions of organs such as the heart are still well appreciated.

Figure 44. Schematic representation of the signal received from along a single line of sight in a tissue. The rectified voltage signals are displayed for A-mode.


Figure 45. Completed M-mode display obtained by showing the M-lines side by side. The motion of the heart walls and their thickening and thinning are well appreciated. Often the ECG or heart sounds are also shown in order to coordinate the motions of the heart with other physiologic markers.

Figure 46. Schematic representation of a heart and how a 2D image is constructed by scanning the transducer.


5.4 Blood Flow Measurement Using Ultrasound

In blood velocity estimation, the goal is not simply to estimate the mean target position and mean target velocity. The goal instead is to measure the velocity profile over the smallest region possible and to repeat this measurement quickly and accurately over the entire target. Therefore, the joint optimization of spatial, velocity, and temporal resolution is critical. In addition to the mean velocity, diagnostically useful information is contained in the volume of blood flowing through various vessels.

Figure 47. Operating environment for the estimation of blood velocity.
Current ultrasonic imaging systems operate in a pulse-echo (PE) or continuous-wave (CW) intensity mapping mode. In pulse-echo mode, a very short pulse is transmitted, and the reflected signal is analyzed. For a continuous-wave system, a lower-intensity signal is continuously transmitted into the body, and the reflected energy is analyzed. In both types of systems, an acoustic wave is launched along a specific path into the body, and the return from this wave is processed as a function of time.
5.5 Single Sample Volume Doppler Instruments
One type of system uses the Doppler effect to estimate velocity in a single volume of blood, known as the sample volume, which is designated by the system operator. The Doppler shift frequency from a moving target can be shown to equal 2fcv/c, where fc is the transducer center frequency in hertz, c is the speed of sound within tissue, and v is the velocity component of the blood cells toward or away from the transducer. These Doppler systems transmit a train of long pulses with a well-defined carrier frequency and measure the Doppler shift in the returned signal. The spectrum of Doppler frequencies is proportional to the distribution of velocities present in the sample volume. The sample volume is on a cubic millimeter scale for typical pulse-echo systems operating in the frequency range of 2 to 10 MHz. Therefore, a thorough cardiac or peripheral vascular examination requires a long period of time. In these systems, 64 to 128 temporal samples are acquired for each estimate. The spectrum of these samples is typically computed using a fast Fourier transform (FFT) technique. The range of velocities present within the sample volume can then be estimated.


The spectrum is scaled to represent velocity and plotted on the vertical axis. Subsequent spectral estimates are then calculated and plotted vertically adjacent to the first estimate.
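The FFT-based spectral estimate described above can be sketched as follows; the carrier frequency, pulse repetition frequency, and velocity are assumed example values, and the signal model is an idealized single-velocity return:

```python
import numpy as np

fc, c, prf = 5e6, 1540.0, 10e3   # assumed center frequency (Hz), sound speed (m/s), PRF (Hz)
n = 128                          # temporal samples per estimate, as in the text
v_true = 0.3                     # m/s, blood velocity toward the transducer

f_d = 2 * fc * v_true / c                # Doppler shift frequency (2*fc*v/c)
t = np.arange(n) / prf
samples = np.exp(2j * np.pi * f_d * t)   # idealized complex Doppler samples

power = np.abs(np.fft.fft(samples * np.hanning(n))) ** 2
freqs = np.fft.fftfreq(n, 1 / prf)
v_est = freqs[np.argmax(power)] * c / (2 * fc)   # velocity from the spectral peak
print(round(v_est, 3))
```

Real blood returns contain a distribution of velocities, so the spectrum has a peak with finite width rather than a single line; the displayed spectrogram plots this spectrum against time.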
5.6 Color Flow Mapping

In color flow mapping, a pseudo-color velocity display is overlaid on a 2D grayscale image. Simultaneous amplitude and velocity information is thus available for a 2D sector area of the body. The clinical advantage is a reduction in the examination time and the ability to visualize the velocity profile as a 2D map. The color flow map shows color-encoded velocities superimposed on the gray-scale image with the velocity magnitude indicated by the color bar on the side of the image. Motion toward the transducer is shown in yellow and red, and motion away from the transducer is shown in blue and green, with the range of colors representing a range of velocities to a maximum of 6 cm/s in each direction. Velocities above this limit would produce aliasing for the parameters used in optimizing the instrument for the display of ovarian flow. A velocity of 0 m/s would be indicated by black, as shown at the center of the color bar.


6. LASERS IN MEDICAL DIAGNOSTICS


Laser - Light Amplification by Stimulated Emission of Radiation
6.1 History
- The principles of the laser - by Townes and Schawlow in 1958
- First visible laser (single crystal of ruby surrounded by powerful flashlamps) - by Maiman in 1960 at Hughes Aircraft Corporation
- Helium-neon laser (first gas laser) - by Javan, Bennett, and Herriott at Bell Labs
6.2 Wavelengths of different lasers

Figure 48. Primary components of a laser (total reflecting mirror, partial reflecting mirror, active medium, energy source, laser beam).
All lasers contain three primary components:
- active medium - solid, liquid, gas
- excitation mechanism - by thermal, electrical or optical energy
- feedback mechanism - 2 mirrors
The generation of laser radiation takes place in the active, lasing medium (optical resonator). The properties of the medium determine the wavelength of the light produced (and also the power). The atoms or molecules of the lasing medium are normally (without external energy) in their ground state. These atoms or molecules are raised to a higher energy state by absorbing energy (as quanta of electromagnetic radiation) produced by the excitation mechanism of the laser (population inversion). There are three types of optical transition. In the stimulated emission case all properties of the new photon are identical, i.e. exact amplification.
6.3 Characteristics of a typical helium-neon laser

The amplification bandwidth of a laser is usually broader than the cavity mode spacing. As a consequence, many lasers simultaneously run on several cavity modes (multimode regime). For the monomode regime the threshold gain can be changed. The nature of the interaction of laser light with material can be described in terms of:
- reflection
- transmission
- scattering


- absorption
In order for light to exert an effect on any human tissue it has to be absorbed (therapy). For diagnostic purposes we may also use reflection, transmission and scattering of laser light.
6.4 Absorption characteristics of tissue constituents.

Different lasers must be used according to the absorption of the particular wavelength by the region of interest. For noninvasive optical monitoring:
- of the oxygen content of blood we use the absorption of hemoglobins (visible);
- to assess the concentration of glucose we use near-infrared radiation.
Biomedical laser beam delivery systems guide the radiation from the output mirror to the site of action on tissue. Two methods are in use:
- a flexible fused-silica (SiO2) optical fiber or light guide for wavelengths between 400 nm and 2100 nm and for small optical power;
- an articulated arm having beam-guiding mirrors for wavelengths greater than 2100 nm (for CO2 lasers) and for high power (pulsed lasers over 100 W).
Dielectric multilayer mirrors with high reflectivity (> 99.9%) are routinely used, held in kinematically adjustable mounts. The principle of optical transmission inside a fiber is total internal reflection of the radiation. The refractive index of the cladding material n1 must be smaller than that of the core n2 (n1 < n2). Total internal reflection occurs for any angle of incidence θ when θ > θc, where

sin θc = n1 / n2

or, in terms of the complementary angle θ'c (measured from the fiber axis),

cos θ'c = n1 / n2

Typical value: θm = 14 deg. Leakage of radiation:
- 0.3 dB/m at 400 nm
- 0.01 dB/m at 1000 nm
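The critical angle and the quoted leakage figures can be checked numerically; the core/cladding indices below are hypothetical fused-silica values chosen for illustration, not values from the text:

```python
import math

n_core, n_clad = 1.46, 1.44   # hypothetical core and cladding refractive indices
theta_c = math.degrees(math.asin(n_clad / n_core))   # critical angle from the normal
theta_comp = 90.0 - theta_c                          # complementary angle from the fiber axis
print(round(theta_comp, 1))

# Power fraction transmitted through a fiber of given length and leakage
def transmitted_fraction(alpha_db_per_m, length_m):
    return 10 ** (-alpha_db_per_m * length_m / 10)

print(round(transmitted_fraction(0.01, 2.0), 4))  # 0.01 dB/m at 1000 nm, 2 m fiber
```

With these assumed indices the acceptance half-angle comes out near 9.5 deg; a larger index step gives values closer to the 14 deg quoted above.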
6.5 Ophthalmology

The use of lasers in this area is well established because it was the first specialty to use such equipment. Specific lasers are useful for specific tasks depending on the ocular transmission characteristics at different wavelengths. Both far-infrared (1400 nm - 1 mm) and far-ultraviolet (200-295 nm) radiation will affect only the cornea at the front of the eye. Near-ultraviolet (295-400 nm) will penetrate into the eye as far as the lens.


Near-infrared (700-1400 nm) and all visible radiation (400-700 nm) will penetrate to the retina.
6.6 Holography
Holography may be used to study three-dimensional (3D) changes in the eye for diagnostic purposes. Holography is a technique similar to photography, but both the amplitude and the phase of the object wave are recorded rather than an image. When the hologram is illuminated with a beam similar to the original wave (a laser with the same wavelength) we get all the information about the recorded object. This 3D technique is useful for examining the retinal changes produced by increased intraocular pressure, which results in loss of the visual fields. Holograms can easily be stored.
6.7 Pulse Oximetry
Pulse oximetry provides easy-to-use, noninvasive and accurate measurement of real-time arterial oxygen saturation. According to John Severinghaus and Poul Astrup, pulse oximetry is the most significant technological advance ever made in monitoring the well-being and safety of patients during anesthesia, recovery and critical care. Pulse oximetry is based on the fractional change in light transmission during an arterial pulse at two different wavelengths - 650 nm (visible) and 850 nm (near-infrared). In this method the highly variable optical characteristics of tissue are eliminated.

The optical absorption spectra of hemoglobin in its oxygenated (O2Hb) and deoxygenated (RHb) states are different. Joseph D. Bronzino, The Biomedical Engineering Handbook, IEEE Press, 1995

The oxygen saturation of whole blood S can be derived from the following equation: S = A - B (a650 / a850), where a650 and a850 are the absorption coefficients of whole blood at the wavelengths of 650 and 850 nm, and A and B are constants related to O2Hb and RHb, respectively. In a typical pulse oximeter sensing configuration on a finger, light is emitted by the source (laser or light-emitting diode), diffusely scattered through the finger and detected on the opposite side by a photodetector.
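The saturation equation can be applied directly; the constants A and B below are hypothetical placeholders, since each device determines them empirically by calibration:

```python
def spo2(a650, a850, A=110.0, B=25.0):
    """Oxygen saturation (%) from absorption coefficients at 650 and 850 nm.
    A and B are hypothetical calibration constants, not values from the text."""
    return A - B * (a650 / a850)

print(spo2(0.5, 1.0))  # ratio 0.5 -> 97.5 %
```

A smaller a650/a850 ratio (less red-light absorption) corresponds to higher saturation, which is the physical basis of the two-wavelength measurement.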

6.7.1 Limitations
- sensitivity to high levels of optical interference
- errors due to high concentrations of dysfunctional hemoglobins
- interference from physiologic dyes (such as methylene blue)
- signal quality dependence on motion artifacts (signal processing)


No-touch pulse measurement (Martin D. Fox, IEEE Transactions on Biomedical Engineering, Vol. 41, No. 11, 1994). The device can detect pulsation profiles of major arteries with information about pulse wave velocity. From the skin vibration velocity profile and the ECG at the radial artery we can measure the timing relations between mechanical waves and the electrical activation of the heart.
6.8 Blood flow velocity measurements

Blood flow velocity measurements can be based on the self-mixing effect in a fibre-coupled semiconductor laser. Ultrasound has limited spatial resolution; thermodilution, electromagnetic flowmetry and washout techniques require additional treatment. Doppler velocimetry at optical wavelengths offers:
- high spatial resolution
- no additional treatment
- access to small vessels (fibre diameter about 0.3 mm)
- a simple optical set-up
Shortcoming - invasive measurement.
6.8.3 Measuring principle

Figure 49. Measuring principle of blood flow velocity.
The laser light penetrates into the blood and is scattered by the moving blood cells:

fD = 2 v cos θ / λ

The Doppler-shifted light re-enters the laser cavity. Due to the interference, the intensity of the laser fluctuates, producing a self-mixing signal in the frequency domain. The first peak in the spectrum, at 9.6 kHz, corresponds to a velocity of about 4.9 mm/s. Noninvasive optical measurement of the circulation of blood in the skin: the laser Doppler method measures the flux of blood in the skin down to a depth of approximately 1 mm (0.75 mm for a He-Ne laser and 1.2 mm for a near-infrared laser). The device for such measurements consists of

- light source (different types of lasers)
- beam delivery system
- signal processor (noise correction processor for correcting the blood-flow signal for shot noise).
Small particle detection
This method is based on the interaction of the field scattered by the particles with the field inside the laser cavity and can be applied to measure a single micron-size particle or small numbers of particles (d_part ~ λ_laser):
- detection of air pollution
- quality assessment of radioaerosols (aerosol particles must be smaller than a micron)
- bacterial activity in water
The system operates as a homodyne photoreceiver, where information about the number of moving particles, their size and velocity can be derived by counting and processing the current pulses.
Schematic diagram of the device with a gas laser: radiation from the laser passes to a mirror; reflected by this mirror, it returns to the laser resonator and forms an external cavity l which is highly sensitive to moving particles inside it. The signal about the number of moving particles, their size and velocity forms in the active medium of the laser. There are two ways to obtain this signal:
- directly from the active medium (induced voltage)
- by a photodetector PD at the other end of the laser.
Schematic diagram of the device with a diode laser: the diode laser radiation is collimated with the help of a lens into a beam of parallel rays of the required diameter. There is an additional 50-Ohm resistor for signal separation. In another structure of the same device a photodiode (7) in the diode laser package was used. 1-laser, 2-mirror, 3-lens, 4-resistor 50 Ohms, 5-amplifier, 6-oscilloscope, 7-photodetector, 8-resistor 10 kOhms. The output signal level is critically dependent on the laser excitation current: an increase in the excitation current (intensity of the pump field) reduces the signal. The amplitude-frequency characteristics of the signal were recorded for different values of the excitation current I: 1-7.8 mA, 2-8.3 mA, 3-9.5 mA, 4-10 mA, 5-12 mA.
The external cavity l (in both cases) has extensively variable dimensions, adjustable to the configuration and purposes of the application (depending on the laser power). During testing of the device (optical power 2 mW), the length was varied from 1 cm up to 30 cm and the sensitivity-zone diameter from a few mm up to a few cm. The maximum counting rate is about 10 000 particles per second. The smallest detectable particle (maximum sensitivity) is a 1-micron-diameter particle. The measured amplitude of a signal is linearly dependent on particle size when the particles are much smaller than the cross-section of the activated zone.
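The Doppler relation fD = 2 v cos θ / λ from the measuring principle can be checked against the 9.6 kHz spectral peak quoted earlier; the wavelength and beam angle below are assumptions chosen only to illustrate the order of magnitude, since the text does not state them:

```python
import math

f_doppler = 9.6e3          # Hz, first spectral peak from the text
wavelength = 780e-9        # m, assumed diode-laser wavelength (not stated in the text)
theta = math.radians(40)   # assumed angle between beam and flow direction

# Invert f_D = 2 v cos(theta) / lambda for the velocity
v = f_doppler * wavelength / (2 * math.cos(theta))
print(round(v * 1000, 2))  # mm/s; close to the 4.9 mm/s quoted in the text
```

With these assumed parameters the inversion reproduces a velocity of about 4.9 mm/s, consistent with the value stated for the first spectral peak.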


6.9 Lasers in Cardiovascular Diagnostics

6.9.3 Method for optical self-mixing

Coherent photodetection has been known for years but has not been widely used until recently. Technically very complicated problems arise when mixing different optical waves, especially in the visible region of the optical band. Self-mixing is a simpler, more compact and more easily aligned coherent method:
- photoelectric conversion (conversion of light into an electrical signal) occurs in an external photodetector;
- the non-linear properties of the active medium of a laser are not used directly (a simplification of the heterodyne system, using the large amplification of the laser to reduce the technical requirements for the photodetector);
- the laser is used as a receiver, heterodyning the signals in the active medium of a semiconductor or gas laser - the combined field in the laser resonator at the difference frequency results in a pulsation of the laser excitation current, and the difference-frequency signal may be obtained from this current.

Setup: laser package containing photodiode (PD) and laser diode (LD), lens, target, resistor R, amplifiers A1 and A2, signal generator, oscilloscope.

Figure 50. Method for optical self-mixing.
1. Usually the diode laser incorporates a photodiode at the rear facet of the diode package for monitoring the laser power. This diode may be used as an internal detector.
2. Another structure of the same method. For signal separation and capturing, there is an additional resistor R (R = 50 Ω) with an intermittent potential.


Figure 51. Method for optical self-mixing. Upper trace - signal applied to achieve the periodic target movement; middle and lower traces - self-mixing signals from amplifiers A1, A2.
In the case of a moving target, a Doppler frequency shift results which is proportional to the velocity of the target. These are typical signals observed from self-mixing: the target is a reflective surface attached to a loudspeaker cone driven by a signal generator, providing phase variations of the external optical feedback.
- Self-mixing with a diode laser requires the consideration of only one optical axis, in addition to the use of fewer optical components.
- It is self-aligning as well as self-detecting, and therefore this method presents significant advantages in compactness, simplicity, robustness and ease of alignment in comparison with conventional methods.
- In a fiber-coupled diode laser (pigtail construction) the same fiber is used to guide the light toward the target, to collect the reflected part, and to guide it back into the laser. Such a construction is more attractive, particularly because it is quite easy to reflect the light back into the laser, and it makes the system easy to align.
Setup: reflective surface fixed to a loudspeaker cone at distance l; pigtail laser package QF 4142 containing PD and LD; LD current regulation; PD and LD outputs.

Figure 52. Pigtail Diode Laser:


Philips QF-4142, wavelength 1310 nm, output optical power 0.2 mW, monomode fiber of 1 m length with core diameter d = 9 μm, maximum distance lmax = 42 mm
Figure 53. Mixed-signal amplitude as a function of laser current. Laser QF-4142 with threshold current Ith = 24 mA. Vertical axis - LD amplifier output in mV; horizontal axis - laser current in mA. In both cases the self-mixed signal is inversely proportional to the pump current, and the maximum value of the signal corresponds to a pump current near threshold.

Figure 54. Measured dependence of a self-mixing interference on the distance between laser and target (first five maximums) and the spectrum of laser diode.


6.9.3.1 Sensitivity of the method

Figure 55. Sensitivity measurement. Geometrical spot of reflected radiation with diameter dp = 14 mm; spatial angle 2θ = 10° containing 90% of the radiation (P90% = 0.05 mW); reflecting surface; optical fibre with diameter df = 9 μm; maximum distance l = 42 mm when S/N = 1. The minimum registered optical power reflected back into the laser cavity, at which the self-mixing output S/N ratio was 1, was 19 pW.

6.9.4 Pulse profile and pulse wave velocity

The optical method used is self-mixing, which is able to detect pulsation profiles of blood vessels with potentially useful information including the velocity, spectrum and profile of the pulse wave. The delay of the pulse wave in different regions of the human body is measured relative to the ECG signal. Registered signals are stored and processed after digitisation. The measuring system consists of an optical part, an electronic amplifier and a specialised personal computer. The optical part for non-contact detection of the shape of the pulse wave is based on registration of the mechanical movement of blood vessel walls and consists of a diode laser and an optical fibre. The specialised PC contains cards for ECG measurement and for signal digitisation and pre-processing, and is based on the software LabVIEW 5.0.


The laser device provides possibilities for the estimation of:
- the functional status of the heart (detecting the amplitude, spectrum and profile of the pulse pressure wave and its propagation dynamics);
- changes of tonus of blood vessels caused by anatomical or functional reasons (detecting the speed of pulse wave propagation in different vessel segments);
- damage of peripheral blood vessels caused by endogenous or exogenous factors (detecting the time delay of pulse wave propagation in different parts of the human body with reference to the ECG signal).
The main features:
- fully non-invasive and patient-friendly method (light as optical non-ionising radiation);
- low cost (small-power laser diode as the source of radiation);
- digitalised output and flexibility (digital signal processing).
Measured pulse profile at the arm artery: the horizontal axis - time elapsed in seconds; the vertical axis - Doppler shift frequency. A National Instruments data acquisition board (DAQ) AT-M10-16E-10 was used to digitize the signals.

Figure 56. Measured pulse profile at the arm artery.
The shape of each pulsation has several peaks, including a major peak and secondary peaks immediately following it. The reason for this is the reflections of the blood pressure wave; however, the output signal of our system represents the absolute value of the skin vibrations, because we cannot distinguish between movement toward and away from the probing ray.



Figure 57. Frame of recorded pulse profile signal.
t1 - normal diameter of the vessel
t2 - maximum velocity of the vessel extension
t3 - maximum diameter of the vessel
t4 - maximum velocity of the vessel restriction
t5 - normal diameter of the vessel

Panel A shows the Doppler frequency (0-5000 Hz) versus time (0-9.9 s, sub-array size 256); panel B shows the skin vibration in mm. Processing algorithm: sliding 256-point processing of s(t), Hanning windowing, [FFT]², maximum detector giving ω(t) and v(t), and integration of v(t) dt from t1 to t2.
Figure 58. Processed pulse profile amplitude at the arm artery and processing algorithm.
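The processing chain above (sliding 256-point window, Hanning, |FFT|², maximum detector) can be sketched as follows; the sampling rate and step size in the usage example are assumed values:

```python
import numpy as np

def doppler_trace(signal, fs, win=256, step=64):
    """Sliding-window spectral estimate: Hanning window, |FFT|^2, peak detector.
    Returns (times_s, peak_freqs_hz) - the instantaneous Doppler frequency trace."""
    window = np.hanning(win)
    times, peaks = [], []
    for start in range(0, len(signal) - win, step):
        seg = signal[start:start + win] * window
        power = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(win, 1 / fs)
        times.append((start + win / 2) / fs)
        peaks.append(freqs[np.argmax(power[1:]) + 1])  # skip the DC bin
    return np.array(times), np.array(peaks)

# Synthetic check with an assumed 10-kHz sampling rate and a 1-kHz tone
fs = 1e4
t = np.arange(4096) / fs
times, peaks = doppler_trace(np.sin(2 * np.pi * 1000 * t), fs)
print(round(float(peaks[0]), 1))  # near 1000 Hz
```

Integrating the velocity trace over a pulsation interval, as in the figure's final step, then yields the skin displacement.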


6.9.5 Pulse wave velocity measurement


Setup: pigtail laser package CQF58 (LD) with current regulation, supply Vs and amplifier for the LD signal; PC with data acquisition card and data processing software; ECG; measuring points Mp1-Mp6 on the body.

Figure 59. Pulse wave velocity measurement

Figure 60. Recorded ECG, pulse profile and processed pulse profile signals.


Pulse location   Measuring point   Pulse delay (mean ± SD), s   Heart rate (mean ± SD), 1/min
Right arm        Mp4               0.228 ± 0.05                 79.6 ± 1.5
Left arm         Mp5               0.185 ± 0.07                 78.6 ± 0.9
Right arm        Mp1               0.215 ± 0.04                 82.8 ± 0.9
Left arm         Mp2               0.175 ± 0.06                 82.5 ± 1.0
Right leg        Mp6               0.358 ± 0.07                 82.0 ± 0.9
Left leg         Mp3               0.351 ± 0.05                 79.5 ± 1.2
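From the measured delays in the table, a rough pulse-wave-velocity estimate follows; the arterial path-length difference below is an assumed value for illustration, not a distance measured in the text:

```python
delay_arm_s = 0.215   # pulse delay at an arm measuring point (from the table)
delay_leg_s = 0.358   # pulse delay at a leg measuring point (from the table)
path_diff_m = 0.9     # assumed extra arterial path length, heart-to-leg minus heart-to-arm

# Pulse wave velocity over the extra path segment
pwv = path_diff_m / (delay_leg_s - delay_arm_s)
print(round(pwv, 2))  # m/s
```

The result, roughly 6 m/s, is in the physiologically plausible range for large-artery pulse wave velocity, which is why delay differences between body sites carry diagnostic information about vessel tonus.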

Processing algorithm: the laser-Doppler signal (sampled at 10 kHz) undergoes sliding 128-point processing, Hanning windowing, [FFT]² and maximum detection, followed by a pulse start-point finder; the ECG signal (335 Hz after data reduction) passes through a QRS-complex detector and a Q-peak finder; the pulse delay is the time between the Q-peak and the pulse start point.

Figure 61. Pulse delay measured from different locations of human body and processing algorithm.

6.9.6 Blood flow measurements

The liquid used is an aqueous suspension of polymer microspheres with a sphere diameter of 7.0 μm, polymer density 1.05 g/cc, refractive index 1.59, and a concentration of approx. 10^7 spheres/millilitre, produced by Duke Scientific Corporation.


Setup: pigtail laser package CQF58 (LD) with current regulation, supply Vs and LD output; an optical fiber used simultaneously for the emitted and reflected light; a PC with data acquisition card and data processing software; a rotating vessel containing the aqueous suspension of polymer microspheres.

Figure 62. Block diagram of the equipment for blood flow measurements.


Five measured spectra: amplitude in dBVrms (-50 to -80) versus frequency (0 to 24 975.6 Hz), recorded at five different rotation speeds.

Figure 63. Blood flow measurement signals.

Doppler frequency (0-25 kHz) at five measuring points (five different speeds): the Doppler frequency measured from the spectrum analyzer (visual spectrum-maximum estimation) compared with the Doppler frequency calculated from the radial velocity.

Figure 64. Differences between measured and calculated Doppler frequencies.

7. Clinical engineer: safety, standards and regulations

7.1 What Is a Clinical Engineer?

In recent years, a number of organizations, e.g., the American Heart Association [1986], the American Association of Medical Instrumentation [Goodman, 1989], the American College of Clinical Engineers [Bauld, 1991], and the Journal of Clinical Engineering [Pacela, 1991], have attempted to provide an appropriate definition for the term clinical engineer.

A clinical engineer is an engineer who has graduated from an accredited academic program in engineering, or who is licensed as a professional engineer or engineer-in-training, and who is engaged in the application of scientific and technological knowledge, developed through engineering education and subsequent professional experience, within the health care environment in support of clinical activities. Furthermore, the clinical environment is defined as that portion of the health care system in which patient care is delivered, and clinical activities include direct patient care, research, teaching, and public service activities intended to enhance patient care.
7.2 Evolution of Clinical Engineering

Engineers were first encouraged to enter the clinical scene during the late 1960s in response to concerns about patient safety as well as the rapid proliferation of clinical equipment, especially in academic medical centers. In the process, a new engineering discipline, clinical engineering, evolved to provide the technological support necessary to meet these new needs. During the 1970s, a major expansion of clinical engineering occurred, primarily due to the following events:

- The Veterans' Administration (VA), convinced that clinical engineers were vital to the overall operation of the VA hospital system, divided the country into biomedical engineering districts, with a chief biomedical engineer overseeing all engineering activities in the hospitals in that district.
- Throughout the United States, clinical engineering departments were established in most large medical centers and hospitals, and in some smaller clinical facilities with at least 300 beds.
- Clinical engineers were hired in increasing numbers to help these facilities use existing technology and incorporate new technology.

Once clinical engineers had entered the hospital environment, routine electrical safety inspections exposed them to all types of patient equipment that was not being maintained properly. It soon became obvious that electrical safety failures represented only a small part of the overall problem posed by the presence of medical equipment in the clinical environment. The equipment was neither totally understood nor properly maintained. Simple visual inspections often revealed broken knobs, frayed wires, and even evidence of liquid spills. Investigating further, it was found that many devices did not perform in accordance with manufacturers' specifications and were not maintained in accordance with manufacturers' recommendations. In short, electrical safety problems were only the tip of the iceberg. The entrance

of clinical engineers into the hospital environment changed these conditions for the better. By the mid-1970s, complete performance inspections before and after use became the norm, and sensible inspection procedures were developed [Newhouse et al., 1989]. In the process, clinical engineering departments became the logical support center for all medical technologies and became responsible for all the biomedical instruments and systems used in hospitals, the training of medical personnel in equipment use and safety, and the design, selection, and use of technology to deliver safe and effective health care. With increased involvement in many facets of hospital/clinic activities, clinical engineers now play a multifaceted role (Fig. 65). They must interface successfully with many clients, including clinical staff, hospital administrators, regulatory agencies, etc., to ensure that the medical equipment within the hospital is used safely and effectively. Today, hospitals that have established centralized clinical engineering departments to meet these responsibilities use clinical engineers to provide the hospital administration with an objective opinion of equipment function, purchase, application, overall system analysis, and preventive maintenance policies.

Figure 65. Diagram illustrating the range of interactions of a clinical engineer.

Some hospital administrators have learned that with the in-house availability of such talent and expertise, the hospital is in a far better position to make more effective use of its technological resources [Bronzino, 1986, 1992]. By providing health professionals with needed assurance of safety, reliability, and efficiency in using new and innovative equipment, clinical engineers can readily identify poor-quality and ineffective equipment, thereby resulting in faster, more appropriate utilization of new medical equipment. Typical pursuits of clinical engineers, therefore, include:

- Supervision of a hospital clinical engineering department that includes clinical engineers and biomedical equipment technicians (BMETs)
- Prepurchase evaluation and planning for new medical technology
- Design, modification, or repair of sophisticated medical instruments or systems
- Cost-effective management of a medical equipment calibration and repair service
- Supervision of the safety and performance testing of medical equipment performed by BMETs
- Inspection of all incoming equipment (i.e., both new and returning repairs)
- Establishment of performance benchmarks for all equipment
- Medical equipment inventory control
- Coordination of outside engineering and technical services performed by vendors
- Training of medical personnel in the safe and effective use of medical devices and systems
- Clinical applications engineering, such as custom modification of medical devices for clinical research, evaluation of new noninvasive monitoring systems, etc.
- Biomedical computer support
- Input to the design of clinical facilities where medical technology is used, e.g., operating rooms (ORs), intensive care units, etc.
- Development and implementation of documentation protocols required by external accreditation and licensing agencies

Clinical engineers thus provide extensive engineering services for the clinical staff and, in recent years, have been increasingly accepted as valuable team members by physicians, nurses, and other clinical professionals. Furthermore, the acceptance of clinical engineers in the hospital setting has led to different types of engineering-medicine interactions, which in turn have improved health care delivery.
7.3 Hospital Organization and the Role of Clinical Engineering

In the hospital, management organization has evolved into a diffuse authority structure that is commonly referred to as the triad model. The three primary components are the governing board (trustees), the hospital administration (CEO and administrative staff), and the medical staff organization [Bronzino and Hayes, 1988]. The roles of the governing board and the chief executive officer are briefly discussed below to provide some insight into their individual responsibilities and their interrelationship.
7.3.1 Governing Board (Trustees)

The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) summarizes the major duties of the governing board as adopting by-laws in accordance with its legal accountability and its responsibility to the patient. The governing body, therefore, requires both medical and paramedical departments to monitor and evaluate the quality of patient care, which is a critical success factor in hospitals today. To meet this goal, the governing board essentially is responsible for establishing the mission statement and defining the specific goals and objectives that the institution must satisfy. Therefore, the trustees are involved in the following functions:

- Establishing the policies of the institution
- Providing equipment and facilities to conduct patient care
- Ensuring that proper professional standards are defined and maintained (i.e., providing quality assurance)
- Coordinating professional interests with administrative, financial, and community needs
- Providing adequate financing by securing sufficient income and managing the control of expenditures
- Providing a safe environment
- Selecting qualified administrators, medical staff, and other professionals to manage the hospital

In practice, the trustees select a hospital chief administrator who develops a plan of action that is in concert with the overall goals of the institution.
7.3.2 Hospital Administration

The hospital administrator, the chief executive officer of the medical enterprise, has a function similar to that of the chief executive officer of any corporation. The administrator represents the governing board in carrying out the day-to-day operations to reflect the broad policy formulated by the trustees. The duties of the administrator are summarized as follows:

- Preparing a plan for accomplishing the institutional objectives, as approved by the board
- Selecting medical chiefs and department directors to set standards in their respective fields
- Submitting for board approval an annual budget reflecting both expenditures and income projections
- Maintaining all physical properties (plant and equipment) in safe operating condition
- Representing the hospital in its relationships with the community and health agencies
- Submitting to the board annual reports that describe the nature and volume of the services delivered during the past year, including appropriate financial data and any special reports that may be requested by the board

In addition to these administrative responsibilities, the chief administrator is charged with controlling cost, complying with a multitude of governmental regulations, and ensuring that the hospital conforms to professional norms, which include guidelines for the care and safety of patients.
7.4 Major Functions of a Clinical Engineering Department

It should be clear from the preceding job description that clinical engineers are first and foremost engineering professionals. However, as a result of the wide-ranging scope of interrelationships within the medical setting, the duties and responsibilities of clinical engineering directors are extremely diversified. Yet a common thread is provided by the very nature of the technology they manage. Directors of clinical engineering departments are usually involved in the following core functions:

7.4.1 Technology Management

Developing, implementing, and directing equipment management programs. Specific tasks include accepting and installing new equipment, establishing preventive maintenance and repair programs, and managing the inventory of medical instrumentation. Issues such as cost-effective use and quality assurance are integral parts of any technology management program. The director advises the administrator of the budgetary, personnel, space, and test equipment requirements necessary to support this equipment management program.

7.4.2 Risk Management


Evaluating and taking appropriate action on incidents attributed to equipment malfunctions or misuse. For example, the clinical engineering director is responsible for summarizing the technological significance of each incident and documenting the findings of the investigation. He or she then submits a report to the appropriate hospital authority and, according to the Safe Medical Devices Act of 1990, to the device manufacturer, the Food and Drug Administration (FDA), or both.
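The incident-reporting obligation described above can be sketched as a small data structure. The routing rules below are a simplified, illustrative reading of the Safe Medical Devices Act of 1990, not legal or regulatory guidance:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    device: str
    event: str      # "death", "serious injury", or "malfunction"
    findings: str   # technological significance, investigation summary

def report_recipients(report):
    """Simplified routing, loosely after the Safe Medical Devices Act of
    1990: deaths are reported to the FDA and the manufacturer; serious
    injuries to the manufacturer. Illustrative only."""
    recipients = ["hospital authority"]      # internal report in every case
    if report.event == "death":
        recipients += ["manufacturer", "FDA"]
    elif report.event == "serious injury":
        recipients += ["manufacturer"]
    return recipients
```

In practice the clinical engineering director's written findings accompany each report, whatever its destination.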
7.4.3 Technology Assessment

Evaluating and selecting new equipment. The director must be proactive in the evaluation of new requests for capital equipment expenditures, providing hospital administrators and clinical staff with an in-depth appraisal of the benefits/advantages of candidate equipment. Furthermore, the process of technology assessment for all equipment used in the hospital should be an ongoing activity.

7.4.4 Facilities Design and Project Management

Assisting in the design of new or renovated clinical facilities that house specific medical technologies. This includes operating rooms, imaging facilities, and radiology treatment centers.

7.4.5 Training

Establishing and delivering instructional modules for the clinical engineering staff, as well as for the clinical staff, on the operation of medical equipment.

In the future, it is anticipated that clinical engineering departments will provide assistance in the application and management of many other technologies that support patient care, including computer support, telecommunications, and facilities operations.

7.5 The Health Care Delivery System

Societal demands on the health care delivery system revolve around cost, technology, and expectations. To respond effectively, the delivery system must identify its goals, select and define its priorities, and then wisely allocate its limited resources. For most organizations, this means that they must acquire only appropriate technologies and manage what they already have more effectively. To improve performance and reduce costs, the delivery system must recognize and respond to the key dynamics in which it operates, must shape and mold its planning efforts around several existing health care trends and directions, and must respond proactively and positively to the pressures of its environment. These issues, and the technology manager's response to them, are outlined here:

1. technology's positive impact on care quality and effectiveness,
2. an unacceptable rise in national spending for health care services,
3. a changing mix of how Americans are insured for health care,
4. increases in health insurance premiums for which appropriate technology application is a strong limiting factor,
5. a changing mix of health care services and settings in which care is delivered, and
6. growing pressures related to technology for hospital capital spending and budgets.


7.5.1 Major Health Care Trends and Directions

The major trends and directions in health care include:

1. changing location and design of treatment areas,
2. evolving benefits, coverages, and choices,
3. extreme pressures to manage costs,
4. treating of more acutely ill older patients and the prematurely born,
5. changing job structures and demand for skilled labor,
6. the need to maintain a strong cash flow to support construction, equipment, and information system developments,
7. increased competition on all sides,
8. requirement for information systems that effectively integrate clinical and business issues,
9. changing reimbursement policies that reduce new purchases and lead to the expectation for extended equipment life cycles,
10. internal technology planning and management programs to guide decision making,
11. technology planning teams to coordinate adoption of new and replacement technologies, as well as to suggest delivery system changes, and
12. equipment maintenance costs that are emerging as a significant expense item under great administrative scrutiny.

7.6 Technology Assessment

As medical technology continues to evolve, so does its impact on patient outcome, hospital operations, and financial resources. The ability to manage this evolution and its subsequent implications has become a major challenge for all health care organizations. Successful management of technology will ensure a good match between needs and capabilities and between staff and technology. To be successful, an ongoing technology assessment process must be an integral part of an ongoing technology planning and management program at the hospital, addressing the needs of the patient, the user, and the support team. This facilitates better equipment planning and utilization of the hospital's resources. The manager who is knowledgeable about his or her organization's culture, equipment users' needs, the environment within which equipment will be applied, equipment engineering, and emerging technological capabilities will be successful in proficiently implementing and managing technological changes.

It is in the technology assessment process that the clinical engineering/technology manager professional needs to wear two hats: that of the manager and that of the engineer. This is a unique position, requiring expertise and detailed preparation, that allows one to be a key leader and contributor to the decision-making process of the medical technology advisory committee (MTAC). The MTAC uses an ad hoc team approach to conduct technology assessment of selected services and technologies throughout the year. The ad hoc teams may incorporate representatives of equipment users, equipment service providers, physicians, purchasing agents, reimbursement managers, representatives of administration, and other members from the institution as applicable.

7.6.1 Technology Assessment Process

More and more hospitals are faced with the difficult phenomenon of a capital equipment request list that is much larger than the capital budget allocation. The most difficult decision, then, is the one that matches clinical needs with financial capability. In doing so, the following questions are often raised: How do we avoid costly technology mistakes? How do we wisely target capital dollars for technology? How do we avoid medical staff conflicts as they relate to technology? How do we control equipment-related risks? And how do we maximize the useful life of the equipment or systems while minimizing the cost of ownership? A hospital's clinical engineering department can assist in providing the right answers to these questions.


Technology assessment is a component of technology planning that begins with the analysis of the hospital's existing technology base. It is easy to perceive, then, that technology assessment, rather than an equipment comparison, is a major new function for a clinical engineering department. It is important that clinical engineers be well prepared for the challenge. They must have a full understanding of the mission of their particular hospitals, a familiarity with the health care delivery system, and the cooperation of hospital administrators and the medical staff. To aid in the technology assessment process, clinical engineers need to utilize the following tools: (1) access to national database services, directories, and libraries, (2) visits to scientific and clinical exhibits, (3) a network with key industry contacts, and (4) a relationship with peers throughout the country.

The need for clinical engineering involvement in the technology assessment process becomes evident when recently purchased equipment or its functions are underutilized, users have ongoing problems with equipment, equipment maintenance costs become excessive, the hospital is unable to comply with standards or guidelines (i.e., JCAHO requirements) for equipment management, a high percentage of equipment is awaiting repair, or training for equipment operators is inefficient due to a shortage of allied health professionals.
A deeper look at the symptoms behind these problems would likely reveal a lack of a central clearinghouse to collect, index, and monitor all technology-related information for future planning purposes, the absence of procedures for identifying emerging technologies for potential acquisition, the lack of a systematic plan for conducting technology assessment (resulting in an inability to maximize the benefits from deployment of available technology), the inability to benefit from the organization's own previous experience with a particular type of technology, the random replacement of medical technologies rather than a systematic plan based on a set of well-developed criteria, and/or the lack of integration of technology acquisition into the strategic and capital planning of the hospital.

To address these issues, efforts to develop a technology microassessment process were initiated at one leading private hospital with the following objectives: (1) accumulate information on medical equipment, (2) facilitate systematic planning, (3) create an administrative structure supporting the assessment process and its methodology, (4) monitor the replacement of outdated technology, and (5) improve the capital budget process by focusing on long-term needs relative to the acquisition of medical equipment. The process, in general, and the collection of up-to-date pertinent information, in particular, require the expenditure of certain resources and the active participation of designated hospital staff in networks providing technology assessment information. For example, corporate membership in organizations and societies that provide such information needs to be considered, as well as subscriptions to certain computerized databases and printed sources. At the example hospital, an MTAC was formed to conduct technology assessment. It was chaired by the director of clinical engineering.
Other managers from equipment user departments usually serve as the MTAC's designated technical coordinators for specific task forces. Once the committee accepted a request from an individual user, it identified other users that might have an interest in that equipment.

7.7 Risk Management

Inherent in the definition of risk management is the implication that the hospital environment cannot be made risk-free. In fact, the nature of medical equipment, to invasively or noninvasively perform diagnostic, therapeutic, corrective, or monitoring intervention on


behalf of the patient, implies that risk is present. Therefore, a standard of acceptable risk must be established that defines manageable risk in a real-time economic environment. Unfortunately, no preexistent, quantitative standard exists, in terms of, for instance, mean time before failure (MTBF), number of repairs or repair redos per equipment item, or cost of maintenance, that provides a universal yardstick for risk management of medical equipment. Sufficient clinical management of risk must be in place that can utilize safeguards, preventive maintenance, and failure analysis information to minimize the occurrence of injury or death to patients or employees, or of property damage. Therefore, a process must be put in place that will permit analysis of information and modification of the preceding factors to continuously move the medical equipment program to a more stable level of manageable risk.

Risk factors that require management can be illustrated by the example of the double-edged sword concept of technology (see Fig. 66). The front edge of the sword represents the cutting edge of technology and its beneficial characteristics: increased quality, greater availability of technology, timeliness of test results and treatment, and so on. The back edge of the sword represents those liabilities which must be addressed to effectively manage risk: the hidden costs discussed in the next paragraph, our dependence on technology, incompatibility of equipment, and so on. For example, the purchase and installation of a major medical equipment item may represent only 20% of the lifetime cost of the equipment. If the operational budget of a nursing floor does not include the other 80% of the equipment costs, the budget constraints may require cutbacks where they appear to minimally affect direct patient care. Preventive maintenance, software upgrades that address glitches, or overhaul requirements may be seen as unaffordable luxuries.
Gradual equipment deterioration without maintenance may bring the safety level below an acceptable level of manageable risk. Since economic factors as well as safety must be taken into account, a balanced approach to risk management that incorporates all aspects of the medical equipment life cycle must be adopted.
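The 20%/80% split quoted above can be made concrete with a quick calculation (the purchase price is an assumed illustrative figure, not from the text):

```python
# The text puts purchase and installation at ~20% of lifetime cost.
purchase_price = 200_000.0   # USD, assumed illustrative capital cost
purchase_share = 0.20

lifetime_cost = purchase_price / purchase_share
hidden_cost = lifetime_cost - purchase_price   # maintenance, upgrades, ...
print(round(lifetime_cost), round(hidden_cost))
```

A 200 000-dollar purchase thus implies roughly 800 000 dollars of support costs over the equipment's life; a budget that covers only the purchase price hides 80% of the true commitment.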

Figure 66. Double-edged sword concept of risk management.


8. Home care and rehabilitation


8.1 Introduction

Rehabilitation engineering requires a multidisciplinary effort. To put rehabilitation engineering into its proper context, we need to review some of the other disciplines with which rehabilitation engineers must be familiar.

Rehabilitation is the (re)integration of an individual with a disability into society. This can be done either by enhancing existing capabilities or by providing alternative means to perform various functions or to substitute for specific sensations. Rehabilitation engineering is the application of science and technology to ameliorate the handicaps of individuals with disabilities [Reswick]. In actual practice, many individuals who say that they practice rehabilitation engineering are not engineers by training. While this leads to controversies from practitioners with traditional engineering degrees, it also has the de facto benefit of greatly widening the scope of what is encompassed by the term rehabilitation engineering.

Rehabilitation medicine is a clinical practice that focuses on the physical aspects of functional recovery, but that also considers medical, neurological, and psychological factors. Physical therapy, occupational therapy, and rehabilitation counseling are professions in their own right. On the sensory-motor side, other medical and therapeutic specialties practice rehabilitation in vision, audition, and speech.

Rehabilitation technology (or assistive technology), narrowly defined, is the selection, design, or manufacture of augmentative or assistive devices that are appropriate for the individual with a disability. Such devices are selected based on the specific disability, the function to be augmented or restored, the user's wishes, the clinician's preferences, cost, and the environment in which the device will be used.
Rehabilitation science is the development of a body of knowledge, gleaned from rigorous basic and clinical research, that describes how a disability alters specific physiological functions or anatomical structures, and that details the underlying principles by which residual function or capacity can be measured and used to restore function of individuals with disabilities.
8.2 Rehabilitation Concepts

Effective rehabilitation engineers must be well versed in all of the areas described above, since they generally work in a team setting, in collaboration with physical and occupational therapists, orthopedic surgeons, physical medicine specialists, and/or neurologists.

Some rehabilitation engineers are interested in the activities that we perform in the course of a normal day, summarized as activities of daily living (ADL). These include eating, toileting, combing hair, brushing teeth, reading, etc. Other engineers focus on mobility and the limitations to mobility. Mobility can be personal (e.g., within a home or office) or public (automobile, public transportation, accessibility questions in buildings). Mobility also includes the ability to move functionally through the environment. Thus, the question of mobility is not limited to that of getting from place to place, but also includes such questions as whether one can reach an object in a particular setting or whether a paralyzed urinary bladder can be made functional again. Barriers that limit mobility are also studied. For instance, an ill-fitted wheelchair cushion or support system will most assuredly limit mobility

by reducing the time that an individual can spend in a wheelchair before he or she must vacate it to avoid serious and difficult-to-heal pressure sores. Other groups of rehabilitation engineers deal with sensory disabilities, such as sight or hearing, or with communication disorders, both on the production side (e.g., the non-vocal) and on the comprehension side. For any given client, a rehabilitation engineer might have all of these concerns to consider (i.e., ADLs, mobility, sensory and communication dysfunctions).

A key concept in physical or sensory rehabilitation is that of residual function or residual capacity. Such a concept implies that the function or sense can be quantified, that the performance range of that function or sense is known in a non-impaired population, and that the use of residual capacity by a disabled individual should be encouraged. These measures of human performance can be made subjectively by clinicians or objectively by some rather clever computerized test devices. A rehabilitation engineer asks three key questions: Can a diminished function or sense be successfully augmented? Is there a substitute way to return the function or to restore a sense? And is the solution appropriate and cost-effective?

These questions give rise to two important rehabilitation concepts: orthotics and prosthetics. An orthosis is an appliance that aids an existing function. A prosthesis provides a substitute. An artificial limb is a prosthesis, as is a wheelchair. An ankle brace is an orthosis. So are eyeglasses. In fact, eyeglasses might well be the consummate rehabilitation device. They are inexpensive, have little social stigma, and are almost completely unobtrusive to the user. They have let many millions of individuals with correctable vision problems lead productive lives. But in essence, a pair of eyeglasses is an optical device, governed by traditional equations of physical optics.
Eyeglasses can be made out of simple glass (from a raw material as abundant as the sands of the earth!) or complex plastics, such as those that are ultraviolet sensitive. They can be ground by hand or by sophisticated computer-controlled optical grinders. Thus, crude technology can restore functional vision. Increasing the technical content of the eyeglasses (either by material or by manufacturing method) will in most cases not increase the amount of function restored, but it might make the glasses cheaper, lighter, and more likely to be used.
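As an illustration of the "traditional equations of physical optics" that govern eyeglasses, the thin-lens lensmaker equation relates the refractive index and surface curvatures to corrective power; the values below are illustrative:

```python
def lens_power_diopters(n, r1_m, r2_m):
    """Thin-lens lensmaker equation: P = 1/f = (n - 1) * (1/R1 - 1/R2).
    Radii in metres (signed, by the usual convention); power in diopters (1/m)."""
    return (n - 1.0) * (1.0 / r1_m - 1.0 / r2_m)

# Illustrative: a biconvex crown-glass lens (n ~ 1.52)
P = lens_power_diopters(1.52, 0.5, -0.5)   # R1 = +0.5 m, R2 = -0.5 m
print(round(P, 2))   # 2.08 diopters, i.e. a focal length of ~0.48 m
```

The same equation explains why material and manufacturing refinements change weight and cost far more than they change the restored function: the power depends only on index and curvature.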
8.3 Engineering Concepts in Sensory Rehabilitation

Of the five traditional senses, vision and hearing most define the interactions that permit us to be human. These two senses are the main input channels through which data with high information content can flow. We read; we listen to speech or music; we view art. A loss of one or the other of these senses (or both) can have a devastating impact on the individual affected. Rehabilitation engineers attempt to restore the functions of these senses either through augmentation or via sensory substitution systems. Eyeglasses and hearing aids are examples of augmentative devices that can be used if some residual capacity remains. A major area of rehabilitation engineering research deals with sensory substitution systems.

The visual system has the capability to detect a single photon of light, yet also has a dynamic range that can respond to intensities many orders of magnitude greater. It can work with high-contrast items and with those of almost no contrast, and across the visible spectrum of colors. Millions of parallel data channels form the optic nerve that comes from an eye; each channel transmits an asynchronous and quasi-random (in time) stream of binary pulses. While the temporal coding on any one of these channels is not fast (on the order of 200 bits per second or less), the capacity of the human brain to parallel process the entire image is faster than any supercomputer yet built. If sight is lost, how can it be replaced? A simple pair of eyeglasses will not work, since either the sensor (the retina), the communication channel (the optic nerve and all of its relays to the brain), or one or more essential central processors (the occipital part


of the cerebral cortex for initial processing; the parietal and other cortical areas for information extraction) has been damaged. For replacement within the system, one must determine where the visual system has failed and whether a stage of the system can be artificially bypassed. If one uses another sensory modality (e.g., touch or hearing) as an alternate input channel, one must determine whether there is sufficient bandwidth in that channel and whether the higher-order processing hierarchy is plastic enough to process information coming via a different route.

While the above discussion might seem just philosophical, it is more than that. We normally read printed text with our eyes. We recognize words from their (visual) letter combinations. We comprehend what we read via a mysterious processing in the parietal and temporal parts of the cerebral cortex. Could we perhaps read and comprehend this text or other forms of writing through our fingertips with an appropriate interface? The answer, surprisingly, is yes! And the adaptation actually goes back to one of the earliest applications of coding theory: the development of Braille. Braille condenses all text characters to a raised matrix of 2 by 3 dots (2⁶ = 64 combinations), with certain combinations reserved as indicators for the next character (such as a number indicator) or for special contractions. Trained readers of Braille can read over 250 words per minute of grade 2 Braille (as fast as most sighted readers can read printed text!). Thus, the Braille code is in essence a rehabilitation engineering concept in which an alternate sensory channel is used as a substitute and a recoding scheme has been employed.

Rehabilitation engineers and their colleagues have designed other ways to read text. To replace the retina as a sensor element, a modern high-resolution, high-sensitivity, fast imaging sensor (CCD, etc.) is employed to capture a visual image of the text.
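The Braille recoding just described can be sketched as a small lookup table. The dot numbering follows the standard Braille cell convention; the letter map shown is only a partial, illustrative fragment of the full code:

```python
# Standard Braille cell: six dot positions numbered 1..6
# (dots 1-2-3 down the left column, 4-5-6 down the right).
BRAILLE = {  # partial, illustrative fragment of the letter map
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def encode(text):
    """Recode text into Braille cells (sets of raised dot positions)."""
    return [frozenset(BRAILLE[ch]) for ch in text.lower()]

# Six binary dots give 2**6 = 64 distinct cells (63 with at least one dot)
assert 2 ** 6 == 64
cells = encode("bad")
```

The recoding is a pure substitution: the information content of the text is unchanged, only the output channel (touch instead of vision) differs.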
One method, used by various page-scanning devices, converts the scanned image to text by using optical character recognition schemes, and then outputs the text as speech via text-to-speech algorithms. This machine essentially recites the text, much as a sighted helper might do when reading aloud to a blind individual. The user of the device is thus freed of the absolute need for a helper. Such independence is often the goal of rehabilitation.

Perhaps the most interesting method presents an image of the scanned data directly to the visual cortex or retina via an array of implantable electrodes that are used to electrically activate nearby cortical or retinal structures. The visual cortex and retina are laid out in a topographic fashion such that there is an orderly mapping of the signal from different parts of the visual field to the retina, and from the retina to corresponding parts of the occipital cortex. The goal of stimulation is to mimic the neural activity that would have been evoked had the signal come through normal channels. And such stimulation does produce the sensation of light. Since the image stays within the visual system, the rehabilitation solution is said to be modality-specific. However, substantial problems dealing with biocompatibility and with image processing and reduction remain in the design of the electrode arrays and processors that serve to interface the electronics and neurological tissue.

Deafness is another manifestation of a loss of a communication channel, this time for the sense of hearing. Totally deaf individuals use vision as a substitute input channel when communicating via sign language (also a substitute code), and can sign at information rates that match or exceed those of verbal communication. Hearing aids are now commercially available that can adaptively filter out background noise (a predictable signal) while amplifying speech (unpredictable) using autoregressive, moving average (ARMA) signal processing.
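The text describes ARMA processing in commercial aids; as an illustrative stand-in, the sketch below uses a simpler adaptive FIR predictor trained with the LMS rule (the tap count, step size, and delay are arbitrary choices, not values from any real device). The principle is the one stated above: a predictable signal can be forecast from its own past and subtracted, so the unpredictable speech component survives in the prediction error.

```python
import math

def lms_line_enhancer(x, taps=16, mu=0.01, delay=1):
    """Adaptive line enhancer: predict the predictable (e.g., periodic noise)
    part of x from delayed past samples via the LMS rule. The prediction
    error retains the unpredictable part. Returns (predicted, error) lists."""
    w = [0.0] * taps
    pred, err = [], []
    for n in range(len(x)):
        # regressor: delayed past samples x[n-delay], x[n-delay-1], ...
        u = [x[n - delay - k] if n - delay - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))            # predicted sample
        e = x[n] - y                                        # prediction error
        w = [wi + 2 * mu * e * ui for wi, ui in zip(w, u)]  # LMS weight update
        pred.append(y)
        err.append(e)
    return pred, err

# A pure sinusoid is fully predictable, so the error should shrink over time.
tone = [math.sin(2 * math.pi * 0.05 * n) for n in range(2000)]
_, e = lms_line_enhancer(tone)
print(sum(abs(v) for v in e[-100:]) < sum(abs(v) for v in e[:100]))  # -> True
```

In a noise-cancelling aid, the same filter run on speech plus periodic noise would track only the noise, since speech is too irregular to predict from its own past.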
With the recent advent of powerful digital signal processing chips, true digital hearing aids are now available. Previous analog aids, or digitally programmable analog aids, provided a set of tunable filters and amplifiers to cover the low, mid and high frequency ranges of the hearing spectrum. But the digital aids can be specifically and easily tailored (i.e., programmed) to compensate for the specific losses of each individual client across the frequency continuum of hearing, and still provide automatic gain control and one or more user-selectable settings that have been adjusted to perform optimally in differing noise environments.

An exciting development is occurring outside the field of rehabilitation that will have a profound impact on the ability of the deaf to comprehend speech. Electronics companies are now beginning to market universal translation aids for travellers, where a phrase spoken in one language is captured, parsed, translated, and restated (either spoken or displayed) in another language. The deaf would simply require that the visual display be in the language that they use for writing.

Deafness is often brought on (or occurs congenitally) by damage to the cochlea. The cochlea normally transduces variations in sound pressure intensity at a given frequency into patterns of neural discharge. This neural code is then carried by the auditory (eighth cranial) nerve to the brainstem, where it is preprocessed and relayed to the auditory cortex for initial processing and on to the parietal and other cortical areas for information extraction. As in the visual system, the cochlea, auditory nerve, auditory cortex and all relays in between maintain a topological map, this time based on tone frequency (tonotopic). If deafness is solely due to cochlear damage (as is often the case) and if the auditory nerve is still intact, a cochlear implant can often be substituted for the regular transducer array (the cochlea) while still sending the signal through the normal auditory channel (to maintain modality-specificity).

At first glance, the design of a cochlear prosthesis to restore hearing appears daunting. The hearing range of a healthy young individual is 20 to 16,000 Hz. The transducing structure, the cochlea, has 3500 inner and 12,000 outer hair cells, each best activated by a specific frequency that causes a localized mechanical resonance in the basilar membrane of the cochlea.
Deflection of a hair cell causes the cell to fire an all-or-none (i.e., pulsatile) neuronal discharge, whose rate of repetition depends to a first approximation on the amplitude of the stimulus. The outputs of these hair cells have an orderly convergence on the 30,000 to 40,000 fibers that make up the auditory portion of the eighth cranial nerve. These afferent fibers in turn go to brainstem neurons that process and relay the signals on to higher brain centers [Klinke, 1983]. For many causes of deafness, the hair cells are destroyed, but the eighth nerve remains intact. Thus, if one could elicit activity in a specific output fiber by means other than hair cell motion, perhaps some sense of hearing could be restored. The geometry of the cochlea helps in this regard, as different portions of the nerve are closer to different parts of the cochlea.

Electrical stimulation is now used in the cochlear implant to bypass hair cell transduction mechanisms [Loeb, 1985; Clark et al., 1990]. These sophisticated devices have required that complex signal processing, electronic and packaging problems be solved. One current cochlear implant has 22 stimulus sites along the scala tympani of the cochlea. Those sites provide excitation to the peripheral processes of the cells of the eighth cranial nerve, which are splayed out along the length of the scala. The electrode assembly itself has 22 ring electrodes spaced along its length and some additional guard rings between the active electrodes and the receiver to aid in securing the very flexible electrode assembly after it is snaked into the cochlea's very small (a few mm) round window (a surgeon related to me that positioning the electrode was akin to pushing a piece of cooked spaghetti through a small hole at the end of a long tunnel). The electrode is attached to a receiver that is inlaid into a slot milled out of the temporal bone.
The receiver contains circuitry that can select any electrode ring to be a source and any other electrode to be a sink for the stimulating current, and that can rapidly sequence between various pairs of electrodes. The receiver is powered and controlled by a radiofrequency link with an external transmitter, whose alignment is maintained by means of a permanent magnet embedded in the receiver. A digital signal processor stores information about a specific user and his or her optimal electrode locations for specific frequency bands. The object is to determine what pair of electrodes best produces the subjective perception of a certain pitch in the implanted individual, and then to associate a particular filter with that pair via the controller.

An enormous amount of compression occurs in taking the frequency range necessary for speech comprehension and reducing it to a few discrete channels. At present, the optimum compression algorithm is unknown, and much fundamental research is being carried out in speech processing, compression and recognition. But what is amazing is that a number of totally deaf individuals can relearn to comprehend speech exceptionally well without speech-reading through the use of these implants. Other individuals find that the implant aids in speech-reading. For some, only an awareness of environmental sounds is apparent; and for another group, the implant appears to have had little effect. But if you could (as I have been able to) finally converse in unaided speech with an individual who had been rendered totally blind and deaf by a traumatic brain injury, you begin to appreciate the power of rehabilitation engineering.
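The tonotopic band-to-electrode assignment described above can be sketched with the Greenwood frequency-place function, a standard model of the cochlear map that is not given in the source (the constants below are the commonly quoted human values, and the 200-8000 Hz speech band and uniform electrode spacing are assumptions for illustration; as the text notes, real devices are fitted per patient).

```python
import math

A, a, k = 165.4, 2.1, 0.88          # commonly quoted human Greenwood constants

def place_from_freq(f_hz):
    """Relative distance from the cochlear apex (0 = apex, 1 = base)
    at which a tone of f_hz resonates, per the Greenwood function
    f = A * (10**(a*x) - k), inverted for x."""
    return math.log10(f_hz / A + k) / a

def band_to_electrode(f_hz, n_electrodes=22, f_lo=200.0, f_hi=8000.0):
    """Map a band centre frequency to one of n_electrodes, assuming the
    array uniformly spans the places corresponding to f_lo..f_hi."""
    x_lo, x_hi = place_from_freq(f_lo), place_from_freq(f_hi)
    frac = (place_from_freq(f_hz) - x_lo) / (x_hi - x_lo)
    return max(0, min(n_electrodes - 1, round(frac * (n_electrodes - 1))))

# Low frequencies land near the apical end, high frequencies near the base:
for f in (250, 1000, 4000, 8000):
    print(f, "Hz -> electrode", band_to_electrode(f))
```

Because the place map is roughly logarithmic in frequency, equal electrode spacing corresponds to roughly equal ratios of band-edge frequencies, which is why implant filter banks use log-spaced bands.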
8.4 Engineering Concepts in Motor Rehabilitation

Limitations in mobility can severely restrict the quality of life of an individual so affected. A wheelchair is a prime example of a prosthesis that can restore personal mobility to those who cannot walk. Given the proper environment (fairly level floors, roads, etc.), modern wheelchairs can be highly efficient. In fact, the fastest times in one of man's greatest tests of endurance, the Boston Marathon, are achieved by the wheelchair racers. Although they do gain the advantage of being able to roll, they still must climb the same hills, and do so with only one-fifth of the muscle power available to an able-bodied marathoner. While a wheelchair user could certainly go down a set of steps (not recommended), climbing steps in a normal manual or electric wheelchair is a virtual impossibility. Ramps or lifts are engineered to provide accessibility in these cases, or special climbing wheelchairs can be purchased. Wheelchairs also do not work well on surfaces with high rolling resistance or viscous coefficients (e.g., mud, rough terrain, etc.), so alternate mobility aids must be found if access to these areas is to be provided to the physically disabled. Hand-controlled cars, vans, tractors and even airplanes are now driven by wheelchair users. The design of appropriate control modifications falls to the rehabilitation engineer.

Loss of a limb can greatly impair functional activity. The engineering aspects of artificial limb design increase in complexity as the amount of residual limb decreases, especially if one or more joints are lost. As an example, a person with a mid-calf amputation could use a simple wooden stump to extend the leg, and could ambulate reasonably well. But such a leg is not cosmetically appealing and completely ignores any substitution for ankle function. Immediately following World War II, the United States government began the first concerted effort to foster better engineering design for artificial limbs.
Dynamically lockable knee joints were designed for artificial limbs for above-knee amputees. In the ensuing years, energy-storing artificial ankles have been designed, some with prosthetic feet so realistic that beach thongs could be worn with them! Artificial hands, wrists and elbows were designed for upper limb amputees. Careful design of the actuating cable system also provided for a sense of hand grip force, so that the user had some feedback and did not need to rely on vision alone for guidance.

Perhaps the most transparent (to the user) artificial arms are the ones that use electrical activity generated by the muscles remaining in the stump to control the actions of the elbow, wrist and hand [Stein et al., 1988]. This electrical activity is known as myoelectricity, and is produced as the muscle contraction spreads through the muscle. Note that these muscles, if intact, would have controlled at least one of these joints (e.g., the biceps and triceps for the elbow). Thus, a high level of modality-specificity is maintained since the functional element is substituted only at the last stage. All of the batteries, sensor electrodes, amplifiers, motor actuators and controllers (generally analog) reside entirely within these myoelectric arms. An individual trained in the use of a myoelectric arm can perform some impressive tasks with it. Current engineering research efforts involve the control of simultaneous multi-joint movements (rather than the single-joint movement now available) and the provision of sensory feedback from the end effector of the artificial arm to the skin of the stump via electrical means.
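The single-joint control loop of such a myoelectric limb can be sketched in a few lines: rectify and low-pass the raw muscle signal to estimate contraction intensity, then threshold that envelope to drive the terminal device. The filter constant and thresholds below are arbitrary illustrative values, not taken from any real prosthesis.

```python
def emg_envelope(samples, alpha=0.05):
    """Rectify the raw myoelectric signal and smooth it with a first-order
    low-pass filter to estimate contraction intensity."""
    env, y = [], 0.0
    for s in samples:
        y += alpha * (abs(s) - y)       # exponential smoothing of |EMG|
        env.append(y)
    return env

def grip_command(envelope, on=0.5, off=0.3):
    """Threshold the envelope with hysteresis: above `on` the hand closes,
    below `off` it opens; in between, the last command is held, so the
    hand does not chatter near a single threshold."""
    state, out = "open", []
    for e in envelope:
        if state == "open" and e > on:
            state = "close"
        elif state == "close" and e < off:
            state = "open"
        out.append(state)
    return out

# A burst of simulated muscle activity between two quiet periods:
raw = [0.0] * 50 + [1.0, -1.0] * 100 + [0.0] * 100
cmds = grip_command(emg_envelope(raw))
print(cmds[40], cmds[150], cmds[-1])   # -> open close open
```

Proportional myoelectric controllers go one step further, mapping the envelope amplitude continuously onto grip force or joint velocity rather than onto a binary open/close command.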
8.5 Engineering Concepts in Communications Disorders

Speech is a uniquely human means of interpersonal communication. Problems that affect speech can occur at the initial transducer (the larynx) or at other areas of the vocal tract. They can be of neurological (due to cortical, brainstem or peripheral nerve damage), structural, and/or cognitive origin. A person might only be able to make a halting attempt at talking, or might not have sufficient control of other motor skills to type or write. If the larynx is the damaged element, an artificial larynx can substitute for it by supplying a vibrating column of air that can be modulated by the other elements in the vocal tract. If other motor skills are intact, typing can be used to generate text, which in turn can be spoken via the text-to-speech devices described above. And the rate of typing (either whole words or via coding) might be fast enough so that reasonable speech rates could be achieved.

The rehabilitation engineer often becomes involved in the design or specification of augmentative communication aids for individuals who do not have good muscle control, either for speech or for limb movement. A whole industry has developed around the design of symbol or letter boards, where the user can point out (often painstakingly) letters, words or concepts. Some of these boards now have speech output. Linguistics and information theory have been combined in the invention of acceleration techniques intended to speed up the communication process. These include alternative language representation systems based on semantic (iconic), alphanumeric, or other codes; and prediction systems, which provide choices based on previously selected letters or words. A general review of these aids can be found in Chapter 144, while Goodenough-Trepagnier [1994] edited a good publication dealing with human factors and cognitive requirements.

Some individuals can produce speech, but it is dysarthric and very hard to understand. Yet the utterance does contain information. Can this limited information be used to figure out what the individual wanted to say, and then voice it by artificial means?
Research labs are now employing neural network theory to determine which pauses in an utterance are due to content (i.e., between words or sentences) and which are due to unwanted halts in speech production.
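The prediction systems mentioned above can be illustrated with a minimal frequency-ranked prefix completer; the toy lexicon and usage counts below are invented for the example, standing in for the language model a real communication aid would carry.

```python
def predictions(prefix, lexicon, n=3):
    """Return the n most frequent lexicon words starting with `prefix`.
    `lexicon` maps word -> usage count (a toy stand-in for a language model)."""
    matches = [w for w in lexicon if w.startswith(prefix)]
    matches.sort(key=lambda w: (-lexicon[w], w))   # frequent first, then alphabetic
    return matches[:n]

# Invented usage counts; a real aid would learn these from the user's own output.
lexicon = {"the": 50, "there": 12, "they": 20, "that": 30, "hello": 8, "help": 15}

print(predictions("th", lexicon))    # -> ['the', 'that', 'they']
print(predictions("he", lexicon))    # -> ['help', 'hello']
```

Every accepted prediction replaces several painstaking selections with one, which is exactly the rate acceleration these aids are designed to deliver.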
8.6 Appropriate Technology

Rehabilitation engineering lies at the interface of a wide variety of technical, biological and other concerns. A user might (and often does) put aside a technically sophisticated rehabilitation device in favor of a simpler device that is cheaper, and easier to use and maintain. The cosmetic appearance of the device (or cosmesis) sometimes becomes the overriding factor in acceptance or rejection of a device. A key design factor often lies in the use of the appropriate technology to accomplish the task adequately given the extent of the resources available to solve the problem and the residual capacity of the client. Adequacy can be verified by determining that increasing the technical content of the solution results in disproportionately diminishing gains or escalating costs. Thus, a rehabilitation engineer must be able to distinguish applications where high technology is required from those where such technology results in an incremental gain in cost, durability, acceptance and other factors.


Further, appropriateness will greatly depend on location. What is appropriate for a client near a major medical center in a highly developed country might not be appropriate for one in a rural setting or in a developing country. This is not to say that rehabilitation engineers should shun advances in technology. In fact, a fair proportion of rehabilitation engineers work in a research setting where state-of-the-art technology is being applied to the needs of the disabled. However, it is often difficult to transfer complex technology from a laboratory to disabled consumers not directly associated with that laboratory. Such devices are often designed for use only in a structured environment, are difficult to repair properly in the field, and often require a high level of user interaction or sophistication. Technology transfer in the rehabilitation arena is difficult, due to the limited and fragmented market. Advances in rehabilitation engineering are often piggybacked onto advances in commercial electronics. For instance, the exciting developments in text-to-speech and speech-to-text devices mentioned above are being driven by the commercial marketplace, not by the rehabilitation arena. But such developments will be welcomed by rehabilitation engineers nonetheless.
8.7 The Future of Engineering in Rehabilitation

The traditional engineering disciplines permeate many aspects of rehabilitation. From an electrical engineering perspective, signal processing, control and information theory, materials design, and computers are all in widespread use. Neural networks, microfabrication, fuzzy logic, virtual reality, image processing and other emerging electrical and computer engineering tools are increasingly being applied. Mechanical engineering principles are used in biomechanical studies, gait and motion analysis, prosthetic fitting, seat cushion and back support design, and the design of artificial joints. Materials and metallurgical engineers provide input on newer biocompatible materials. Chemical engineers are developing implantable sensors. Industrial engineers are increasingly studying rehabilitative ergonomics. The challenge to rehabilitation engineers is to find advances in any field, engineering or otherwise, that will aid their clients who have a disability.
8.8 Future Developments

The field of rehabilitation engineering, both in research and in service delivery, is at an important crossroad in its young history. Shifting paradigms of services, reductions in research funding, consumerism, credentialing, health care reform and limited formal educational options all make speculating on what the future may bring rather hazy. Given all this, it is reasonable to say that one group of rehabilitation engineers will continue to advance the state of the art through research and development, while another group will be on the front lines as members of clinical teams working to ensure that individuals with disabilities receive devices and services that are most appropriate for their particular needs. The demarcation between researchers and service providers will become clearer, since the latter will become credentialed. RESNA and its professional specialty group (PSG) on rehabilitation engineering are working out the final credentialing steps for the Rehabilitation Engineer (RE) and the Rehabilitation Engineering Technologist (RET). Both must also be an ATP. They will be recognized as valued members of the clinical team by all members of the rehabilitation community, including third-party payers, who will reimburse them for the rehabilitation engineering services that they provide. They will spend as much or more time working in the community as they will in clinical settings. They will work closely with consumer-managed organizations, which will be the gatekeepers of increasing amounts of government-mandated service dollars.

If these predictions come to pass, the need for rehabilitation engineering will continue to grow. As medicine and medical technology continue to improve, more people will survive traumatic injury, disease, and premature birth, and many will acquire functional impairments that impede their involvement in personal, community, educational, vocational, and recreational activities. People continue to live longer lives, thereby increasing the likelihood of acquiring one or more disabling conditions during their lifetime. This presents an immense challenge for the field of rehabilitation engineering. As opportunities grow, more engineers will be attracted to the field. More and more rehabilitation engineering education programs will develop that will support the training of qualified engineers, engineers who are looking for exciting challenges and opportunities to help people live more satisfying and productive lives.

