
Article

Low-cost Volumetric Ultrasound by Augmentation of 2D Systems: Design and Prototype

Ultrasonic Imaging 1-14
© The Author(s) 2017
DOI: 10.1177/0161734617718528
ultrasonicimaging.sagepub.com

Carl D. Herickhoff1, Matthew R. Morgan2, Joshua S. Broder3, and Jeremy J. Dahl1

1 Stanford University School of Medicine, Palo Alto, CA, USA
2 Duke University, Durham, NC, USA
3 Duke University School of Medicine, Durham, NC, USA

Corresponding Author: Carl D. Herickhoff, Stanford University School of Medicine, 3155 Porter Dr., MC5483, Palo Alto, CA 94304, USA. Email: cdh1@stanford.edu

Abstract
Conventional two-dimensional (2D) ultrasound imaging is a powerful diagnostic tool in the
hands of an experienced user, yet 2D ultrasound remains clinically underutilized and inherently
incomplete, with output being very operator dependent. Volumetric ultrasound systems can
more fully capture a three-dimensional (3D) region of interest, but current 3D systems require
specialized transducers, are prohibitively expensive for many clinical departments, and do not
register image orientation with respect to the patient; these systems are designed to provide
improved workflow rather than operator independence. This work investigates whether it is
possible to add volumetric 3D imaging capability to existing 2D ultrasound systems at minimal
cost, providing a practical means of reducing operator dependence in ultrasound. In this paper,
we present a low-cost method to make 2D ultrasound systems capable of quality volumetric
image acquisition: we describe the general system design and image acquisition method, including
the use of a probe-mounted orientation sensor, a simple probe fixture prototype, and an offline
volume reconstruction technique. We demonstrate initial results of the method, implemented
using a Verasonics Vantage research scanner.

Keywords
volumetric ultrasound, 3D imaging, low-cost, sensor-based reconstruction, operator dependence

Introduction
Ultrasound Today: Benefits and Challenges
Ultrasound has broad clinical applicability due to its many advantages over other medical imag-
ing modalities. In contrast to computed tomography (CT), fluoroscopy, positron emission tomog-
raphy, or magnetic resonance imaging (MRI), ultrasound imaging does not involve ionizing
radiation (e.g., x-rays or gamma rays) or high magnetic fields, and is thus very safe for patients,
including those who are pregnant or have implanted hardware.1-5 Ultrasound can also acquire
high-quality images—with resolution below 1 mm, good soft-tissue contrast, and the option of
visualizing blood flow—at very high frame rates, enabling clinicians to observe multiple organ
systems in real time. Ultrasound transducer hardware can be flexibly designed and built into
nearly any conceivable package, ranging from hand-held probes to endoscopes and intravascular
catheters. Furthermore, scanners can be made to be relatively inexpensive, low power, and por-
table, with laptop and hand-held form factors becoming increasingly available in recent years, in
addition to traditional cart-based systems. For these reasons, ultrasound excels in areas where
other modalities struggle, such as cardiac imaging (with the challenge of cardiovascular and
respiratory motion); fetal imaging (where safety is a paramount concern); interventional proce-
dure guidance (where speed and customizable hardware are important); and critical, emergency,
and out-of-hospital care (where portability is essential).
Despite its many desirable aspects and the several applications for which it is uniquely well-
suited, ultrasound has historically faced several challenges that have limited its clinical use.
Conventional two-dimensional (2D) ultrasound imaging utilizes a hand-held probe, consisting of
a one-dimensional transducer array and an elevation-focused acoustic lens; the probe is held in
contact with the patient, and the scanner controls the phased excitations of the array elements to
focus and steer the acoustic beam in depth and azimuth to generate a 2D image. However, it is
important to note that 2D ultrasound imaging is inherently incomplete—it can only yield selec-
tive cross-sectional sampling of a complete three-dimensional (3D) anatomic volume—and the
acquired image is dependent on the placement of the probe relative to the body and the target
structure. It follows that (a) the exact orientation of the acquired image plane relative to the
patient is not precisely known nor usually aligned with any of the standard sagittal, coronal, or
transverse anatomical planes, and (b) a significant level of skill is required of the user to obtain a
recognizable, clinically useful, high-quality ultrasound image.6,7 Many physicians lack the skills
necessary to utilize ultrasound effectively in a variety of clinical diagnostic procedures, and
advanced ultrasonography skills are obtained through education and training, with 12 months of
full-time clinical experience generally required for certification by the American Registry for
Diagnostic Medical Sonography.
Ultrasound is thus currently considered to be an “operator dependent” imaging modality, due
to the variability in probe and system manipulation skills (and resulting ultrasound images) that
exists among sonographers and differences in clinical knowledge and training of the operator or
reader. The lack of precisely known image orientation adds further ambiguity as to what view
and which structures are shown in a given image.8 This is a substantial issue, compounded by
the fact that acquisition and interpretation of the ultrasound images are often performed by dif-
ferent people (i.e., a sonographer and a clinician, respectively) at different times—the sonogra-
pher may not realize which specific structures or features are of greatest clinical interest nor
which specific 2D image plane is most revealing. If a clinical question cannot be answered by
the images saved by the sonographer, the clinician must order a repeat exam or order images
from another modality.
Volumetric 3D ultrasound imaging, which can more completely capture a region of interest,
was made clinically feasible by the development of motor-controlled “wobbler” and 2D “matrix”
array probes, which allow for acoustic beam steering in the elevation dimension in addition to the
azimuth dimension. However, the exact orientation of the volumetric images is not registered with
respect to the patient’s frame of reference, so a high level of skill and experience is still required
for effective image acquisition and interpretation—thus the problem of operator dependence
remains.9 In addition, the design and fabrication of these probes involve significant challenges,
and the increased data-processing complexity necessary to achieve volumetric 3D imaging results
in a very high cost of equipment (often well over $10,000 per probe, and $200,000 or more per
scanner), which can be prohibitively expensive for many clinics that wish to use ultrasound imag-
ing. Due to these high costs, 3D ultrasound equipment has only been commercially developed for
particular clinical areas and applications, such as cardiac and fetal imaging.

Previous Use of Sensors with Ultrasound


The integration of sensing technologies with ultrasound probes for volumetric image reconstruc-
tion as an alternative to wobbler and matrix-array probes has been investigated and patented as
far back as the 1990s.10,11 Most of these efforts have been in the pursuit of freehand acquisition
and/or ultrasound-guided interventions.12-14 These methods have required integration of ultra-
sound imaging with real-time tracking of probe pose (i.e., position and orientation), often coor-
dinated with an interventional tool (e.g., insertion of a needle), and possibly involving fusion of
ultrasound with another imaging modality, such as MRI or CT.14,15 Several open-source software
libraries have been developed and utilized for these purposes, including SynchroGrab, IGSTK,
Stradwin, MUSiiC, MITK-US, CISST, OpenTracker, VRPN, and PLUS.14,16-23
To precisely determine both the position and orientation (pose) of the ultrasound probe as it
acquires 2D images and is translated over the surface of the patient, a 3D Cartesian coordinate
system (a.k.a. reference frame) must be established and the transformations between each object
specified.12,14 Spatial location tracking is thus a functional requirement, and it is most often
achieved using either an electromagnetic (EM) field sensor or pre-calibrated optical tracking
setup (involving multiple cameras) to return both spatial coordinates and quaternions.12,14
Quaternion space is four-dimensional, and the set of quaternions forms a noncommutative divi-
sion algebra, providing a concise notation for completely representing orientation or relative
rotation in space, without the issue of gimbal lock associated with Euler angles.24 Quaternion
representation has become the generally accepted standard parameterization of rotation, used
extensively in computer graphics, robotics, and flight dynamics.25
There are several examples in the literature of position tracking systems used for freehand
2D-to-3D ultrasound acquisition and reconstruction, including Aurora, trakSTAR, and Polaris
(Northern Digital Inc., Waterloo, Ontario, Canada) and FASTRAK (Polhemus, Colchester,
Vermont).18,26-29 Compared with EM systems, optical systems such as the Polaris tend to be more
accurate but also require a clear visual line of sight to a target object or marker.18 EM systems are
thus more appropriate for applications involving guidance of devices within the body, but EM
systems require a wired transmitter box to create a field that has limited range (approximately a
0.5-m radius) and is susceptible to distortion error in the presence of materials often found in
various equipment in clinical environments: pulsed direct-current systems (e.g., trakSTAR) are
more sensitive to ferromagnetic materials, whereas alternating-current systems (e.g., Aurora,
FASTRAK) are more sensitive to metal due to induced current therein.25,26,30-34 These spatial
position tracking systems involving EM sensors or optical setups can be expensive or cumber-
some to calibrate, and add clutter to the clinical environment.25,30
In recent years, there have been efforts to obtain sufficient probe pose and trajectory infor-
mation for 2D-to-3D reconstruction using low-cost sensors attached to ultrasound probes.
These methods incorporate an inertial sensor (e.g., three orthogonally mounted gyroscopes, or
a Nintendo Wii™ remote’s three-axis accelerometer) to detect angular rate of change and an
optical mouse sensor to track translation along the body surface, resulting in 5 degree-of-
freedom tracking.35-37 However, these studies have noted bias and drift in the inertial and opti-
cal mouse sensor readings; these effects can degrade the accuracy of the reconstruction and
may limit the allowable scan duration. Attempts have been made to correct for bias and drift
errors by comparing and combining inertial and optical tracking with ultrasound signal
decorrelation.37-39

Ultimately, these previous sensor-based 3D ultrasound acquisition and reconstruction methods have not proven to be cost-effective or practical for most clinical uses of ultrasound.25

Orientation Sensor Advancement


Microelectromechanical systems (MEMS) utilize silicon processing techniques to create
microscopic devices, and MEMS-based orientation sensor technology has rapidly advanced
and become very low cost in the last 10 years, thanks in large part to the development and
proliferation of smartphones.40 The earliest touchscreen-based smartphones were made capa-
ble of detecting the phone’s “portrait” or “landscape” orientation by utilizing silicon-based
gyroscope, accelerometer, and magnetometer MEMS integrated in an inertial measurement
unit (IMU). These IMU sensors are also used to enable image stabilization techniques for
hand-held cameras, including smartphone cameras.41,42 Thus, orientation sensors have become
increasingly compact and inexpensive, as hundreds of millions of smartphones are produced
and sold annually.
In this paper, we present a low-cost method of acquiring and reconstructing a complete 3D
ultrasound image volume, as a means of reducing operator dependence. Use of a low-cost orien-
tation sensor and features of a probe fixture prototype design are described, and the setup and
results of preliminary experiments to demonstrate the method are presented.

Methods
General Design and Acquisition
A low-cost, commercially available IMU module incorporating a three-axis gyroscope, three-
axis accelerometer, and three-axis magnetometer (iNEMO-M1; STMicroelectronics, Geneva,
Switzerland) was selected to attach to an ultrasound probe. Because position measurement from
such IMU sensors (distance calculated by twice integrating accelerometer readings) is prone to
accumulating bias and drift error, only the sensor's orientation feedback was utilized.43 The IMU firmware applies an extended Kalman filter to its raw acceleration, angular-rate, and magnetic-field data to calculate and return orientation information.44,45
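As a concrete illustration of why twice-integrated position was avoided, consider the following minimal Python sketch (ours, not the paper's; the bias magnitude is an assumed value), which shows a small constant accelerometer bias growing quadratically into position error over even a short sweep:

    import numpy as np

    # A constant accelerometer bias b, integrated twice, produces a position
    # error of 0.5 * b * t**2 -- quadratic growth with scan time.
    dt, t_end, bias = 1.0 / 400, 3.0, 0.01    # 400-Hz sampling, 3-s sweep, m/s^2
    t = np.arange(0.0, t_end, dt)
    accel = np.full_like(t, bias)             # true probe motion assumed zero
    vel = np.cumsum(accel) * dt               # first integration: velocity error
    pos = np.cumsum(vel) * dt                 # second integration: position error
    print(f"Position error after {t_end:.0f} s: {100 * pos[-1]:.1f} cm")  # ~4.5 cm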
To interrogate and reconstruct a volume using 2D ultrasound, it is necessary to sweep the 2D
image plane through the target volume (region of interest) and use feedback from a sensor
attached to the probe to determine the relative location of each acquired 2D image plane within
the volume. Because the sensor feedback is limited to probe orientation, the location of the 2D
image plane can be inferred only when the motion of the probe is limited to rotation about a pre-
determined axis or point, fixed with respect to the body (see Figure 1).
By using a simple fixture to restrict the probe’s range of motion and “tagging” each acquired
2D image with a probe orientation reading (sampling rate of 400 Hz), the location of the 2D
image with respect to this fixed axis of rotation can be calculated, and these tagged 2D images can be assembled into a 3D volumetric ultrasound image.
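As an illustration of the tagging step, each saved frame can be paired with the orientation sample nearest in time; the sketch below is our own assumption of how such synchronization might be coded, as the paper does not specify an implementation:

    import numpy as np

    def tag_frames(frame_times, sensor_times, sensor_quats):
        """Pair each saved 2D frame with the 400-Hz orientation sample nearest
        in time. frame_times and sensor_times are timestamps in seconds;
        sensor_quats is an (n_samples, 4) array of unit quaternions."""
        idx = np.clip(np.searchsorted(sensor_times, frame_times),
                      1, len(sensor_times) - 1)
        # step back one sample wherever the earlier neighbor is closer in time
        left_closer = (frame_times - sensor_times[idx - 1]) < \
                      (sensor_times[idx] - frame_times)
        return sensor_quats[idx - left_closer.astype(int)]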
The orientation is given by the sensor as a unit quaternion. To establish a predetermined pivot axis, a calibration reading, q_0, is taken with the probe held at the approximate midplane of the intended sweep. A transformed quaternion, q, can then be calculated using the sensor's measured quaternion, q_meas, and the orientation of the sensor's native coordinate system, given by q_coord. The value of q is given by

q = q_{coord}^{-1} (q_0^{-1} q_{meas}) q_{coord},    (1)

where the superscript −1 denotes the quaternion inverse. The transformed quaternion, q, thus provides a basis for calculating the angle of the probe and image plane about the established pivot axis, relative to the calibrated midplane of the sweep (a numpy sketch of this computation follows the figure caption below).

Figure 1.  Three possible (perpendicular) pivot axes. By limiting the range of freehand motion of the probe to pivoting about any single axis, an orientation sensor mounted on the probe can detect the relative position of the acquired 2D image planes, and a 3D volumetric image can be reconstructed. Pivot axes parallel to the body surface are shown in (a) and (b); a pivot axis normal to the body surface is shown in (c). 3D = three-dimensional; 2D = two-dimensional.
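A minimal numpy sketch of Equation (1) and the subsequent angle extraction, assuming unit quaternions in (w, x, y, z) order (function names are illustrative, not part of the described system):

    import numpy as np

    def q_mult(a, b):
        """Hamilton product of two quaternions in (w, x, y, z) order."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def q_inv(q):
        """Inverse of a unit quaternion, i.e., its conjugate."""
        return q * np.array([1.0, -1.0, -1.0, -1.0])

    def transformed_quaternion(q_meas, q_0, q_coord):
        """Equation (1): the measured orientation expressed relative to the
        calibration reading q_0, in the sensor's native coordinate frame."""
        return q_mult(q_inv(q_coord),
                      q_mult(q_mult(q_inv(q_0), q_meas), q_coord))

    def sweep_angle(q, axis=0):
        """Rotation angle about a single coordinate axis, recovered from the
        unit quaternion q = (cos(theta/2), sin(theta/2)*u); valid because the
        fixture confines the motion to that one axis."""
        return 2.0 * np.arctan2(q[1 + axis], q[0])

For a sweep confined to the fixture's pivot axis, sweep_angle(q) then gives each tagged frame's angular position directly.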
To establish orientation of the tagged 2D images and reconstructed 3D volume with respect to
a patient’s anatomical frame of reference, an additional calibration step is performed. This is
done by placing the ultrasound probe on the patient’s sternum with the probe’s index bump facing
toward the head, and saving the orientation reading from the sensor. Provided that the patient
remains stationary during subsequent acquisition(s), this saved reading of the patient’s orienta-
tion can be used as a reference to do a simple transformation (as above in Equation (1)) of the
tagged 2D images and reconstructed 3D volume to re-orient them relative to the patient’s ana-
tomical axes. This transformation enables images and volumes to be displayed in the proper orientation with respect to the patient; the ultrasound data can thus be presented as a stack of transverse, sagittal, or coronal slices (similar to MRI and CT datasets).
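The anatomical re-orientation is the same algebra; reusing transformed_quaternion() from the sketch above, with the saved sternum reading as the reference (variable names again illustrative):

    # q_ref: orientation saved with the probe on the sternum (index bump
    # toward the head); q_meas: reading tagged to an acquired frame.
    q_anatomical = transformed_quaternion(q_meas, q_ref, q_coord)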

Probe Fixture: Functional Design


A probe fixture was designed to establish a precise and repeatable sweep path for the probe rela-
tive to the predetermined axis or point described previously. By utilizing a fixture, the probe
motion is stabilized and restricted to a single degree of freedom (i.e., pivoting about an axis) and
the position of each image frame relative to the pivot axis can be uniquely determined using an
orientation measurement; this approach removes the problem of accelerometer noise and drift
noted in the IMU-based freehand 3D acquisition literature, and eliminates the need for any posi-
tional tracking in general.35,36 The size and shape of the probe were measured with calipers or
otherwise estimated. From these probe dimensions, a fixture was designed using the 3D CAD
program SOLIDWORKS (Dassault Systèmes SOLIDWORKS Corp., Waltham, Massachusetts).
The fixture had two parts: a “base” (to be held against the body, with grooves to establish two
possible pivot axes—both oriented parallel to the body surface with a common origin, but per-
pendicular to each other) and a “cradle” (to interface with the head of the probe and provide
features to fit and pivot within the grooves of the base); see Figure 2. For use with this fixture,
the sensor was secured to the probe with Velcro straps (see Figure 2).

Figure 2.  Left: probe fixture drawing. Probe head nested into cradle piece, to pivot within grooves of
the base piece to establish a pivot axis parallel to the body surface. Another pair of grooves is available
for pivoting about another, perpendicular pivot axis (also parallel to the body surface). Right: orientation
sensor attached to an ultrasound probe with Velcro straps.

Volume Reconstruction: Initial Experiments


With the orientation sensor and the first-generation fixture attached to the probe to restrict its motion, the first dataset was acquired by synchronizing the sensor readings with 2D
images acquired with a Broadsound L12-5 transducer (Broadsound Corp., Taiwan) and a
research ultrasound scanner (Vantage 256; Verasonics, Inc., Kirkland, Washington), as dia-
grammed in Figure 3. The fixture base was firmly held in place above the target by a ring stand
clamp, and the probe and cradle were manually and slowly pivoted within the base about each
of the available (perpendicular) axes, sweeping the imaging plane through the target volume
two separate times to generate two perpendicular image volume datasets with a common origin.
Image data were acquired of a hard-boiled chicken egg with a cracked shell sitting upright in a
room-temperature water bath and of a human fetal phantom of 21-week gestation (Model 068;
CIRS, Inc., Norfolk, Virginia).
Each acquired dataset consisted of 200 saved 2D ultrasound images with associated orientation
readings from a sweep about a single axis. MATLAB (MathWorks, Inc., Natick, Massachusetts)
was used to implement an offline, voxel-based 3D volume reconstruction method using the orien-
tation-tagged 2D image frames (pixels).13 In this reconstruction method, a 3D voxel mesh is pre-
allocated in memory as the output volume, the orientation-tagged 2D frames are spatially registered
with respect to the known pivot axis, and the voxel values are assigned by averaging the four
nearest-neighbor pixels’ values from the saved 2D image planes (see Figure 4).
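A simplified sketch of this scheme follows, for a sweep about a pivot axis taken along the transducer face (the x-axis); nearest-neighbor lookup stands in for the four-pixel averaging described above, and the geometry, units, and parameter names are our assumptions:

    import numpy as np

    def reconstruct_volume(frames, angles, dr, dx, vol_shape, voxel):
        """Fill a voxel mesh from orientation-tagged 2D frames swept about
        a single pivot axis (here the x-axis, along the transducer face).
          frames:    (n_frames, n_depth, n_azimuth) scan-converted pixels
          angles:    sweep angle (rad) of each frame, from its tagged reading
          dr, dx:    pixel spacing in depth and azimuth (same units as voxel)
          vol_shape: (nx, ny, nz) dimensions of the preallocated voxel mesh
          voxel:     isotropic voxel edge length"""
        n_frames, n_depth, n_az = frames.shape
        nx, ny, nz = vol_shape
        x, y, z = np.meshgrid((np.arange(nx) - nx / 2) * voxel,  # along pivot axis
                              (np.arange(ny) - ny / 2) * voxel,  # across the sweep
                              np.arange(nz) * voxel,             # depth below pivot
                              indexing="ij")
        phi = np.arctan2(y, z)        # each voxel's angle about the pivot axis
        r = np.hypot(y, z)            # each voxel's distance from the axis
        # nearest tagged frame per voxel (adequate for small grids; a practical
        # implementation would sort or bin the angles instead)
        f = np.abs(phi[..., None] - np.asarray(angles)).argmin(axis=-1)
        row = np.clip((r / dr).astype(int), 0, n_depth - 1)
        col = np.clip((x / dx).astype(int) + n_az // 2, 0, n_az - 1)
        return frames[f, row, col]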

Figure 3.  Low-cost 3D system schematic. Each image frame is labeled with data indicating the
ultrasound probe orientation prior to reconstruction.

Each perpendicularly acquired volume was inserted into its own 3D voxel mesh. The two
perpendicularly acquired volumes were then merged by multiplying the meshes on a voxel-by-
voxel basis, to highlight the intersection of the data and suppress artifacts due to noise in the
orientation sensor and the resulting mis-registration of 2D planes in the volume reconstruction
process.
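The merge step itself can then be as simple as the following (the peak normalization is our choice, not a stated detail of the method):

    import numpy as np

    def merge_volumes(vol_a, vol_b):
        """Voxel-wise product of two perpendicularly swept reconstructions on
        the same mesh: structure present in both sweeps is emphasized, and
        mis-registered planes unique to either sweep are suppressed."""
        return (vol_a / vol_a.max()) * (vol_b / vol_b.max())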
To visualize the acquired 3D datasets offline, we used 3D Slicer, an open-source software
platform for image processing and visualization.44 All datasets were saved in either MetaImage
(Kitware, Inc., Clifton Park, New York) or Digital Imaging and Communications in Medicine file
format, which can be easily loaded and manipulated in a variety of programs. Although 3D
Slicer’s image-processing and volume rendering toolbox is largely devoted to MRI, CT, and
nuclear medicine datasets, many of its features were readily adaptable to the 3D ultrasound data-
sets generated by our device, particularly multi-planar and 3D-rendered visualization of volumet-
ric datasets.

Auto-orientation Labels and Graphics


Graphical display enhancements were developed using the OpenGL API (Khronos Group,
Beaverton, Oregon) and the IMU’s streaming orientation data. These enhancements included
dynamic display of the live image with accurate anatomical orientation labels and a graphical
display of the probe relative to the body. The anatomic orientation labels indicate the patient’s
left, right, anterior, posterior, superior, and inferior directions, and the labels are overlaid on the
top, bottom, left, and right of the screen simultaneously. A label in the lower left-hand corner of
the screen is also included to indicate the approximate anatomical plane (transverse, sagittal, or
coronal) of the image. If the probe orientation falls between two cardinal planes, the system dis-
plays modified labels representing the two closest anatomical planes, such as “sagittal/coronal”
in the lower left-hand corner. The margins of the 2D image sector also display modified labels,
such as “A/L” or “P/R” (indicating anterior/left or posterior/right). The graphical display includes
a mannequin torso adjacent to the image, displayed with a graphic of an ultrasound probe freely
twisting and rotating around the torso in real time in response to the sensor feedback to indicate
the general orientation of the probe and image plane with respect to the body axis (see Figure 5);
because no probe position information is obtained, the probe graphic is intentionally rendered
large relative to the torso so as to emphasize only a general orientation with respect to the body
axis and not imply a specific position. The automatic image labels and graphical representations of probe orientation can be saved with the 2D images to reduce the burden of documentation on the sonographer and to aid interpretation (a sketch of one such labeling computation follows the figure captions below).

Figure 4.  Diagram of a voxel-based volume reconstruction scheme. The empty 3D voxel mesh is traversed (white arrows), the proximity from the current voxel to the nearest available 2D image frame (based on the tagged orientation angles about a fixed axis) is calculated, and pixel data from the appropriate 2D image frame(s) are assigned to the voxels of the 3D mesh.

Figure 5.  Sample orientation display. Left: live image input with overlaid, live-updating anatomical labels surrounding the imaging sector and an anatomical plane label in the lower left-hand corner. Right: mannequin display with a freely rotating probe graphic to indicate the approximate orientation of the probe in real time.
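One way such labels could be computed is sketched below: the image-plane normal is rotated into the patient frame and compared against the three anatomical axes. The probe-axis convention, the 30° ambiguity threshold, and all names here are our assumptions, not the paper's stated implementation:

    import numpy as np

    PLANES = ("sagittal", "coronal", "transverse")  # normals: L-R, A-P, S-I axes

    def rotate(q, v):
        """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
        w, u = q[0], np.asarray(q[1:])
        return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

    def plane_label(q_anatomical, threshold_deg=30.0):
        """Label the live image with the nearest anatomical plane, or with a
        combined label (e.g., 'sagittal/coronal') when the probe sits between
        two cardinal planes. The image-plane normal is assumed to be the
        probe's y-axis."""
        normal = rotate(q_anatomical, np.array([0.0, 1.0, 0.0]))
        scores = np.abs(normal) / np.linalg.norm(normal)   # |cos| to each axis
        order = np.argsort(scores)[::-1]
        tilt = np.degrees(np.arccos(np.clip(scores[order[0]], -1.0, 1.0)))
        if tilt > threshold_deg:
            return f"{PLANES[order[0]]}/{PLANES[order[1]]}"
        return PLANES[order[0]]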

Results
The acquisition routine was able to save 2D images from the Verasonics research scanner with
tagged orientation readings from the sensor. The frame rate of the Verasonics scanner was approximately two frames per second, leading to a sweep time of approximately 1.5 minutes. Offline reconstruction of the image volumes using MATLAB on a workstation with a 2.4-GHz processor and 8 GB of RAM took approximately 15 minutes per volume.

Figure 6.  Volumetric reconstruction (from two perpendicular sweeps, merged) of a chicken egg with a cracked shell, viewed in 3D Slicer. Arrows indicate the 2-mm cracked-shell flap, clearly seen in the 3D rendering and in two of the perpendicular cross-sectional views.
Figure 6 (left) shows basic volumetric reconstruction of a cracked hard-boiled chicken egg
sitting upright in a room-temperature water bath using the default volume rendering software in
3D Slicer. The dome-like surface of the egg is seen in the rendered volume image, with the dis-
placed egg shell fragment visible (black arrows). The egg shell is approximately 1 mm thick, and
its surface appears rough due to slight mis-registration between the 2D planes used in the volume
reconstruction and between the perpendicularly acquired volumes prior to merging. No addi-
tional post-processing of the image data was done prior to volume rendering to improve image
quality. To the right of the rendered volume, orthogonal cross-sectional slice views of the image
volume are shown.
Figure 7 shows two views of the volumetric reconstruction of the hand of a 21-week human
fetal phantom. The varying lengths of the fingers and the ridges between them are distinguish-
able; the width of the entire hand is approximately 1 cm.

Discussion
These initial volumetric 3D images demonstrate the feasibility, challenges, and opportunities of
a low-cost 3D acquisition and reconstruction method. A single IMU sensor and a simple, light-
weight plastic fixture, which is customizable to the probe, are the only hardware the method
requires. Although customized holders or mounts have previously been used to attach or con-
strain ultrasound probes,45-54 the use of a low-cost orientation sensor with a simple fixture to limit
probe motion to rotation about an axis or point for an acquisition sweep is unique. The resulting
image acquisition process is simple, requiring only a single-degree-of-freedom pivot sweep of the probe, and it results in volumetric images with notable features, good resolution, and an intuitive display of the probe orientation.

Figure 7.  Volumetric reconstruction (from two perpendicular sweeps, merged) of the fingers of a fetal phantom. Ridges, grooves, and the varied lengths of the fingers are clearly seen. In total, the four fingers measure approximately 1 cm across.

It is also important to note that these ultrasound volume images
may be calibrated and transformed to match the patient’s frame of reference, and reviewed as a
stack of properly oriented transverse, sagittal, or coronal slices, in the same manner as CT and
MRI datasets; this frame of reference is more familiar to radiologists and other clinicians, thus
promising minimal workflow disruption and more intuitive image interpretation. However, there
are currently some notable limitations, and several potential refinements to this system augmen-
tation could improve performance.
One limitation of all 2D-to-3D ultrasound acquisition methods is that the volume is not pro-
duced repeatedly in real time, as can be done with live-3D matrix-array probes. Certain interven-
tional procedures require real-time volumetric ultrasound imaging guidance, and the described
low-cost 3D method would be inappropriate for such applications. In addition, the presented
2D-to-3D method is susceptible to distortion error from motion of the target tissue during an
acquisition sweep (however, error from cardiac or lung motion may be alleviated by incorporat-
ing ECG or respiratory gating into the acquisition and reconstruction55-58); in its current form, the
presented method is better suited for imaging stationary targets. Furthermore, freehand 3D ultra-
sound via probe position tracking allows a variety of scanning patterns (e.g., linear translation
normal to the image plane or panoramic sweep) and an expandable field of view, but the low-cost
3D method presented here has the drawback of a limited field of view, due to the fixture con-
straining the probe motion to pivoting about an axis.18,59
Future refinements to the current implementation may significantly improve imaging perfor-
mance. Because integration of the sensor with the Verasonics research scanner software was not
optimized for speed, the internal image save and sensor reading routine (i.e., the acquisition frame
rate) was relatively slow, requiring a longer sweep time and increasing the likelihood of target
motion or accidental probe translation during the sweep. Roughness in the 3D-printed fixture
(particularly for the first-generation fixture) and noise in the sensor reading can lead to non-uni-
form sweep speed (i.e., the cradle may slightly “catch” on the fixture base), non-uniform angular
sampling, and/or improper location of the 2D images with respect to the pivot axis. All of these
factors are possible sources of error that can lead to artifacts on feature surfaces in the 3D recon-
structions. These problems can be mitigated by smoothing the fixture surfaces to mate well with
the transducer and pivot smoothly about the intended axis, and by applying filtering to the sensor readings to reduce noise. In addition, increasing the acquisition frame rate (thus reducing the
time to complete an acquisition sweep) would reduce the potential for error and result in higher
quality volume reconstructions. The volume reconstruction process was undesirably slow for
these initial experiments, but better algorithms and dedicated hardware (e.g., a programmed graphics processing unit) promise to reduce the reconstruction time considerably.
Reduced acquisition and reconstruction time is essential to enable more advanced and clinical
use of this setup. If, for example, frames of radiofrequency (RF) channel data were rapidly saved
instead of pixel data (as was done in the experiments described here), several different beamforming techniques, such as swept synthetic aperture, could be applied.60 However, operations
such as saving RF data and programming custom beam sequences require access to the scanner’s
internal software, which is generally restricted and proprietary for clinical ultrasound scanners.
Channel RF data from the Verasonics research scanner were not acquired in these experiments
due to the additional computational cost of beamforming and scan conversion that would have
been needed before the voxel-based reconstruction method could be implemented. Integration
with an existing clinical scanner would entail (a) designing a fixture customized for the intended
probe, (b) appropriate attachment of the low-cost orientation sensor to the probe, and (c) capture
of the 2D image frames from a video feed—potentially with post-processing.
Despite having particular limitations and need for refinement, this low-cost 3D method has
promise for numerous clinical applications. A system equipped with this device could enable
quick screening for vascular disease such as abdominal aortic aneurysm and carotid stenosis,
which may be readily performed by a novice user. Guidance or merely confirmation of needle
placement for tumor biopsy or ablation is another possible application for the device—in this situ-
ation, 3D imaging provides the clinician with a higher degree of confidence than 2D short-axis or
long-axis views. High-frequency probes could be used with the device to obtain high-resolution
volumetric images of the axillary lymph nodes as a screening tool for breast cancer. Low-cost 3D may
be clinically utilized in pediatrics as well, for brain volume or kidney volume scans.

Conclusion
Ultimately, this system augmentation enables volumetric 3D ultrasound imaging on 2D scanners
at a very low cost (under $250) through the addition of an orientation sensor and fixture. By
providing a simple and inexpensive means of acquiring complete volumetric 3D ultrasound
images with sensed orientation information and intuitive feedback displayed to the user, this is a
potential step toward solving the problem of operator dependence.
Currently, the most significant limitation of this method is that the imaging target must be
stationary for the duration of the acquisition sweep to avoid artifacts. Future work on this project
will seek to overcome this limitation and explore the problem of volumetric 3D reconstruction in
the presence of cardiac and respiratory motion.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publi-
cation of this article: This work was funded by a Stanford-Coulter Translational Research Award supported
by the Wallace H. Coulter Foundation.

References
1. Picano E. Sustainability of medical imaging. BMJ. 2004;328(7439):578-80.
2. Scarabino T, Nemore F, Giannatempo GM, Bertolino A, Di Salle F, Salvolini U. 3.0 T magnetic reso-
nance in neuroradiology. Eur J Radiol. 2003;48(2):154-64.
3. Schenck JF. Safety of strong, static magnetic fields. J Magn Reson Imaging. 2000;12(1):2-19.
4. Dempsey MF, Condon B, Hadley DM. MRI safety review. Semin Ultrasound CT MRI. 2002;23:392-401.
5. De Wilde J, Rivers A, Price D. A review of the current use of magnetic resonance imaging in pregnancy
and safety implications for the fetus. Prog Biophys Mol Biol. 2005;87(2):335-53.
6. Tolsgaard MG, Ringsted C, Dreisler E, Nørgaard LN, Petersen JH, Madsen ME, et al. Sustained effect
of simulation-based ultrasound training on clinical performance: a randomized trial. Ultrasound Obstet
Gynecol. 2015;46(3):312-8.
7. Ahmad R, Alhashmi G, Ajlan A, Eldeek B. Impact of high-fidelity transvaginal ultrasound simulation
for radiology on residents’ performance and satisfaction. Acad Radiol. 2015;22(2):234-39.
8. Berg WA, Blume JD, Cormack JB, Mendelson EB. Operator dependence of physician-performed
whole-breast US: lesion detection and characterization. Radiology. 2006;241(2):355-65.
9. Glor FP, Ariff B, Hughes AD, Verdonck PR, Thom SMG, Barratt DC, et al. Operator dependence of
3-D ultrasound-based computational fluid dynamics for the carotid bifurcation. IEEE T Med Imaging.
2005;24(4):451-56.
10. Hossack JA, Gallinat DJ, Petersen TE, Molinari JJ, Little SC. Medical diagnostic ultrasonic transducer
probe and imaging system for use with a position and orientation sensor. US Patent 6,338,716, 2002.
11. Emmenegger N, Engfer O. 3-D ultrasound recording device. US Patent 6,605,041, 2003.
12. Prager RW, Ijaz UZ, Gee A, Treece GM. Three-dimensional ultrasound imaging. Proc IMechE, Part H:
J Engineering in Medicine. 2010;224(2):193-223.
13. Solberg OV, Lindseth F, Torp H, Blake RE, Hernes TAN. Freehand 3D ultrasound reconstruction
algorithms: a review. Ultrasound Med Biol. 2007;33(7):991-1009.
14. Lasso A, Heffter T, Rankin A, Pinter C, Ungi T, Fichtinger G. PLUS: open-source toolkit for ultra-
sound-guided intervention systems. IEEE T Biomed Eng. 2014;61(10):2527-37.
15. Gee A, Prager R, Treece G, Berman L. Engineering a freehand 3D ultrasound system. Pattern Recogn
Lett. 2003;24(4):757-77.
16. Boisvert J, Gobbi D, Vikal S, Rohling R, Fichtinger G, Abolmaesumi P. An open-source solution for
interactive acquisition, processing and transfer of interventional ultrasound images. In: Proceedings
MICCAI 2008, pp. 1-8. [Online]. Available: http://hdl.handle.net/10380/1459
17. Cheng P, Zhang H, Kim HS, Gary K, Blake MB, Gobbi D, et al. IGSTK: Framework and example
application using an open source toolkit for image-guided surgery applications. In: Medical Imaging
2006: Visualization, Image-Guided Procedures, and Display, San Diego, CA, 12-14 February 2006,
vol. 6141. Bellingham, WA: International Society for Optics and Photonics.
18. Treece GM, Gee AH, Prager RW, Cash CJ, Berman LH. High-definition freehand 3-D ultrasound.
Ultrasound Med Biol. 2003;29(4):529-46.
19. Stolka PJ, Kang H, Boctor E. The MUSiiC toolkit: modular real-time toolkit for advanced ultrasound
research. In: Proceedings MICCAI 2010, pp. 1-11. [Online]. Available: http://hdl.handle.net/10380/3172
20. Marz K, Franz AM, Seitel A, Winterstein A, Bendl R, Zelzer S, et al. MITK-US: real-time ultrasound
support within MITK. Int J Comput Ass Rad. 2014;9(3):411-20.
21. Kapoor A, Deguet A, Kazanzides P. Software components and frameworks for medical robot con-
trol. In: Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA 2006),
Orlando, FL, 15-19 May 2006, pp. 3813-8. New York: IEEE.

22. Von Spiczak J, Samset E, DiMaio S, Reitmayr G, Schmalstieg D, Burghart C, et al. Multimodal event
streams for virtual reality. In: Electronic Imaging 2007, San Jose, CA, 28 January 2007, p. 65040M.
Bellingham, WA: International Society for Optics and Photonics.
23. Taylor RM II, Hudson TC, Seeger A, Weber H, Juliano J, Helser AT. VRPN: a device-independent,
network-transparent VR peripheral system. In: Proceedings of the ACM Symposium on Virtual Reality
Software and Technology, Banff, AB, Canada, 15-17 November 2001, pp. 55-61. New York: ACM.
24. Chou JC. Quaternion kinematic and dynamic differential equations. IEEE T Robotic Autom. 1992;8(1):53-64.
25. Birkfellner W, Hummel J, Wilson E, Cleary K. Tracking devices. In: Peters T, Cleary K, eds. Image-
Guided Interventions. New York: Springer; 2008. pp. 23-44.
26. Hastenteufel M, Vetter M, Meinzer HP, Wolf I. Effect of 3D ultrasound probes on the accuracy of
electromagnetic tracking systems. Ultrasound Med Biol. 2006;32(9):1359-68.
27. Treece GM, Prager RW, Gee AH, Berman L. Correction of probe pressure artifacts in freehand 3D
ultrasound. Med Image Anal. 2002;6(3):199-214.
28. Prager RW, Rohling RN, Gee AH, Berman L. Automatic calibration for 3-D free-hand ultrasound.
Cambridge (UK): University of Cambridge, Department of Engineering; 1997. 27p.
29. Prager RW, Gee AH, Treece GM, Cash CJ, Berman LH. Sensorless freehand 3-D ultrasound using
regression of the echo intensity. Ultrasound Med Biol. 2003;29(3):437-46.
30. Birkfellner W, Watzinger F, Wanschitz F, Enislidis G, Kollmann C, Rafolt D, et al. Systematic distor-
tions in magnetic position digitizers. Med Phys. 1998;25(11):2242-8.
31. Hummel J, Figl M, Kollmann C, Bergmann H, Birkfellner W. Evaluation of a miniature electromag-
netic position tracker. Med Phys. 2002;29(10):2205-12.
32. Schicho K, Figl M, Donat M, Birkfellner W, Seemann R, Wagner A, et al. Stability of miniature elec-
tromagnetic tracking systems. Phys Med Biol. 2005;50(9):2089-98.
33. Hummel JB, Bax MR, Figl ML, Kang Y, Maurer C, Birkfellner WW, et al. Design and application of
an assessment protocol for electromagnetic tracking systems. Med Phys. 2005;32(7):2371-9.
34. Jaberzadeh S, Scutter S, Zoghi M. Accuracy of an electromagnetic tracking device for measuring hip
joint kinematics during gait: effects of metallic total hip replacement prosthesis, source-sensor distance
and sensor orientation. Australas Phys Eng Sci Med. 2005;28(3):184-9.
35. Goldsmith A, Pedersen P, Szabo T. An inertial-optical tracking system for portable, quantitative, 3D
ultrasound. In: Ultrasonics Symposium (IUS 2008), Beijing, China, 2-5 November 2008, pp. 45-9.
New York: IEEE.
36. Stolka PJ, Choi J, Wang J, Choti M, Boctor EM. 5-DoF trajectory reconstruction for handheld ultra-
sound with local sensors. In: Ultrasonics Symposium (IUS 2009), Rome, Italy, 20-23 September 2009,
pp. 1864-7. New York: IEEE.
37. Stolka PJ, Wang XL, Hager GD, Boctor EM. Navigation with local sensors in handheld 3D ultrasound:
initial in-vivo experience. In: SPIE Medical Imaging, Lake Buena Vista, FL, 12 February 2011, p.
79681J. Bellingham, WA: International Society for Optics and Photonics.
38. Owen K, Mauldin FW, Hossack JA. Transducer motion estimation using combined ultrasound signal
decorrelation and optical sensor data for low-cost ultrasound systems with increased field of view. In:
Ultrasonics Symposium (IUS 2011), Orlando, FL, 18-21 October 2011, pp. 1431-4. New York: IEEE.
39. Housden RJ, Treece GM, Gee AH, Prager RW, Street T. Hybrid systems for reconstruction of freehand
3D ultrasound data. Cambridge, UK: University of Cambridge, Department of Engineering; 2007. 20 p.
40. Lammel G. The future of MEMS sensors in our connected world. In: 2015 28th IEEE International
Conference on Micro Electro Mechanical Systems (MEMS), Estoril, Portugal, 18-22 January 2015, pp.
61-4. New York: IEEE.
41. Chan KY, Rajakaruna N, Rathnayake C, Murray I. Image deblurring using a hybrid optimization algo-
rithm. In: 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6-11 July 2014,
pp. 1243-9. New York: IEEE.
42. Brill A, Frank JA, Kapila V. Using inertial and visual sensing from a mounted smartphone to stabilize
a ball and beam test-bed. In: American Control Conference (ACC 2016), Boston, MA, 6-8 July 2016,
pp. 1335-40. New York: IEEE.

43. Pang G, Liu H. Evaluation of a low-cost MEMS accelerometer for distance measurement. J Intell
Robot Syst. 2001;30(3):249-65.
44. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, et al. 3D Slicer as an image
computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(9):1323-41.
45. Berger MS. Ultrasound-guided stereotaxic biopsy using a new apparatus. J Neurosurg. 1986;65(4):550-4.
46. Giller CA, Giller AM. A new method for fixation of probes for transcranial Doppler ultrasound. J
Neuroimaging. 1997;7(2):103-5.
47. Gee AH, Houghton NE, Treece GM, Prager RW. A mechanical instrument for 3D ultrasound probe
calibration. Ultrasound Med Biol. 2005;31(4):505-18.
48. Hsu PW, Prager RW, Gee AH, Treece GM. Rapid, easy and reliable calibration for freehand 3D ultra-
sound. Ultrasound Med Biol. 2006;32(6):823-35.
49. Housden RJ, Gee AH, Treece GM, Prager RW. Sensorless reconstruction of unconstrained freehand
3D ultrasound data. Ultrasound Med Biol. 2007;33(3):408-19.
50. Comeau RM, Fenster A, Peters TM. Integrated MR and ultrasound imaging for improved image
guidance in neurosurgery. In: Medical Imaging’98, San Diego, CA, 21 January 1998, pp. 747-54.
Bellingham, WA: International Society for Optics and Photonics.
51. Comeau RM, Sadikot AF, Fenster A, Peters TM. Intraoperative ultrasound for guidance and tissue shift
correction in image-guided neurosurgery. Med Phys. 2000;27(4):787-800.
52. Hassenpflug P, Prager RW, Treece GM, Gee AH. Speckle classification for sensorless freehand 3-D
ultrasound. Ultrasound Med Biol. 2005;31(11):1499-508.
53. Elliot TL, Downey DB, Tong S, McLean CA, Fenster A. Accuracy of prostate volume measurements
in vitro using three-dimensional ultrasound. Acad Radiol. 1996;3(5):401-6.
54. Tong S, Downey D, Cardinal H, Fenster A. A three-dimensional ultrasound prostate imaging system.
Ultrasound Med Biol. 1996;22(6):735-46.
55. Belohlavek M, Foley DA, Seward JB, Greenleaf JF. Three-dimensional (3D) echocardiography: recon-
struction algorithm and diagnostic performance of resulting images. In: Visualization in Biomedical
Computing 1994, Rochester, MN, 4 October 1994, pp. 680-92. Bellingham, WA: International Society
for Optics and Photonics.
56. Delcker A, Tegeler C. Influence of ECG-triggered data acquisition on reliability for carotid plaque
volume measurements with a magnetic sensor three-dimensional ultrasound system. Ultrasound Med
Biol. 1998;24(4):601-5.
57. Palombo C, Kozakova M, Morizzo C, Andreuccetti F, Tondini A, Palchetti P, et al. Ultrafast three-
dimensional ultrasound: application to carotid artery imaging. Stroke. 1998;29(8):1631-7.
58. Barratt DC, Davies AH, Hughes AD, Thom SA, Humphries KN. Optimisation and evaluation of an
electromagnetic tracking device for high-accuracy three-dimensional ultrasound imaging of the carotid
arteries. Ultrasound Med Biol. 2001;27(7):957-68.
59. Stolka PJ, Kang HJ, Choti M, Boctor EM. Multi-DoF probe trajectory reconstruction with local sensors
for 2D-to-3D ultrasound. In: 2010 IEEE International Symposium on Biomedical Imaging: From Nano
to Macro, Rotterdam, 14-17 April 2010, pp. 316-9. New York: IEEE.
60. Bottenus N, Long W, Zhang H, Jakovljevic M, Bradway D, Boctor E, et al. Feasibility of swept syn-
thetic aperture ultrasound imaging. IEEE T Med Imaging. 2016;35(7):1676-85.
