
Computer Methods and Programs in Biomedicine 113 (2014) 894–903

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

A marker-based watershed method for X-ray image segmentation

Xiaodong Zhang a, Fucang Jia a, Suhuai Luo b, Guiying Liu a,c, Qingmao Hu a,∗

a Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, 1068 Xueyuan Boulevard, University Town of Shenzhen, Shenzhen 518055, PR China
b School of Design Communication and IT, The University of Newcastle, Callaghan, NSW 2308, Australia
c Nanfang Medical University, 1838 Guangzhou Avenue, Guangzhou 510515, PR China

Article history:
Received 14 May 2013
Received in revised form 30 October 2013
Accepted 20 December 2013

Keywords:
Computer-aided diagnosis
Direct radiography
X-ray image
Watershed segmentation

Abstract
Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and the multiscale gradient based watershed method were implemented for comparison. The proposed method yielded a dice coefficient of 0.964 ± 0.069, which was better than that of the manual thresholding (0.937 ± 0.119) and that of the multiscale gradient based watershed method (0.942 ± 0.098). Special means were adopted to decrease the computational cost, including getting rid of the few pixels with the highest grayscales via a percentile, calculation of gradient magnitudes through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072 × 3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, which was more than twice as fast as the multiscale gradient based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images.
© 2014 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

With the development of medical equipment and computer technologies, computer aided diagnosis (CAD) [1–4] has shown great clinical value and become a research hotspot in the area of modern medical imaging. It makes use of medical images to extract tissues of interest for diagnosis, to relieve the burden of medical experts and augment diagnosis. With the invention of direct radiography (DR), which has a spatial resolution comparable to or even better than computed tomography images (as high as 0.14 mm) and good contrast, its value in screening and diagnosis is increasingly recognized [5]. With the increased availability and spatial resolution of DR images, more skills and more manpower are needed to interpret them.

∗ Corresponding author. Tel.: +86 0755 86392214; fax: +86 0755 86392299.
E-mail address: qm.hu@siat.ac.cn (Q. Hu).
http://dx.doi.org/10.1016/j.cmpb.2013.12.025


Fig. 1 – An X-ray image that could not be segmented with grayscale thresholding.

Due to the complexity and variability of DR imaging, DR images may have a very complicated background. For example, the background can have a complicated shape and a complicated grayscale distribution, such as the case of variable grayscales that overlap with those of the foreground. Removing the background could facilitate subsequent steps of CAD, which is the focus of this paper.
Though there are efforts on segmentation with some sort of user intervention [6–9], automatic segmentation is preferred in real applications.
Grayscale thresholding classifies a pixel according to its grayscale by comparing it with a threshold or thresholds. The threshold could be determined with or without prior knowledge [10]. Global thresholding could work well in two typical scenarios: background grayscales do not overlap with those of the foreground [11], or foreground and background pixels have overlapping grayscales but are disjoint in space. It fails otherwise. Fig. 1 shows an X-ray image that could not be segmented well with global thresholding (the segmentation result with a manually set threshold is shown in Fig. 10b).
Another widely used automatic method is watershed segmentation [12,13]. It regards a grayscale image as a topographic relief, with every local minimum and its adjacent pixels forming a catchment basin. Watersheds are formed to prevent the merging of adjacent basins. Usually gradient images are used as the input of watershed-based segmentation. Watershed segmentation in its original form has a good response to weak edges but yields over-segmentation, as each local minimum forms a catchment basin. There are efforts to handle the over-segmentation. Wang [14] proposed a multi-scale gradient algorithm to decrease the number of local minimums and morphological reconstruction to eliminate small local minimums caused by noise or quantization. Chen and co-workers [15] utilized anisotropic diffusion filtering to remove image noise before computing the gradient using multiscale morphological gradient operators. These two methods aimed to decrease the number of local minimums through multi-scale morphological gradient operators. However, the scale range depends heavily on the image edge width and is hard to determine. A large scale range increases the computational cost, while a small scale range may lose some edge information. Marker-based watershed segmentation is another way to prevent over-segmentation. It directly uses image markers as the catchment basins without extracting image local minimums. How to determine the markers remains a key issue. Manually selecting markers on the image is one way in some easy situations, but they need to be determined automatically in most real problems. Rodríguez et al. [16] proposed to obtain markers for blood vessel segmentation based on the local grayscale standard deviation. This method might not be easily extended to non-vessel images.
In this paper, a marker-based watershed segmentation method is proposed to segment DR images. The method has been validated on one hundred clinical DR images and compared with manual thresholding and the multiscale gradient based watershed method [14].

2. Materials and methods

2.1. Materials

Altogether 100 DR images of different human body parts, including 34 chests, 33 hands and 33 legs, were obtained from the Beijing Aerospace Zhongxing Medical Systems Co., Ltd. These images were produced using three kinds of DR systems with different pixel sizes of 0.14 mm, 0.15 mm, and 0.40 mm, respectively. The image dimensions ranged from 1024 × 1024 to 3072 × 3072. The images were 16-bit, with a grayscale range from 0 to 65,535. The background had a greater average grayscale than the foreground (the tissue to be imaged, such as a hand). Fig. 2 shows typical images of the hand, chest and leg, and their corresponding histograms with percentiles.

Fig. 2 – Typical images of the hand, chest and leg, and their corresponding histograms with 0–100 percentiles and 10–90 percentiles.

2.2. Methods

The proposed algorithm consists of six modules (Fig. 3): image preprocessing to eliminate the influence of the very few pixels with the highest grayscales, calculation of gradient magnitudes with a simple gradient operator, determination of seed pixels based on thresholding of gradient magnitudes, watershed segmentation from markers, region merging, and derivation of the background through grayscale thresholding. Details are described below.

2.2.1. Image preprocessing

The purpose of preprocessing is twofold: noise removal while preserving edges, and highlighting the grayscales that are of interest. For DR images we are mainly concerned with the impact of the few pixels with very large grayscales, which are possibly caused by noise. If not suppressed, these few pixels will occupy most of the grayscale range. To preserve edges and keep the computational cost low, a histogram-based method using a percentile is applied to eliminate their impact.
Denote the original image as I(x, y) and its histogram (relative frequency) as HI. The threshold T1 is computed by formula (1):

T1 = min{ i | Σ_{j=0..i} HI(j) ≥ 1 − α }   (1)

where α is a parameter set in advance to approximate the percentage of pixels with the highest grayscales in the image that are to be suppressed. Only those pixels with a grayscale not greater than the threshold T1 are kept for subsequent processing.
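As a minimal illustration of formula (1) (a sketch under our own assumptions, not the authors' code), the percentile threshold T1 can be computed from the grayscale histogram as follows; the function name and the 16-bit grayscale range are assumed for the example.

#include <cstdint>
#include <vector>

// Sketch of formula (1): the smallest grayscale i such that the cumulative
// relative frequency of pixels with grayscale <= i reaches 1 - alpha.
// A 16-bit grayscale range (0..65535) is assumed; names are illustrative only.
uint16_t percentileThreshold(const std::vector<uint16_t>& pixels, double alpha)
{
    std::vector<uint64_t> hist(65536, 0);
    for (uint16_t g : pixels) ++hist[g];                 // absolute histogram of I

    const double target = (1.0 - alpha) * pixels.size();
    uint64_t cumulative = 0;
    for (uint32_t i = 0; i < hist.size(); ++i) {
        cumulative += hist[i];
        if (cumulative >= target) return static_cast<uint16_t>(i);   // T1
    }
    return 65535;                                        // alpha == 0: keep everything
}

With alpha = 0.005, as used in Section 3.1, roughly the brightest 0.5% of pixels are excluded from further processing.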

2.2.2. Computation of gradient magnitudes

For DR images, the boundary between the background and foreground is prominent, even though there may exist other prominent boundaries within the background or foreground. Gradient magnitudes are chosen as the input for segmenting DR images. To decrease the computational cost, a simple gradient operator based on the calculation of two finite differences is adopted.
The grayscale finite differences in the X and Y directions are denoted as Dx and Dy and are calculated by formulae (2) and (3), respectively. The larger of Dx and Dy is taken as the gradient magnitude (formula (4)). Compared with traditional methods, no multiplication or square root operation is required, which contributes to the low computational cost.

Dx = |I(x + 1, y) + I(x + 2, y) − I(x − 1, y) − I(x − 2, y)|   (2)
Dy = |I(x, y + 1) + I(x, y + 2) − I(x, y − 1) − I(x, y − 2)|   (3)
Grad(x, y) = max(Dx, Dy)   (4)
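The operator of formulae (2)–(4) can be sketched as below (an illustrative reading, not the authors' implementation); row-major 16-bit storage is assumed, and border pixels within two columns or rows of the image edge are simply skipped.

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sketch of formulae (2)-(4): gradient magnitude from two finite differences,
// using only additions, subtractions, absolute values and a comparison.
std::vector<int> gradientMagnitude(const std::vector<uint16_t>& I, int width, int height)
{
    std::vector<int> grad(I.size(), 0);
    auto at = [&](int x, int y) { return static_cast<int>(I[y * width + x]); };
    for (int y = 2; y < height - 2; ++y) {
        for (int x = 2; x < width - 2; ++x) {
            int dx = std::abs(at(x + 1, y) + at(x + 2, y) - at(x - 1, y) - at(x - 2, y)); // (2)
            int dy = std::abs(at(x, y + 1) + at(x, y + 2) - at(x, y - 1) - at(x, y - 2)); // (3)
            grad[y * width + x] = std::max(dx, dy);                                       // (4)
        }
    }
    return grad;
}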


Fig. 3 – Flow chart of the proposed method.

The gradient magnitude is rescaled to one byte, i.e. to [0, 255], to decrease the storage and computational cost of subsequent processing.

2.2.3. Derivation of gradient threshold and seed pixels
The traditional watershed method takes every local minimum pixel as a seed point to grow by merging surrounding non-seed pixels, which results in the over-segmentation problem. Although region merging could be employed, it is hard to derive appropriate rules for merging. Another means to get around over-segmentation is the marker-based watershed method. Unfortunately, there is no general way to produce seed pixels or markers.
As is shown in Fig. 4, the gradient magnitudes of most pixels lie in the lower value range, while a small proportion, corresponding to the sharp edges located on the foreground contour, have large gradient magnitudes. This reveals that the gradient magnitudes of most contour pixels are greater than those within the background and foreground. Based on these observations, we propose to derive markers by thresholding gradient magnitudes.
Denote the histogram of the gradient image as HG. The threshold T2 is obtained by formula (5):

T2 = min{ i | Σ_{j=0..i} HG(j) ≥ 1 − β }   (5)

where β is a parameter to approximate the percentage of edge pixels between the background and the foreground. The steps to derive seed pixels are described below.

(a) Compute the gradient magnitude distribution histogram.
(b) Set a percentage value β which represents the proportion of sharp edge pixels in the image. This percentage is generally set conservatively so that the minimum gradient magnitude of the foreground contour will still be counted as a sharp edge pixel.
(c) Calculate the gradient threshold T2 according to formula (5) given the percentile β.
(d) All pixels with gradient magnitude smaller than T2 are taken as seed pixels. A mask image is generated in which 1 and 0 represent seed and non-seed pixels, respectively.
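Steps (a)–(d) can be condensed into the following sketch, which assumes the gradient image has already been rescaled to [0, 255] as described in Section 2.2.2; the function name and the 1/0 mask convention follow the text but are otherwise our own.

#include <cstdint>
#include <vector>

// Sketch of Section 2.2.3: T2 is the percentile threshold of formula (5) on the
// 8-bit gradient histogram; pixels with gradient magnitude below T2 become seeds.
std::vector<uint8_t> seedMask(const std::vector<uint8_t>& grad, double beta)
{
    std::vector<uint64_t> hist(256, 0);
    for (uint8_t g : grad) ++hist[g];                    // gradient histogram H_G

    const double target = (1.0 - beta) * grad.size();
    uint64_t cumulative = 0;
    int t2 = 255;
    for (int i = 0; i < 256; ++i) {
        cumulative += hist[i];
        if (cumulative >= target) { t2 = i; break; }     // gradient threshold T2
    }

    std::vector<uint8_t> mask(grad.size());
    for (size_t p = 0; p < grad.size(); ++p)
        mask[p] = (grad[p] < t2) ? 1 : 0;                // 1 = seed, 0 = non-seed
    return mask;
}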

2.2.4. Watershed segmentation from markers
The principle of watershed segmentation from markers is illustrated in Fig. 5.


Fig. 4 – Gradient image and its histogram.

Fig. 5 – Principle of watershed segmentation from markers.

Five of the seven local minimums in Fig. 5 (A, B, C, F and G) have gradient magnitudes smaller than the gradient threshold. According to Section 2.2.3, all pixels whose gradient magnitudes are not greater than the gradient threshold are marked as seed pixels. Every connected component of seed pixels is treated as a catchment basin with a unique label at the beginning. The other pixels, including local minimums above the gradient threshold such as D and E, are taken as non-seed pixels. When the algorithm starts, the water level rises step by step from the gradient threshold. At each step, unmarked pixels surrounding the catchment basins are merged into the adjacent basins and marked with the corresponding labels. As the water level increases, the unmarked local minimums are eventually marked; for example, D and E will be marked with the same label as that of F.
The gradient image and the mask image are the inputs of the watershed segmentation. First, every connected component is extracted from the mask image and given a unique label. All non-marked pixels neighboring these labeled components are kept in a set of queues that corresponds to gradient magnitudes from low to high. As the water level rises from the gradient threshold, each pixel whose gradient magnitude equals the current water level is marked with the same label as the component it belongs to, and a watershed is formed when neighboring catchment basins are about to merge. Meanwhile, new surrounding non-marked pixels are added into these queues. The process stops when all pixels are marked.
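The flooding step can be sketched as follows. This is a simplified illustration rather than the authors' implementation: it assumes 4-connectivity, a gradient image already rescaled to [0, 255], and a label image in which every connected seed component already carries a positive label (non-seed pixels are 0); watershed lines are not stored explicitly, each pixel simply receives the label of the basin that reaches it first.

#include <algorithm>
#include <cstdint>
#include <queue>
#include <vector>

// Sketch of Section 2.2.4: labeled seed components are flooded over the 8-bit
// gradient image using one FIFO queue per gradient level (water level).
void floodFromMarkers(const std::vector<uint8_t>& grad, std::vector<int>& label,
                      int width, int height)
{
    std::vector<std::queue<int>> bucket(256);            // one queue per water level
    std::vector<uint8_t> queued(label.size(), 0);
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

    // Enqueue every unlabeled neighbor of a seed component at its own gradient level.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int p = y * width + x;
            if (label[p] != 0) continue;
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (label[ny * width + nx] != 0) { bucket[grad[p]].push(p); queued[p] = 1; break; }
            }
        }

    // Raise the water level; each dequeued pixel adopts the label of a marked neighbor.
    for (int level = 0; level < 256; ++level) {
        while (!bucket[level].empty()) {
            int p = bucket[level].front(); bucket[level].pop();
            int x = p % width, y = p / width;
            for (int k = 0; k < 4 && label[p] == 0; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (label[ny * width + nx] > 0) label[p] = label[ny * width + nx];
            }
            for (int k = 0; k < 4; ++k) {                // enqueue newly reachable pixels
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int q = ny * width + nx;
                if (label[q] == 0 && !queued[q]) {
                    bucket[std::max<int>(grad[q], level)].push(q);  // never below the current level
                    queued[q] = 1;
                }
            }
        }
    }
}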

2.2.5. Region merging
The number of regions (catchment basins) drops significantly with the setting of seed pixels, but there are still regions having similar properties that should be classified as the same class (background or foreground), because variation in grayscales causes some interior pixels of the background/foreground to have a gradient magnitude greater than the gradient threshold. It is therefore desirable to merge regions in order to extract the background and foreground. In our particular case, we prefer to be conservative and classify some background regions as foreground regions when the decision is hard to make. In order to minimize the number of background regions wrongly merged into foreground regions, a region similarity parameter T3 is used to check whether two regions should be merged. Details of the merging process are described below.

(a) Initialization. Compute the average grayscales of all initial regions. The merging process is then iterated from the first region.
(b) Find the region S with the smallest average grayscale among all adjacent regions of the current region R.
(c) When the two regions R and S meet condition (6), the current region R is merged into region S as described in step (d); otherwise turn to step (b) and process the next region.

|Rgray − Sgray| / Rgray < T3   (6)

where T3 is a parameter to be set in advance, and Rgray and Sgray represent the average grayscales of regions R and S, respectively.

(d) Change the labels of all pixels in region R to that of region S and update the average grayscale of the merged region.
(e) Iterate steps (b)–(d) until there is no region that should be merged (Fig. 6).

Fig. 6 – Flow chart of region merging.
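The merging rule of condition (6) can be sketched as below. Regions are summarized only by their pixel count and mean grayscale, adjacency is rebuilt from the label image on each pass, and a region is never merged into a neighbor that is itself being merged in the same pass (a guard added here to keep the sketch from oscillating); the data structures are our own simplifications, not the authors' implementation.

#include <cmath>
#include <cstdint>
#include <map>
#include <set>
#include <vector>

// Sketch of Section 2.2.5: repeatedly merge a region R into its lowest-mean
// neighbor S whenever |Rgray - Sgray| / Rgray < T3 (condition (6)).
void mergeRegions(const std::vector<uint16_t>& image, std::vector<int>& label,
                  int width, int height, double T3)
{
    bool merged = true;
    while (merged) {
        merged = false;

        // Per-region statistics and adjacency, recomputed for every pass.
        std::map<int, double> sum;
        std::map<int, uint64_t> count;
        std::map<int, std::set<int>> adj;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                int p = y * width + x, l = label[p];
                sum[l] += image[p];  ++count[l];
                if (x + 1 < width && label[p + 1] != l) { adj[l].insert(label[p + 1]); adj[label[p + 1]].insert(l); }
                if (y + 1 < height && label[p + width] != l) { adj[l].insert(label[p + width]); adj[label[p + width]].insert(l); }
            }

        std::map<int, int> relabel;
        for (const auto& entry : adj) {
            int r = entry.first;
            double meanR = sum[r] / count[r];
            int best = -1; double bestMean = 0.0;
            for (int s : entry.second) {                              // neighbor with the smallest mean
                double meanS = sum[s] / count[s];
                if (best < 0 || meanS < bestMean) { best = s; bestMean = meanS; }
            }
            if (best >= 0 && !relabel.count(best) &&
                std::fabs(meanR - bestMean) / meanR < T3) {           // condition (6)
                relabel[r] = best;
                merged = true;
            }
        }
        for (int& l : label)                                          // apply this pass's merges
            if (relabel.count(l)) l = relabel[l];
    }
}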

2.2.6. Extraction of the background
After region merging, some large regions are left inside the foreground and background. These regions need to be classified as either background or foreground. Here it is assumed that the average grayscale of the foreground is lower than that of the background, so a grayscale threshold can be used to divide these regions according to their average grayscales. First, find the largest region whose average grayscale Bmax is larger than the average grayscale of the image, and the largest region whose average grayscale Fmax is smaller than the average grayscale of the image. The grayscale threshold T4 can then be computed using formula (7):

T4 = γ · Bmax + (1 − γ) · Fmax   (7)

where γ is a parameter to be set in advance and belongs to [0.5, 1). All regions with an average grayscale greater than the threshold are then classified as background, and the rest belong to the foreground.
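A sketch of this step is given below; the per-region statistics structure and the 0/1 return convention are illustrative assumptions, not the authors' data structures.

#include <cstdint>
#include <map>

// Sketch of Section 2.2.6 and formula (7): pick T4 from the mean grayscales of
// the largest bright and dark regions, then call every region whose mean
// exceeds T4 background.
struct RegionStat { uint64_t pixels; double mean; };      // per-region size and mean grayscale

std::map<int, int> classifyRegions(const std::map<int, RegionStat>& regions,
                                   double imageMean, double gamma)
{
    double Bmax = imageMean, Fmax = imageMean;            // fall back to the image mean
    uint64_t largestBright = 0, largestDark = 0;
    for (const auto& r : regions) {
        if (r.second.mean > imageMean && r.second.pixels > largestBright) {
            largestBright = r.second.pixels;  Bmax = r.second.mean;
        }
        if (r.second.mean < imageMean && r.second.pixels > largestDark) {
            largestDark = r.second.pixels;    Fmax = r.second.mean;
        }
    }
    const double T4 = gamma * Bmax + (1.0 - gamma) * Fmax;  // formula (7), gamma in [0.5, 1)

    std::map<int, int> isBackground;                         // label -> 1 (background) or 0 (foreground)
    for (const auto& r : regions)
        isBackground[r.first] = (r.second.mean > T4) ? 1 : 0;
    return isBackground;
}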


3. Experiments

Experiments were carried out to validate the algorithm by applying it to the analysis of 100 clinical DR images, with manual delineation by LGY as the ground truth. The proposed algorithm was implemented in C++, and each image could be automatically processed within 6 s on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM. The segmentation accuracy was quantified using the dice coefficient [17] calculated with formula (8):

Dice = 2|A ∩ B| / (|A| + |B|)   (8)

where |A ∩ B| is the number of intersecting pixels between the two segmentations, and |A| and |B| are the numbers of pixels of the binary images A (from segmentation) and B (from manual delineation), respectively.
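For reference, the dice coefficient of formula (8) can be computed from two binary masks as in the sketch below; the 1 = segmented / 0 = background mask convention is an assumption of the example.

#include <cstdint>
#include <vector>

// Sketch of formula (8): Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal size.
double diceCoefficient(const std::vector<uint8_t>& A, const std::vector<uint8_t>& B)
{
    uint64_t sizeA = 0, sizeB = 0, overlap = 0;
    for (size_t i = 0; i < A.size(); ++i) {
        sizeA   += (A[i] != 0);
        sizeB   += (B[i] != 0);
        overlap += (A[i] != 0 && B[i] != 0);
    }
    return (sizeA + sizeB == 0) ? 1.0 : 2.0 * overlap / (sizeA + sizeB);
}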

3.1. Parameter determination
Experiments were carried out to find values of the 4 parameters (α, β, T3 and γ), respectively 0.005, 0.2, 0.4 and 0.8, that could yield acceptable segmentation; these were used as the initial parameters.

Fig. 1 was used to test the accuracy dependency on parameter variation throughout this section. When changing one parameter, the other three parameters were fixed.
The parameter α serves to exclude the few pixels with very high grayscales from subsequent processing. Fig. 7 shows the corresponding image and histogram of the preprocessed image of Fig. 1. As expected, the histogram after preprocessing (Fig. 7) has a more balanced distribution than that of the original image (Fig. 1). Table 1 lists the segmentation accuracy with varied α while the other parameters were β = 0.2, T3 = 0.4 and γ = 0.8 (Fig. 8).
The parameter β is the proportion of pixels that are not considered as having a prominent gradient on the contour between the background and foreground. Experiments with different β values were carried out, with results listed in Table 2 (α = 0.005, T3 = 0.4 and γ = 0.8). After watershed segmentation from markers, regions were merged according to their areas and average grayscales.
Fig. 7 – Preprocessed image and its histogram.


Table 1 – Segmentation accuracy results with different α values.

α        Dice coefficient
0.001    0.979
0.005    0.979
0.01     0.979
0.05     0.979
0.1      0.979
0.2      0.977
0.3      0.977
0.4      0.977
0.5      0.973

Table 4 – Segmentation results with different γ values.

γ      Dice coefficient
0.5    0.979
0.6    0.979
0.7    0.979
0.8    0.979
0.9    0.977

Parameter T3 was chosen to check whether two regions are similar according to formula (6). From left to right, Fig. 9 shows an initial segmentation, the segmentation after region merging, and the foreground image after background removal. Experimental results with different T3 values ranging from 0.1 to 0.5 are shown in Table 3 (α = 0.005, β = 0.2 and γ = 0.8).
Experiments were also carried out to see the segmentation dependency upon γ (Table 4), with α = 0.005, β = 0.2 and T3 = 0.4.
Based on these experiments, the parameters were fixed as α = 0.005, β = 0.2, T3 = 0.3 and γ = 0.8 for quantification. The average dice coefficient of the 100 data sets was 0.964 ± 0.069. The average dice coefficients for chests, hands, and legs were, respectively, 0.992 ± 0.010, 0.939 ± 0.101, and 0.963 ± 0.052.

Fig. 8 – Initial segmentation with different β values: the left image was computed with β = 0.2, while the right one was obtained with β = 0.4.

Table 2 – Segmentation results with different β values.

β       Number of regions    Computation time (ms)    Dice coefficient
0.01    1                    297                      0
0.05    80                   343                      0.988
0.10    253                  359                      0.986
0.20    337                  375                      0.979
0.30    615                  390                      0.945
0.40    615                  406                      0.945
0.50    615                  390                      0.945
0.60    1468                 640                      0.590

Table 3 – Segmentation results with different T3 values.

T3      Number of regions before merging    Number of regions after merging    Dice coefficient
0.5     337                                 1                                  0
0.45    337                                 1                                  0
0.4     337                                 23                                 0.984
0.3     337                                 43                                 0.979
0.2     337                                 118                                0.979
0.1     337                                 181                                0.979

3.2. Comparison with manual grayscale thresholding

The proposed method has been compared with the manual grayscale thresholding method. As an appropriate threshold could hardly be determined for the complicated cases of DR images, we opted to manually adjust the threshold to yield the best possible segmentation. For the 100 cases, 24 could not be segmented correctly by manual thresholding. The average dice coefficient of the manual thresholding method (with α = 0.005 and β = 0.2) was 0.937 ± 0.119. A matched pair t-test was performed using the paired dice coefficients across all test images. It was found that the proposed method could yield statistically better segmentation accuracy than the manual grayscale thresholding method (p = 0.0062).

3.3. Comparison with the other watershed method

A multi-scale gradient based watershed algorithm [14] was implemented for comparison. Multiscale gradients were computed with formula (9):

MG(I) = (1/n) Σ_{i=1..n} [((I ⊕ Bi) − (I ⊖ Bi)) ⊖ B_{i−1}]   (9)



(9)

where ⊕ and ⊖ denote dilation and erosion, and Bi is a structuring element of size (2·i + 1) × (2·i + 1); B0 is a structuring element containing only 1 pixel. The number of structuring elements n was set to 3 in our experiments. A reconstruction by erosion was then performed to eliminate minimums with contrast lower than a constant h. The marker image used in the reconstruction was generated by adding the constant h to the gradient image; in the experiments, h was set to 3. For a fair comparison, the regions produced by [14] were merged using the same procedure (with α = 0.005, T3 = 0.3 and γ = 0.8), yielding an average dice coefficient of 0.942 ± 0.098. The comparative results are listed in Table 5. The proposed method could yield statistically better segmentation accuracy than [14] (p = 0.0094) based on the paired t-test of the 100 dice coefficients.
In addition, we also compared the execution time between the proposed method and [14]. As the images might have different sizes, one image of size 3072 × 3072 was selected and resampled to 2048 × 2048 and 1024 × 1024, to compare the time efficiency (Table 6).
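For completeness, our reading of the comparison method's multiscale gradient (formula (9), after [14]) is sketched below with brute-force max/min filters; this is only an illustration of the operator as described above, not the implementation used in the experiments, and the averaging over the n scales follows our reconstruction of formula (9).

#include <algorithm>
#include <vector>

using Img = std::vector<int>;

// Grayscale dilation (max filter) or erosion (min filter) with a square
// structuring element of radius r, i.e. size (2r+1) x (2r+1); borders replicate.
static Img morph(const Img& I, int w, int h, int radius, bool dilate)
{
    Img out(I.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int v = dilate ? 0 : (1 << 30);
            for (int j = -radius; j <= radius; ++j)
                for (int i = -radius; i <= radius; ++i) {
                    int xx = std::clamp(x + i, 0, w - 1), yy = std::clamp(y + j, 0, h - 1);
                    v = dilate ? std::max(v, I[yy * w + xx]) : std::min(v, I[yy * w + xx]);
                }
            out[y * w + x] = v;
        }
    return out;
}

// Sketch of formula (9): at each scale i the morphological gradient is eroded
// with B_{i-1} and the n results are averaged (n = 3 in the experiments).
Img multiscaleGradient(const Img& I, int w, int h, int n)
{
    Img mg(I.size(), 0);
    for (int i = 1; i <= n; ++i) {
        Img d = morph(I, w, h, i, true), e = morph(I, w, h, i, false);
        Img g(I.size());
        for (size_t p = 0; p < g.size(); ++p) g[p] = d[p] - e[p];   // dilation - erosion at scale i
        Img ge = morph(g, w, h, i - 1, false);                      // erosion with B_{i-1}
        for (size_t p = 0; p < mg.size(); ++p) mg[p] += ge[p];
    }
    for (int& v : mg) v /= n;                                       // average over the n scales
    return mg;
}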


Fig. 9 – (a) Initial segmentation, (b) region merging and (c) final segmentation.

Table 5 – Segmentation accuracy results of the three methods.

Method                                 Average dice coefficient    Standard deviation
Proposed method                        0.964                       0.069
Thresholding                           0.937                       0.119
Multiscale gradient based watershed    0.942                       0.098

4. Discussion

A marker-based X-ray image segmentation method was proposed in this paper. The major contributions are the determination of markers through thresholding the grayscale gradient magnitudes, and the merging of regions based on simple grayscale statistics customized to DR images, to enhance accuracy (Table 5) and decrease the computational cost (Table 6). The proposed method could segment images with overlapping grayscales between background and foreground (Fig. 10), and is generally applicable to images whose grayscale gradient magnitudes on the boundary between the background and foreground are greater than those within the background or foreground regions.
Experiments were carried out to test the sensitivity of the parameters.
α is a parameter to eliminate the few pixels with the largest grayscales. As can be seen in Table 1, the segmentation accuracy remains very stable even when its value varies from 0.1% to 50%. It is recommended to set α to a relatively small value which will not exclude foreground pixels with the largest grayscales, for example 0.5% in our experiments. It should not be larger than the proportion that the background occupies.
The parameter β is to extract reasonable seed pixels which are used as the markers in the watershed segmentation.

Table 6 – Comparison of time efficiency.

Method                                       1024 × 1024    2048 × 2048    3072 × 3072
Proposed method (ms)                         577            1856           5382
Multi-scale gradient based watershed (ms)    1419           5429           11,950

As shown in Table 2, when β is 0.01, only one region is obtained after watershed segmentation, meaning that the foreground contour has been broken so that the foreground and background are merged. In order to retain the continuity of the foreground contour, β should be large enough that the minimum gradient magnitude of the foreground contour pixels is still larger than the gradient threshold. However, increasing β will generate more connected regions in the initial segmentation, making merging more difficult and more time consuming. According to our experiments, a value between 0.1 and 0.4 is appropriate to balance region merging against the minimum gradient magnitude of foreground contour pixels.
The parameter T3 defines the similarity threshold for merging regions. If T3 is too large, such as 0.5 in Table 3, it is highly possible that all regions will be merged. On the other hand, a too small T3 may yield too many regions after merging. So a value around 0.3 seems a balanced choice.
The parameter γ differentiates the background and foreground according to their differences in average grayscales. From Table 4 it can be seen that the segmentation remains stable when the parameter is in the range 0.5–0.9.
Due to the grayscale overlap between the background and foreground of X-ray DR images, grayscale thresholding (even with manual derivation of the optimum threshold) yields statistically lower accuracy, as expected (Table 5). Watershed segmentation can overcome the grayscale overlap problem as it is based on catchment basins instead of grayscales. However, the usual way to derive catchment basins is via local minimums of grayscale gradient magnitudes, which in turn leads to over-segmentation due to too many undesirable local minimums. The multiscale gradient based watershed method [14] is an attempt to suppress local minimums, but its performance on X-ray DR images is not as good as that of the proposed method (Table 5) due to the great variability of local minimums that could not be suppressed. A marker-based watershed method is a sound way to derive catchment basins, but generally there does not exist a structure either in the background or the foreground that can be used as markers (such as the case of white matter as a marker in brain segmentation). A natural way to get the markers is to exploit a gradient threshold, as is done in this paper, which employs the fact that gradient magnitudes within the background/foreground are generally smaller.

To the best of our knowledge, this is the first attempt to explore gradient magnitudes for deriving markers for watershed segmentation.

Special means have been adopted to decrease the computational cost.


Fig. 10 – Comparison of the segmentation results between the proposed method and the thresholding method: (a) the original image with substantial grayscale overlap between the background and foreground, (b) segmentation from manual thresholding, and (c) segmentation based on the proposed method.
Firstly, a percentile is introduced to exclude the pixels with the highest grayscales from further processing; these are likely noisy pixels and would yield high gradient magnitudes if not excluded. Excluding these pixels by percentile instead of filtering has two advantages: low computational cost, and a decreased grayscale range for subsequent processing without changing the grayscales of the foreground. Secondly, gradient magnitudes are calculated through additions and subtractions, avoiding multiplication and square root operations. Experiments show that this simplified calculation of gradient magnitudes provides the necessary information for the subsequent watershed segmentation. Thirdly, the number of markers is decreased substantially by thresholding gradient magnitudes, which decreases the number of regions to be merged and hence the computational cost (Table 2). Finally, regions are merged based on simple grayscale statistics (averages) instead of measures that need complex computation, such as texture. As a result, the processing time is at most 6 s even for a 3072 × 3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, which is more than twice as fast as the multiscale gradient based watershed method [14].
The present study is not without limitations. Firstly, it is assumed that the background has a larger average grayscale than the foreground. This assumption may be violated in extreme cases, as shown in Fig. 11. This problem can only be solved during imaging, for example by controlling the field of view. Secondly, to achieve real-time image processing, one way is to decrease the computational cost, while the other is to use dedicated hardware such as a graphics processing unit for acceleration. We have only addressed the first way and leave the second for further study.

Fig. 11 – Imperfect segmentation when part of the background is even darker than the foreground: left, the original image; right, the segmentation by the proposed method.

5. Conclusion

A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six parts: image preprocessing, gradient computation, marker extraction, watershed segmentation, region merging and background extraction. One hundred DR images were used to quantify the segmentation accuracy. Manual thresholding and the multiscale gradient based watershed method [14] were implemented for comparison. The proposed method yielded a dice coefficient of 0.964 ± 0.069, which was better than that of the manual thresholding (0.937 ± 0.119) and that of the multiscale gradient based watershed method (0.942 ± 0.098). Special means have been adopted to decrease the computational cost, including getting rid of the few pixels with the highest grayscales via a percentile, calculation of gradient magnitudes through additions, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072 × 3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, which was more than twice as fast as the multiscale gradient based watershed method. Experiments on the tolerance to parameters showed that the method was not sensitive to the varied parameters. The proposed method could be a potential tool for both diagnosis and computer aided diagnosis of X-ray images.

Conflict of interest statement

We declare that all authors have no conflicts of interest in the authorship or publication of this contribution.

Acknowledgements


This work has been supported by the National Program on Key Basic Research Project (Nos. 2013CB733800, 2013CB733803), the Key Joint Program of the National Natural Science Foundation and Guangdong Province (No. U1201257), the Program for Development of Talent Team Resources in Guangdong Province (Nos. 2011S013 and 201001D0104648280), the Strategic Partnership Program between Guangdong Province and the Chinese Academy of Sciences (No. 2011B090300079), the National Natural Science Foundation of China (No. 61272328), and the Guangdong Natural Science Foundation (No. S2011010001820). The authors would like to thank the Beijing Aerospace Zhongxing Medical Systems Co., Ltd for providing the clinical data.


References

[1] K. Doi, M.L. Giger, H. MacMahon, et al., Computer-aided diagnosis: development of automated schemes for quantitative analysis of radiographic images, Seminars in Ultrasound, CT and MRI 13 (1992) 140–152.
[2] K. Doi, Overview on research and development of computer-aided diagnostic schemes, Seminars in Ultrasound, CT and MRI 25 (2004) 404–410.
[3] K. Doi, Computer-aided diagnosis in medical imaging: historical review, current status and future potential, Computerized Medical Imaging and Graphics 31 (2007) 198–211.
[4] M.L. Giger, Computerized analysis of images in the detection and diagnosis of breast cancer, Seminars in Ultrasound, CT and MRI 25 (2004) 411–418.
[5] G.J. Bansal, Digital radiography: a comparison with modern conventional imaging, Postgraduate Medicine 82 (2006) 425–428.
[6] T. Berber, A. Alpkocak, P. Balci, O. Dicle, Breast mass contour segmentation algorithm in digital mammograms, Computer Methods and Programs in Biomedicine 110 (2) (2013) 150–159.
[7] C. Zohios, G. Kossioris, Y. Papaharilaou, Geometrical methods for level set based abdominal aortic aneurysm thrombus and outer wall 2D image segmentation, Computer Methods and Programs in Biomedicine 107 (2) (2012) 202–217.
[8] C. Ballangan, X.Y. Wang, M. Fulham, S. Eberl, D.D. Feng, Lung tumor segmentation in PET images using graph cuts, Computer Methods and Programs in Biomedicine 109 (3) (2013) 260–268.
[9] K.K. Delibasis, A. Kechriniotis, I. Maglogiannis, A novel tool for segmenting 3D medical images based on generalized cylinders and active surfaces, Computer Methods and Programs in Biomedicine 111 (1) (2013) 148–165.
[10] Q.M. Hu, Z.J. Hou, W.L. Nowinski, Supervised range-constrained thresholding, IEEE Transactions on Image Processing 15 (1) (2006) 228–240.
[11] E. Poletti, F. Zappelli, A. Ruggeri, E. Grisan, A review of thresholding strategies applied to human chromosome segmentation, Computer Methods and Programs in Biomedicine 108 (2) (2012) 679–688.
[12] L. Vincent, P. Soille, Watersheds in digital spaces: an efficient algorithm based on immersion simulations, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (6) (1991) 583–598.
[13] L. Vincent, Morphological grayscale reconstruction in image analysis: applications and efficient algorithms, IEEE Transactions on Image Processing 2 (1993) 176–201.
[14] D. Wang, A multiscale gradient algorithm for image segmentation using watersheds, Pattern Recognition 30 (12) (1997) 2043–2052.
[15] J.C. Chen, Y. Wu, W. Li, Watershed segmentation algorithm for medical image based on anisotropic diffusion filtering, Journal of Computer Applications 6 (4) (2008) 1527–1529.
[16] R. Rodríguez, T.E. Alarcón, O. Pacheco, A new strategy to obtain robust markers for blood vessels segmentation by using the watersheds method, Computers in Biology and Medicine 35 (8) (2005) 665–686.
[17] Q.M. Hu, G.Y. Qian, M. Teistler, S. Huang, Automatic and adaptive brain morphometry on MR images, RadioGraphics 28 (2) (2008) 345–356.
