IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 6, JUNE 2015

Label Image Constrained Multiatlas Selection


Pingkun Yan, Senior Member, IEEE, Yihui Cao, Yuan Yuan, Senior Member, IEEE,
Baris Turkbey, and Peter L. Choyke

Abstract: Multiatlas based methods are commonly used in medical image segmentation, where atlas selection and combination are considered two key factors affecting performance. Recently, manifold learning based atlas selection methods have emerged as very promising. However, due to the complexity of prostate structures in raw images, it is difficult to obtain accurate atlas selection results by only measuring the distance between raw images on the manifolds. Although the distance between the regions to be segmented across images can be readily obtained from the label images, it is infeasible to directly compute the distance between the test image (gray) and the label images (binary). This paper addresses this problem by proposing a label image constrained atlas selection method, which exploits the label images to constrain the manifold projection of the raw images. By analyzing the distribution of the selected atlases' data points in the manifold subspace, a novel weight computation method for atlas combination is also proposed. Compared with other related existing methods, experimental results on prostate segmentation from T2w MRI show that the selected atlases are closer to the target structure and that more accurate segmentations are obtained using the proposed method.
Index Terms: Atlas-based, computer vision, image segmentation, manifold learning.

I. INTRODUCTION

Prostate cancer is the second leading cause of cancer death among American men [1]. Accurate segmentation of the prostate can be helpful for assisting the diagnosis of prostate cancer. Traditionally, prostate magnetic resonance (MR) image segmentation is performed manually by experts. However, manual segmentation is tedious, time consuming, and not reproducible. To overcome these shortcomings, a large number of automated image segmentation methods have been proposed [2]-[5]. Although these existing methods are effective in some cases, automated segmentation of prostate MR images is still very challenging due to the unclear boundary information in some areas [6].
Manuscript received March 12, 2013; revised January 4, 2014 and May 13, 2014; accepted May 17, 2014. Date of publication November 14, 2014; date of current version May 13, 2015. This work was supported by the National Natural Science Foundation of China under Grant 61172142 and Grant 61172143. This paper was recommended by Associate Editor X. You.
P. Yan, Y. Cao, and Y. Yuan are with the Center for Optical Imagery Analysis and Learning, State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China (e-mail: pingkun.yan@opt.ac.cn; yihui.cao@opt.ac.cn; yuany@opt.ac.cn).
B. Turkbey and P. Choyke are with the National Institutes of Health, National Cancer Institute, Bethesda, MD 20892 USA (e-mail: turkbeyi@mail.nih.gov; pchoyke@mail.nih.gov).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCYB.2014.2346394

Fig. 1. Left: original MR image with two red arrows illustrating the weakened boundary. Right: prostate segmentation delineated by an expert with a red contour.

As shown in Fig. 1, the red contour in the MR image on the right is a segmentation of the prostate delineated by an expert. It can be seen that the boundary is very weak in the areas indicated by the two red arrows in the original MR image on the left of Fig. 1. Experts segment these areas mainly according to their knowledge of the anatomical structure of the prostate. Therefore, it is important to incorporate such anatomical knowledge into automated methods.
Owing to its full automation and high accuracy, multiatlas based segmentation has become one of the most popular automated segmentation techniques [7]. An atlas essentially depicts the shapes and locations of anatomical structures, together with the spatial relationships between them [8]. Thus, atlas-based segmentation is one of the most common methods applied to the automated segmentation of prostate MR images [9]-[11]. Generally, an atlas consists of a raw image and its corresponding segmented label image. In the process of multiatlas based segmentation, each atlas is first registered to the target image, resulting in a deformed atlas close to the image to be segmented. Then, a subset of atlases is selected from the deformed atlases based on certain selection criteria. Finally, the selected atlases are combined into a single binary template for segmentation. Among the three steps of the multiatlas based method, the strategy of atlas selection is one of the most critical factors affecting segmentation accuracy [12], [13]. Besides that, atlas combination is another important ingredient [14], where assigning a proper weight to each selected atlas is crucial.
The approaches of multiatlas based segmentation can be divided into global atlas based and local atlas based. In global atlas based methods [11], [15]-[18], the similarity measure for atlas selection and the weight assignment for atlas combination are based on the whole images.


Fig. 2. (a) Misleading manifold projection due to the influence of other anatomical structures in raw images. (b) Manifold projection constrained by the
label images to reduce the influence and preserve the neighborhood structure.

Local atlas based methods [19], [20] are patch-based strategies. Images are first divided into patches, and then atlas selection and combination are both performed on patches. Although local approaches achieve higher segmentation accuracy than global approaches, they involve more complicated computation. In this paper, we mainly focus on the global atlas based method.
A. Related Works
According to the space in which atlas selection is performed, existing global multiatlas based segmentation methods can be divided into original image space based and subspace based. In the category of original image space based methods, similarity-based selection [11], [21]-[23] is the most popular criterion. Take the method in [11] as an example: its atlas selection and combination are both based on normalized mutual information (NMI). The atlas selection is based on measuring the similarity between images, and the combination weights are assigned according to the NMI values between the test image and the selected atlases. NMI evaluates the similarity between images in the original image space.
Recently, manifold learning based atlas selection methods, in which the atlases are treated as points in a manifold space in order to measure their similarity, have attracted considerable attention. A number of works [24]-[31] have shown that medical images are embedded in a lower-dimensional manifold space, and that the intrinsic similarity [31] between these images can be better uncovered in this space. Based on this assumption, Wolz et al. [15] proposed an atlas selection method (LEAP) based on a lower-dimensional manifold space constructed using multidimensional scaling (MDS) [32]. However, in their atlas combination step, the computation of the combination weights was still based on the original space, by evaluating the NMI values between the selected atlas and the test image. Similarly, our previous work [16] applied the classical locality preserving projections (LPP) [33] algorithm to project the images into a manifold space for atlas selection.
B. Motivation of the Proposed Method
In existing methods, other anatomical structures may affect the performance of atlas selection. For example, when segmenting the prostate from T2w MR images, delineated by the red contour in Fig. 1, the existing manifold based methods are often distracted by the surrounding structures. Fig. 2(a) shows an example of the neighborhood structure of the prostate regions in the original high-dimensional image space. However, since the neighborhood uncovered by directly applying the manifold learning algorithms [32], [33] reflects the similarity between the entire images rather than between the regions to be segmented, the original neighborhood structure may not be preserved by the manifold projection of the existing methods [15], [16]. Influenced by surrounding tissues and anatomical structures, e.g., the rectum and bladder, the atlas selection results might be misleading. Therefore, it is highly desirable to be able to measure the similarity between the regions of interest across images. Compared with the raw images, the shape and size information of the regions to be segmented is readily available in the label images. However, directly computing the similarity between the test image (gray level) and the label images (binary) is infeasible.
In this paper, we propose a data-driven atlas selection method: label image constrained atlas selection (LICAS). The idea is to employ the label images to constrain the computation of the affinity matrix of the raw images when constructing the lower-dimensional manifold space, as shown in Fig. 2(b). The region of interest information from the label images is exploited together with the raw images when learning the manifold projection. Due to this constraint, the intrinsic similarity between the target regions can be uncovered in the lower-dimensional manifold space. In this space, the selected atlases are closer to the test image in terms of the regions of interest, so the final fused template can improve segmentation performance.

Based on this manifold subspace analysis, a novel weight assignment method for atlas combination is also proposed in this paper. Inspired by locally linear embedding (LLE) [34], the raw images and the label images of the selected atlases can be assumed to share an identical manifold structure in the lower-dimensional space. Therefore, weight assignment for the selected label images can be treated as computing the weights for reconstructing the data point of the test raw image in the manifold subspace. The computed reconstruction weights can then be mapped to the label images for combination.
Our main contributions in this paper are twofold.
1) A new manifold projection method is developed that takes the label image information into account when selecting atlases on a lower-dimensional manifold for image segmentation, which has been overlooked by other existing methods.
2) The atlas combination weights are computed by solving a data point reconstruction problem in the manifold subspace.
To the best of our knowledge, this is the first work that uses the label images to reduce the influence of other anatomical structures for atlas selection, and that analyzes the weight computation for atlas combination in a manifold subspace [16], [35]. Compared with three other recent methods [11], [15], [16], our method is shown to be efficient and superior in performance.
The rest of this paper is organized as follows. The details of the proposed atlas selection method (LICAS) and the computation of the atlas combination weights are presented in Sections II and III, respectively. In Section IV, we briefly describe the framework of the proposed method. The performance of the proposed method is reported and discussed in Section V. Finally, conclusions are drawn in Section VI.
II. LABEL IMAGE CONSTRAINED ATLAS SELECTION
Ideally, the atlas selection method should measure the similarity between only the regions of interest across images. Thus,
a good manifold projection should not only preserve the
neighborhood of the original manifold of raw images, but also
consider the intrinsic similarity between the regions of interest.
Therefore, the label image constrained atlas selection method
is proposed as follows.
Let $Y = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n]^T$ represent the data points in a $d$-dimensional space, where each point $\mathbf{y}_i \in \mathbb{R}^d$ is mapped by a projection matrix $P$ from a raw image data point $\mathbf{x}_{R_i} \in \mathbb{R}^m$ in the original $m$-dimensional manifold ($d \ll m$) as
$$\mathbf{x}_{R_i} \mapsto \mathbf{y}_i = P^T \mathbf{x}_{R_i}. \tag{1}$$

In order to preserve the neighborhood of the original manifold of the raw images in the projection, the objective function of the projection is
$$P_R = \arg\min_{P_R} \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{R_{ij}} \tag{2}$$

where $S_R$ is a matrix describing the similarity between the raw images. The matrix $S_R$ acts as a penalty: if $\mathbf{x}_{R_i}$ and $\mathbf{x}_{R_j}$ are close to each other in the original manifold space, the value of $S_{R_{ij}}$ will be large, so that minimizing the energy function (2) forces $\mathbf{y}_i$ and $\mathbf{y}_j$ to be close as well.
Similarly, another matrix $S_L$ is used to construct a new projection objective function for the label images as
$$P_L = \arg\min_{P_L} \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{L_{ij}} \tag{3}$$

where $S_L$ describes the similarity between the label images. Since the shape and size information of the regions to be segmented is readily available in the label images, $S_L$ makes the projection also uncover the intrinsic shape similarity between the regions of interest across images. The similarity metrics $S_R$ and $S_L$ are defined based on standard spectral graph theory [36]. Both similarity metrics are symmetric, i.e., $S_{R_{ij}} = S_{R_{ji}}$ and $S_{L_{ij}} = S_{L_{ji}}$. The detailed definition is
$$S_{R_{ij}} = \begin{cases} \exp\!\left(-\dfrac{d(\mathbf{x}_{R_i}, \mathbf{x}_{R_j})}{t}\right), & \text{if the nodes } \mathbf{x}_{R_i} \text{ and } \mathbf{x}_{R_j} \text{ are connected} \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
where $d(\mathbf{x}_{R_i}, \mathbf{x}_{R_j})$ is the distance between the images $\mathbf{x}_{R_i}$ and $\mathbf{x}_{R_j}$, based on a certain image distance measure (IDM), and $t$ is a normalization parameter, empirically set to 1. The matrix $S_L$ is defined in the same way as $S_R$, according to the graph constructed from the label images.
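As an illustration of (4), the following Python sketch builds a heat-kernel similarity matrix from a precomputed pairwise IDM distance matrix; the k-nearest-neighbor connectivity rule and the neighborhood size are our assumptions, since the paper does not state how graph edges are defined. The same routine can be applied to raw image distances to obtain $S_R$ and to label image distances to obtain $S_L$.

```python
import numpy as np

def heat_kernel_similarity(dist, n_neighbors=10, t=1.0):
    """Build a symmetric heat-kernel similarity matrix from a pairwise
    IDM distance matrix `dist` (n x n), following (4). Nodes are treated
    as connected when one is among the other's k nearest neighbors
    (an assumption; the paper does not specify the connectivity rule)."""
    n = dist.shape[0]
    S = np.zeros((n, n))
    # indices of the k nearest neighbors of each node, excluding itself
    knn = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        for j in knn[i]:
            w = np.exp(-dist[i, j] / t)
            S[i, j] = S[j, i] = w  # keep the matrix symmetric
    return S
```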
By adding the constraint in (3) to the original projection computation (2), a constrained manifold projection can be obtained by minimizing the following objective function:
$$\arg\min_{P} \; \alpha \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{R_{ij}} + (1-\alpha) \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{L_{ij}} \tag{5}$$
where the parameter $\alpha$ adjusts the relative importance of the raw image and the label image. The projection matrix $P$ can therefore be obtained by rewriting the objective function in (5) as
$$\begin{aligned}
&\frac{1}{2}\left(\alpha \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{R_{ij}} + (1-\alpha) \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{L_{ij}}\right) \\
&\quad = \frac{1}{2} \sum_{ij} \left\| \mathbf{y}_i - \mathbf{y}_j \right\|^2 S_{ij} \\
&\quad = P^T X_R D X_R^T P - P^T X_R S X_R^T P \\
&\quad = P^T X_R L X_R^T P
\end{aligned} \tag{6}$$
where $S_{ij} = \alpha S_{R_{ij}} + (1-\alpha) S_{L_{ij}}$ and $X_R = (\mathbf{x}_{R_1}, \ldots, \mathbf{x}_{R_n})$. $D$ is a diagonal matrix with $D_{ii} = \sum_j S_{ij}$, and $L = D - S$ is the classical graph Laplacian [36]. To normalize the data points in the lower-dimensional space, a scale constraint is imposed as
$$Y^T D Y = I \;\Rightarrow\; P^T X_R D X_R^T P = I. \tag{7}$$

Therefore, solving the objective function (5) is equivalent to computing
$$P = \arg\min_{P} \; P^T X_R L X_R^T P \quad \text{given} \quad P^T X_R D X_R^T P = I. \tag{8}$$

Note that the matrices $D$ and $L$ are both symmetric and positive semidefinite, and so are the matrices $X_R D X_R^T$ and $X_R L X_R^T$. Therefore, the vector $\mathbf{p}_i$, the $i$th column of the matrix $P$, can be obtained by solving the following generalized eigenvalue problem:
$$X_R L X_R^T \mathbf{p}_i = \lambda_i X_R D X_R^T \mathbf{p}_i. \tag{9}$$

Ranking the eigenvalues of the solutions of (9) as $\lambda_1 < \lambda_2 < \cdots < \lambda_d$, the projection matrix $P$ is constructed from the corresponding eigenvectors, $P = (\mathbf{p}_1, \ldots, \mathbf{p}_d)$, so that $P$ is an $m \times d$ matrix.
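For illustration, the projection matrix of (5)-(9) can be computed as in the sketch below with SciPy's generalized symmetric eigensolver, which returns eigenvalues in ascending order. This is a direct transcription of the equations; the small ridge added to the right-hand matrix for numerical stability is our assumption, and for full-resolution volumes the problem would in practice have to be solved in a reduced form, which the paper does not detail.

```python
import numpy as np
from scipy.linalg import eigh

def licas_projection(X_R, S_R, S_L, alpha=0.5, d=57, eps=1e-8):
    """Compute the LICAS projection matrix P (m x d) from the raw image
    matrix X_R (m x n, one image per column) and the similarity matrices
    S_R and S_L, following (5)-(9)."""
    S = alpha * S_R + (1.0 - alpha) * S_L              # blended similarity, cf. (6)
    D = np.diag(S.sum(axis=1))                         # degree matrix
    L = D - S                                          # graph Laplacian
    A = X_R @ L @ X_R.T                                # left-hand matrix of (9)
    B = X_R @ D @ X_R.T + eps * np.eye(X_R.shape[0])   # ridge added for stability (assumption)
    # generalized eigenproblem A p = lambda B p; eigh sorts eigenvalues ascending
    _, eigvecs = eigh(A, B)
    return eigvecs[:, :d]                              # eigenvectors of the d smallest eigenvalues
```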


Once the matrix $P$ is obtained, the raw images can be mapped to the lower-dimensional manifold space according to (1). The test image data point $\mathbf{x}_T$ can then be projected into the same $d$-dimensional space as
$$\mathbf{x}_T \mapsto \mathbf{t} = P^T \mathbf{x}_T \tag{10}$$
where $\mathbf{t}$ represents the data point of the test image in the lower-dimensional space, illustrated by the red point in Figs. 2 and 4. After that, the similarity between the regions of interest across images can be measured by computing the simple Euclidean distance from $\mathbf{t}$ to each $\mathbf{y}_i$
$$dis_i = \left\| \mathbf{t} - \mathbf{y}_i \right\|_2. \tag{11}$$

The atlas selection order is then obtained by ranking the distances from the nearest to the farthest ($dis_1 < dis_2 < \cdots < dis_n$). Consequently, a subset of $K$ atlases $\{R_k, L_k\}$ can be selected according to the atlas selection order, namely $dis_1 < dis_2 < \cdots < dis_K$.
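A minimal sketch of the projection and ranking in (10) and (11) under the same notation is given below; the function name is ours.

```python
import numpy as np

def select_atlases(P, X_R, x_T, K=10):
    """Project all atlases and the test image with P, then return the
    indices of the K atlases whose embeddings are nearest to the test
    image point, as in (10) and (11)."""
    Y = P.T @ X_R                                 # atlas points y_i, cf. (1)
    t = P.T @ x_T                                 # test image point, (10)
    dis = np.linalg.norm(Y - t[:, None], axis=0)  # Euclidean distances, (11)
    order = np.argsort(dis)                       # nearest first
    return order[:K]
```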
III. SUBSPACE ANALYSIS FOR WEIGHTED COMBINATION
In the atlas combination step, the weight assignment for the selected atlases is another important factor affecting segmentation performance. In this paper, the weight computation is based on the lower-dimensional manifold subspace. The objective of combination is to make the result as close to the ground truth as possible, i.e., to minimize the difference between the ground truth and the combination result. Therefore, the weight vector $\mathbf{w}_1$ could be computed as
$$\mathbf{w}_1 = \arg\min_{\mathbf{w}_1} \left\| L_g - \sum_{k=1}^{K} w_{1k} L_k \right\|_2^2 \quad \text{s.t.} \quad \sum_{k=1}^{K} w_{1k} = 1, \;\; w_{1k} \geq 0 \tag{12}$$
where $w_{1k}$ ($k = 1, 2, \ldots, K$) is the weight of each selected label image $L_k$, $L_g$ is the segmentation ground truth of the test image, and $\sum_k w_{1k} L_k$ is the combination result. However, since the two variables $\mathbf{w}_1$ and $L_g$ are both unknown, (12) cannot be solved directly by simultaneously estimating $\mathbf{w}_1$ and $L_g$.
On the other hand, the raw images are always known. Inspired by locally linear embedding (LLE) [34], the raw images and the label images of the selected atlases can be assumed to be embedded in the same lower-dimensional space, as shown in Fig. 3. Therefore, the atlas combination problem can be treated as the reconstruction of data points in the lower-dimensional manifold subspace. Corresponding to (12), the reconstruction weights can be written as
$$\mathbf{w}_2 = \arg\min_{\mathbf{w}_2} \left\| \mathbf{t} - \sum_{k=1}^{K} w_{2k} \mathbf{y}_k \right\|_2^2 \quad \text{s.t.} \quad \sum_{k=1}^{K} w_{2k} = 1, \;\; w_{2k} \geq 0. \tag{13}$$

Apparently, this is a linearly constrained convex quadratic programming (QP) problem, which can be solved by several classical approaches [37]. Since $\mathbf{w}_2$ is not a large-scale vector, we applied an active-set approach [38] to solve it.

Fig. 3. Illustration of the mapping of combination weights. Assuming the raw images and the label images are embedded in the same manifold [34], the weights for combining the label images can be computed by solving the data point reconstruction problem in the lower-dimensional space.
Once the weight vector $\mathbf{w}_2$ is obtained, by mapping $\mathbf{w}_2$ to the weight vector $\mathbf{w}_1$, the automatic segmentation image $L_a$ can be obtained as
$$L_a = \sum_{k=1}^{K} w_{1k} L_k. \tag{14}$$
Eventually, the test image $T$ can be segmented with the obtained label image $L_a$.
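As a sketch, the constrained reconstruction in (13) can be solved with a general-purpose solver; here SciPy's SLSQP routine stands in for the active-set method of [38], and the 0.5 threshold used to binarize the fused template of (14) is our assumption.

```python
import numpy as np
from scipy.optimize import minimize

def combination_weights(t, Y_sel):
    """Solve (13): reconstruct the test point t (d,) from the selected
    atlas points Y_sel (d x K) with nonnegative weights summing to one."""
    K = Y_sel.shape[1]
    objective = lambda w: np.sum((t - Y_sel @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * K
    result = minimize(objective, np.full(K, 1.0 / K), method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x

def fuse_labels(weights, label_images):
    """Weighted label fusion as in (14), followed by thresholding."""
    fused = sum(w * lab.astype(np.float64) for w, lab in zip(weights, label_images))
    return (fused >= 0.5).astype(np.uint8)  # binarization threshold assumed
```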
IV. OVERALL WORKFLOW
The workflow of the proposed method is briefly described in this section. There are three main steps: image preprocessing, atlas selection, and atlas combination. The latter two steps were described in Sections II and III, respectively. The image preprocessing step is performed in two stages.
1) Z-Score Normalization: In order to bring the raw MR images of the different atlases into the same dynamic range, the classical z-score normalization [39] is employed as
$$\hat{R}_i = \frac{R_i - \mu_i}{\sigma_i} \tag{15}$$
where $R_i$ ($i = 1, 2, \ldots, N$) is the raw MR image of an atlas, $\mu_i$ and $\sigma_i$ are the mean and standard deviation of $R_i$, and $\hat{R}_i$ is the normalized image. The z-score normalization is also applied to the test image $T$ (a small sketch is given after this list).
2) Registration: After image normalization, each normalized raw image $\hat{R}_i$ is aligned to the normalized test image $\hat{T}$, as shown in the left part of the first row of Fig. 4. The alignment is implemented by a 3-D rigid registration followed by a 3-D nonrigid B-spline deformable registration using the public medical image registration tool elastix [40] (with its default parameter settings). With the transformation parameters yielded by the alignment, each atlas $\{\hat{R}_i, \hat{L}_i\}$ is warped toward the test image $\hat{T}$, generating the deformed atlas $\{R_i, L_i\}$ used in the subsequent steps.
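A minimal sketch of the z-score normalization in (15) follows; computing the statistics over the whole volume (rather than, say, a body mask) is our assumption, since the paper does not specify it.

```python
import numpy as np

def zscore_normalize(volume):
    """Z-score normalization of an MR volume, cf. (15): subtract the mean
    intensity and divide by the standard deviation so that all atlases and
    the test image share a comparable dynamic range."""
    volume = volume.astype(np.float64)
    return (volume - volume.mean()) / volume.std()
```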


Fig. 4. Workflow of the proposed method. The upper row shows the processing of the raw images of the atlases, and the lower row shows the processing steps of the corresponding label images.

The complete workflow of the proposed method is


illustrated in Fig. 4.
V. EXPERIMENTS
A. Materials and Evaluation Methods

The performance of the proposed method was tested on 60 T2w prostate 3-D MR images from 60 different patients, obtained using an endorectal coil on a 3T Philips magnetic resonance imaging (MRI) scanner. The size of each image is 512 × 512 × 26 voxels with a resolution of 0.3 mm × 0.3 mm × 3 mm. The performance of atlas selection was measured by the overlap between the selected deformed label images of the atlases and the ground truth of the test image, where each ground truth was manually delineated by an experienced radiologist. The performance of segmentation was then evaluated by measuring the overlap between the automatic segmentation results and the ground truth.
The overlap was defined by the Dice similarity coefficient (DSC) as
$$\mathrm{DSC}(L_g, L) = \frac{2\,|L_g \cap L|}{|L_g| + |L|} \tag{16}$$
where $L_g$ denotes the ground truth and $L$ represents either the selected label image of an atlas, $L_k$, or the automatic segmentation $L_a$. The operator $|x|$ is the area of region $x$. The DSC value varies from 0 to 1; a higher value indicates more overlap and higher similarity between the target regions across the images, or more accurate segmentation.
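For reference, (16) amounts to the following computation on binary masks (a sketch; the function name is ours).

```python
import numpy as np

def dice_coefficient(L_g, L):
    """Dice similarity coefficient between two binary masks, as in (16)."""
    L_g, L = L_g.astype(bool), L.astype(bool)
    intersection = np.logical_and(L_g, L).sum()
    return 2.0 * intersection / (L_g.sum() + L.sum())
```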
B. Image Distance Measure (IDM)
For a fair comparison, we employed two different kinds of IDMs to construct the similarity metrics $S_R$ and $S_L$. To compare with the atlas selection of [11] and [15], we first applied NMI to measure the image distance, which is denoted as LICAS-1. The distance $d(\mathbf{x}_{R_i}, \mathbf{x}_{R_j})$ of the IDM for the similarity metric $S_R$ in (4) used by LICAS-1 is defined as
$$d\!\left(\mathbf{x}_{R_i}, \mathbf{x}_{R_j}\right) = 1 - \mathrm{NMI}\!\left(R_i, R_j\right). \tag{17}$$
The Euclidean distance is then used to define the distance $d(\mathbf{x}_{R_i}, \mathbf{x}_{R_j})$ in LICAS-2
$$d\!\left(\mathbf{x}_{R_i}, \mathbf{x}_{R_j}\right) = \left\| \mathbf{x}_{R_i} - \mathbf{x}_{R_j} \right\|_2. \tag{18}$$
For the similarity metric $S_L$, LICAS-1 and LICAS-2 use the same strategy of measuring the overlap between two label images to compute the distance $d(\mathbf{x}_{L_i}, \mathbf{x}_{L_j})$, defined as
$$d\!\left(\mathbf{x}_{L_i}, \mathbf{x}_{L_j}\right) = 1 - \mathrm{DSC}\!\left(L_i, L_j\right) \tag{19}$$
where the DSC computation is defined as in (16).
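To illustrate the two IDMs, the sketch below computes the NMI-based raw image distance of (17) from a joint intensity histogram and the DSC-based label distance of (19), reusing dice_coefficient from the earlier sketch; the number of histogram bins and the particular NMI normalization to [0, 1] are our assumptions, as the paper does not specify them.

```python
import numpy as np

def nmi_distance(img_a, img_b, bins=64):
    """1 - NMI between two registered gray-level images, cf. (17). NMI is
    computed here as 2*I(A;B)/(H(A)+H(B)), one common normalization to
    [0, 1]; the exact variant used in the paper is not stated."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = hist / hist.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    h_x = -np.sum(p_x[p_x > 0] * np.log(p_x[p_x > 0]))
    h_y = -np.sum(p_y[p_y > 0] * np.log(p_y[p_y > 0]))
    h_xy = -np.sum(p_xy[p_xy > 0] * np.log(p_xy[p_xy > 0]))
    nmi = 2.0 * (h_x + h_y - h_xy) / (h_x + h_y)
    return 1.0 - nmi

def label_distance(lab_a, lab_b):
    """1 - DSC between two binary label images, cf. (19)."""
    return 1.0 - dice_coefficient(lab_a, lab_b)
```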
C. Atlas Selection Results

1) Qualitative Results: Fig. 5 shows an example of the selected atlases along the atlas selection order from 1st to 10th (described in the last paragraph of Section II). The selection results based on the methods of [11], [15], [16], LICAS-1, and LICAS-2 are shown in Fig. 5 from top to bottom, respectively. The red surface is the test image's ground truth $L_g$, and the surface of each selected label image $L_k$ ($k = 1, 2, \ldots, 10$) is shown in green. As seen in Fig. 5, the selected images in the bottom two rows (LICAS-1 and LICAS-2) are closer to the ground truth than the results (top three rows) based on the other three methods [11], [15], [16]. This shows that our proposed atlas selection method, LICAS, can improve the performance of atlas selection.
2) Quantitative Results: To evaluate the performance of the different atlas selection methods quantitatively, leave-one-out validation was applied. For each test sample, the first 10 atlases were selected according to each method's atlas selection order. Then, the DSC value between each selected atlas $L_k$ ($k = 1, 2, \ldots, 10$) and the ground truth $L_g$ was computed. The average DSC values over all 60 tests are shown in Fig. 6. It can be seen that the average DSC values of our LICAS-1 (red bars) and LICAS-2 (blue bars) are higher than those of the existing methods [11] (cyan bars), [15] (magenta bars), and [16] (green bars) in most cases along the atlas selection order. This demonstrates that, in the statistical sense, employing the label images to constrain the manifold projection indeed helps to improve atlas selection.

Fig. 5. Randomly picked example with the ground truth segmentation shown as the red surface. Along the atlas selection order from 1 to 10, the selected label images (green surfaces) are presented. The results, from top to bottom, are obtained using the NMI-based method [11], LEAP [15], non-LICAS [16], LICAS-1, and LICAS-2, respectively.

Fig. 6. Statistical performance of atlas selection for the five methods along the atlas selection order from 1 to 10.

According to the description in Section V-B, the main difference between the proposed method and the methods in [15] and [16] is that the label image information is introduced in LICAS. From Fig. 6, it can be seen that the results obtained by LICAS-1 are significantly better (p < 0.05, t-test) than those of the LEAP method [15]. The results of LICAS-2 are better than those of [16] in most cases, with some rare exceptions, e.g., the 4th selected atlas. Theoretically, the overlap should decline along the atlas selection order, since the ideal selection proceeds from the most similar atlas to the least similar; thus, the DSC value should decrease along the selection order. In practice, however, this is not always the case.

D. Segmentation Results

To compare the effect of the selected atlases on the final segmentation performance, the selected atlases (label images) were combined into a template for segmentation. We applied the same weight computation algorithm, described in Section III, to the method in [16], LICAS-1, and LICAS-2. For the methods in [11] and [15], following their original formulations, the combination weights were computed from the NMI values between the selected atlases and the test image.
1) Qualitative Results: By combining the selected 10 atlases into a template $L_a$, each test image can be segmented. To qualitatively compare segmentation performance, we randomly selected six test samples from the 60-image data set, as shown in Fig. 7. Each column contains the segmentation results of one test sample based on the five different methods [11], [15], [16], LICAS-1, and LICAS-2 (from top to bottom). The ground truth $L_g$ is described by the red surface and the surface of the segmentation result $L_a$ is presented in yellow. Compared with the ground truth, the automatic segmentation results obtained by LICAS-1 and LICAS-2 are closer than the results of the methods in [11], [15], and [16]. This illustrates that the proposed LICAS can also help to improve segmentation performance.

Fig. 7. Some segmentation results of the five methods. Yellow surfaces are the automatic segmentation results and red surfaces are the ground truth.

Fig. 8. (a) Average DSC of the segmentation results of the five methods, varying the number of selected atlases from 1 to 10. (b) Comparison of the segmentation results by box-and-whisker diagrams.

2) Quantitative Results: Based on leave-one-out cross validation, the average DSC values of all 60 test samples using the different methods are shown in Fig. 8(a), with the number of selected atlases varying from 1 to 10. Comparing the results of the method in [11] (cyan line) with those of the method in [16] (green line), the green line is higher than the cyan line in all cases. This means that atlas selection in the manifold space helps to improve the performance of multiatlas based segmentation. Comparing LEAP [15] with our previous work [16], the magenta line is far below the other lines. This may be because, compared with the MDS algorithm used in [15], the main merit of the LPP algorithm used in [16] is that LPP attempts to preserve not only the local neighborhood in the projection but also the original data's properties. Thus, the atlases selected by this method are more accurate than those of LEAP [15]. More importantly, in [16] the weights of atlas combination are assigned globally by analyzing the distribution of the selected atlases' data points in a manifold subspace. Note that, comparing these state-of-the-art methods [11], [15], [16] with the proposed LICAS-1 (red line) and LICAS-2 (blue line), the lines of LICAS-1 and LICAS-2 are higher than the other three lines. This suggests that the proposed method is effective in improving the final segmentation performance. More specifically, compared with the methods using NMI to build the high-dimensional manifold, LICAS-1 obtained significantly better results (p < 0.05, t-test) than the method in [15]. Furthermore, comparing LICAS-2 with our previous method [16], the results of LICAS-2 (blue line) are better than the green line in all cases.

Furthermore, box-and-whisker diagrams were used to compare the segmentation results statistically, as shown in Fig. 8(b). The figure describes five indices: smallest observation, lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation. The red crosses are data points (outliers) below the smallest observation. It can be seen that the median DSC values of LICAS-1 and LICAS-2 are higher than those of the other methods [11], [15], [16], and that the interquartile ranges (Q3-Q1) of the LICAS variants are smaller than those of the other three methods. This illustrates that the proposed method LICAS can yield better and more stable segmentation results. It should be noted that the results of LICAS-1 seemingly have more outliers than the other methods; even so, they are still above the smallest observations of the other methods.

3) Weights of Combination: To illustrate the performance of the proposed weight assignment method in atlas combination, we compared our approach with the classical NMI-based weight assignment approaches [11], [15]. For a fair comparison, the atlases used for the NMI-based weighted combination are the same as those selected by LICAS-1. The weight $w_k$ for a selected atlas $A_k$ in the NMI-based methods [11], [15] is computed as
$$w_k = \frac{\mathrm{NMI}(T, A_k)}{\sum_{k=1}^{K} \mathrm{NMI}(T, A_k)}. \tag{20}$$

Fig. 9. Based on the same atlas selection results (LICAS-1), comparison of the influence of different combination weights on the segmentation performance.

The final segmentation results, averaged over the 60 tests, are shown in Fig. 9. Varying the number of selected atlases from 1 to 59, it can be seen that the results based on the proposed method (red line) are higher than the NMI-based results (green line) in most cases. In particular, as the number of selected atlases becomes larger, the NMI-based results become worse, while the results based on our method remain stable even with a large number of selected atlases. This shows that the proposed subspace analysis for weighted combination is better suited to the atlas combination model.
4) Segmentation Error Distribution: Finally, the Mollweide map [41] was used to illustrate the spatial distribution of segmentation errors over the prostate surface. The top and bottom areas of the Mollweide map represent the apex and the cranial end of the prostate, respectively. For each test sample, the shortest distance between the surfaces of the ground truth $L_g$ and the automatic segmentation $L_a$ was first computed at every point. Then, the computed distances were projected onto the Mollweide map. By averaging the results of the 60 test samples, the segmentation errors are presented as pseudocolor maps in Fig. 10. From top to bottom, the rows show the results based on the methods [11], [15], [16], LICAS-1, and LICAS-2, respectively. Each column contains the results for a different number of selected atlases: 1, 5, and 10. Comparing the color distributions on the Mollweide maps obtained by the different methods, the segmentation errors of LICAS-1 and LICAS-2 are obviously smaller than those of the other methods. Furthermore, it can be seen that the larger segmentation errors mainly occur in the apex and cranial regions for all methods.
E. Parameter Setting Discussion

1) Dimensionality d: The proper dimensionality $d$ is essentially decided by the intrinsic degrees of freedom of the prostate structure. In practice, it is hard to estimate due to the complexity of the prostate structure. In our experiments, we evaluated the segmentation results with the dimensionality varying from 1 to 59, where 59 is the upper bound defined by the size of the training data. We selected five results based on different values of $d$ to illustrate the performance, as shown in Fig. 11, where (a) and (b) are the results of LICAS-1 and LICAS-2, respectively. It can be seen that the best segmentation performance was achieved by setting the reduced dimensionality $d$ to 57 for both LICAS-1 and LICAS-2. This suggests that a dimensionality of 57 may be close to the intrinsic degrees of freedom of the anatomical structure of the prostate. The results reported in the other parts of this paper are all based on this dimensionality. Similarly, we set the embedding dimension $d$ to 57 for [16] and 40 for [15], respectively, according to the best performance obtained by varying $d$ from 1 to 59.

Fig. 10. Spatial distribution of automatic segmentation errors on the Mollweide map based on the five methods.

TABLE I. Average DSC values between the first selected atlas and the ground truth, varying $\alpha$ from 0 to 1.

2) Parameter $\alpha$: The most important parameter in the proposed method is $\alpha$ in the objective function (5), which weighs the importance of the raw image and the label image for atlas selection in LICAS. We evaluated the performance with $\alpha$ varying from 0 to 1 in steps of 0.1. Based on the first selected atlas, the average DSC values between $L_g$ and $L_1$ are shown in Table I. It can be seen that the best result was obtained by setting $\alpha = 0.5$ for LICAS-1 and $\alpha = 0.3$ for LICAS-2. This indicates that the label images play a more important role in atlas selection than the raw images for $\alpha \leq 0.5$. It is worth mentioning that our previous work [16] is a special case of LICAS-2 obtained by setting $\alpha$ to 1.

Fig. 11. Comparison of segmentation performance by varying the reduced dimensionality d. (a) LICAS-1. (b) LICAS-2.
VI. CONCLUSION

In this paper, we proposed a novel manifold learning based atlas selection method and a new weight computation algorithm for atlas combination in multiatlas based segmentation. In the atlas selection step, the method employs the label images to constrain the manifold projection in order to reduce the influence of surrounding anatomical structures in the raw images. Constrained by the label images, the manifold projection helps uncover the intrinsic similarity between the regions of interest across images. In the atlas combination step, the weights for combining the selected label images are computed by reconstructing the data point of the test image from the data points of the selected raw images in the lower-dimensional space. Compared with three other state-of-the-art atlas selection methods [11], [15], [16], the experimental results showed that the atlases selected by the proposed method are closer to the test images and that the final segmentation performance is also improved. To the best of our knowledge, the proposed method is the first work that employs the label images to reduce the influence of other anatomical structures in raw images for atlas selection in multiatlas based segmentation, and the first that computes the weights for atlas combination using subspace analysis.

Although the performance of atlas selection and the final segmentation has been improved compared with the existing methods [11], [15], [16], it can be seen from Fig. 10 that the segmentation errors are much larger in the apex and caudal regions of the prostate. Thus, in our future work, we will investigate methods to improve the segmentation performance in these regions. We will also study more efficient methods to further improve the performance of atlas selection and segmentation.
REFERENCES

[1] P. Yan, S. Xu, B. Turkbey, and J. Kruecker, "Discrete deformable model guided by partial active shape model for TRUS image segmentation," IEEE Trans. Biomed. Eng., vol. 57, no. 5, pp. 1158-1166, May 2010.
[2] X. Gao, B. Wang, D. Tao, and X. Li, "A relay level set method for automatic image segmentation," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 2, pp. 518-525, Apr. 2011.
[3] D. Pham, C. Xu, and J. Prince, "Current methods in medical image segmentation," Annu. Rev. Biomed. Eng., vol. 2, no. 1, pp. 315-337, 2000.
[4] Y. Guo, Y. Zhan, Y. Gao, J. Jiang, and D. Shen, "MR prostate segmentation via distributed discriminative dictionary (DDD) learning," in Proc. IEEE 10th Int. Symp. Biomed. Imaging (ISBI), San Francisco, CA, USA, Apr. 2013.
[5] Y. Gao, S. Liao, and D. Shen, "Prostate segmentation by sparse representation based classification," in Medical Image Computing and Computer-Assisted Intervention (MICCAI). Berlin, Germany: Springer, 2012, pp. 451-458.
[6] N. Werghi, Y. Xiao, and J. P. Siebert, "A functional-based segmentation of human body scans in arbitrary postures," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 1, pp. 153-165, Feb. 2006.
[7] A. Elnakib, G. Gimel'farb, J. S. Suri, and A. El-Baz, "Medical image segmentation: A brief survey," in Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies. New York, NY, USA: Springer, 2011, pp. 1-39.
[8] T. Rohlfing, R. Brandt, R. Menzel, D. Russakoff, and C. Maurer, "Quo vadis, atlas-based segmentation?" in Handbook of Biomedical Image Analysis. New York, NY, USA: Kluwer Academic/Plenum, 2005, pp. 435-486.
[9] S. Martin, V. Daanen, and J. Troccaz, "Atlas-based prostate segmentation using an hybrid registration," Int. J. Comput. Assisted Radiol. Surg., vol. 3, no. 6, pp. 485-492, 2008.
[10] J. Dowling, J. Fripp, P. Freer, S. Ourselin, and O. Salvado, "Automatic atlas-based segmentation of the prostate: A MICCAI 2009 prostate segmentation challenge entry," in Proc. Workshop Med. Image Comput. Comput. Assist. Interv. II, London, U.K., pp. 17-24.
[11] S. Klein et al., "Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information," Med. Phys., vol. 35, pp. 1407-1417, Mar. 2008.
[12] T. Rohlfing, R. Brandt, R. Menzel, and C. Maurer, "Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains," Neuroimage, vol. 21, no. 4, pp. 1428-1442, 2004.
[13] P. Aljabar, R. Heckemann, A. Hammers, J. Hajnal, and D. Rueckert, "Multi-atlas based segmentation of brain images: Atlas selection and its effect on accuracy," Neuroimage, vol. 46, no. 3, pp. 726-738, 2009.
[14] X. Artaechevarria, A. Muñoz-Barrutia, and C. Ortiz-de-Solorzano, "Combination strategies in multi-atlas image segmentation: Application to brain MR data," IEEE Trans. Med. Imag., vol. 28, no. 8, pp. 1266-1277, Aug. 2009.
[15] R. Wolz, P. Aljabar, J. Hajnal, A. Hammers, and D. Rueckert, "LEAP: Learning embeddings for atlas propagation," Neuroimage, vol. 49, no. 2, pp. 1316-1325, 2010.


[16] Y. Cao et al., "Segmenting images by combining selected atlases on manifold," in Medical Image Computing and Computer-Assisted Intervention (MICCAI). Berlin, Germany: Springer, 2011, pp. 272-279.
[17] S. K. Warfield, K. H. Zou, and W. M. Wells, "Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation," IEEE Trans. Med. Imag., vol. 23, no. 7, pp. 903-921, Jul. 2004.
[18] T. R. Langerak et al., "Label fusion in atlas-based segmentation using a selective and iterative method for performance level estimation (SIMPLE)," IEEE Trans. Med. Imag., vol. 29, no. 12, pp. 2000-2008, Dec. 2010.
[19] P. Coupé et al., "Patch-based segmentation using expert priors: Application to hippocampus and ventricle segmentation," Neuroimage, vol. 54, no. 2, pp. 940-954, 2011.
[20] F. Rousseau, P. A. Habas, and C. Studholme, "A supervised patch-based approach for human brain labeling," IEEE Trans. Med. Imag., vol. 30, no. 10, pp. 1852-1862, Oct. 2011.
[21] C. Studholme, D. Hill, and D. Hawkes, "An overlap invariant entropy measure of 3D medical image alignment," Pattern Recognit., vol. 32, no. 1, pp. 71-86, 1999.
[22] Q. Wang et al., "Construction and validation of mean shape atlas templates for atlas-based brain image segmentation," in Information Processing in Medical Imaging. Berlin, Germany: Springer, 2005, pp. 689-700.
[23] M. Wu, C. Rosano, P. Lopez-Garcia, C. Carter, and H. Aizenstein, "Optimum template selection for atlas-based segmentation," Neuroimage, vol. 34, no. 4, pp. 1612-1618, 2007.
[24] J. Hamm, D. H. Ye, R. Verma, and C. Davatzikos, "GRAM: A framework for geodesic registration on anatomical manifolds," Med. Image Anal., vol. 14, no. 5, pp. 633-642, 2010.
[25] S. Gerber, T. Tasdizen, S. Joshi, and R. Whitaker, "On the manifold structure of the space of brain images," in Medical Image Computing and Computer-Assisted Intervention (MICCAI). Berlin, Germany: Springer, 2009, pp. 305-312.
[26] R. Wolz, P. Aljabar, J. Hajnal, and D. Rueckert, "Manifold learning for biomarker discovery in MR imaging," in Machine Learning in Medical Imaging. Berlin, Germany: Springer, 2010, pp. 116-123.
[27] A. K. H. Duc, M. Modat, K. K. Leung, T. Kadir, and S. Ourselin, "Manifold learning for atlas selection in multi-atlas based segmentation of hippocampus," Proc. SPIE, vol. 8314, Feb. 2012, Art. ID 83140Z.
[28] M. Wang, H. Li, D. Tao, K. Lu, and X. Wu, "Multimodal graph-based reranking for web image search," IEEE Trans. Image Process., vol. 21, no. 11, pp. 4649-4661, Nov. 2012.
[29] M. Wang et al., "Unified video annotation via multigraph learning," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5, pp. 733-746, May 2009.
[30] Y. Guo, G. Wu, J. Jiang, and D. Shen, "Robust anatomical correspondence detection by hierarchical sparse graph matching," IEEE Trans. Med. Imag., vol. 32, no. 2, pp. 268-277, Feb. 2013.
[31] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[32] I. Borg and P. Groenen, Modern Multidimensional Scaling: Theory and Applications. New York, NY, USA: Springer, 2005.
[33] X. He and P. Niyogi, "Locality preserving projections," in Proc. Adv. Neural Inf. Process. Syst., vol. 16, 2004, pp. 153-160.
[34] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[35] Y. Cao, X. Li, and P. Yan, "Multi-atlas based image selection with label image constraint," in Proc. 11th Int. Conf. Mach. Learn. Applicat. (ICMLA), vol. 1, Boca Raton, FL, USA, 2012, pp. 311-316.
[36] F. R. K. Chung, Spectral Graph Theory, vol. 92. Providence, RI, USA: Amer. Math. Soc., 1997.


[37] A. Antoniou and W.-S. Lu, Practical Optimization: Algorithms and Engineering Applications. New York, NY, USA: Springer, 2007.
[38] M. J. D. Powell, "A fast algorithm for nonlinearly constrained optimization calculations," in Numerical Analysis (Lecture Notes in Mathematics), vol. 630, G. A. Watson, Ed. Berlin, Germany: Springer, 1978, pp. 144-157.
[39] L. Carbo, M. Haider, and I. Yetik, "Supervised prostate cancer segmentation with multispectral MRI incorporating location information," in Proc. IEEE Int. Symp. Biomed. Imaging: From Nano to Macro, Chicago, IL, USA, 2011, pp. 1496-1499.
[40] S. Klein, M. Staring, K. Murphy, M. Viergever, and J. Pluim, "elastix: A toolbox for intensity-based medical image registration," IEEE Trans. Med. Imag., vol. 29, no. 1, pp. 196-205, Jan. 2010.
[41] J. Snyder, Flattening the Earth: Two Thousand Years of Map Projections. Chicago, IL, USA: Univ. Chicago Press, 1993, pp. 112-113.

Pingkun Yan (S'04-M'06-SM'10), photograph and biography not available at the time of publication.

Yihui Cao received the B.E. degree in communication engineering from the School of Telecommunication Engineering, Xidian University, Xi'an, China, in 2010, and the M.E. degree from the Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, in 2013. His current research interests include computer vision, medical image processing, and machine learning.

Yuan Yuan is a Full Professor with the Chinese Academy of Sciences, Beijing, China. Her current research interests include visual information processing and image/video content analysis. She has published over 100 papers, including over 70 in reputable journals such as the IEEE Transactions and Pattern Recognition, as well as conference papers at venues including CVPR, BMVC, ICIP, and ICASSP.

Baris Turkbey, photograph and biography not available at the time of publication.

Peter L. Choyke, photograph and biography not available at the time of publication.
