
A Hierarchical Graph-Based Segmentation Technique for High-Resolution Volume Data

Runzhen Huang and Kwan-Liu Ma


University of California at Davis

Abstract

This paper presents a novel interactive approach to the problem of segmenting high-resolution volume data. The segmentation process starts by constructing a hierarchical graph representation of a coarser resolution of the data that is small enough to fit in the video memory. This graph enables the user to interactively sample and edit a feature of interest by drawing strokes on slices of the data while watching images of the segmented volume. A subgraph representing the volumetric feature of interest is derived with a growing process, and can be used to extract the high-resolution version of the feature from the original volume data through an automatic mapping and refinement procedure based on the statistical properties of the voxels internal to the feature. Our hierarchical graph representation and the associated operations overcome the ambiguous boundary conditions, called partial volume effects, caused by down-sampling, and provide three levels of detail to support the segmentation of fine features. We demonstrate with several examples the effectiveness of such a highly interactive approach to challenging 3D segmentation tasks.

1. Introduction

Modern 3D imaging techniques such as Computed Tomography (CT) used in Non-Destructive Testing (NDT) can generate very high resolution volume data; a 2048×2048×1024-voxel volume, for example, takes over 4 GB of storage space. The large size of the data provides more accurate information about the subjects of study, but also presents great challenges to the associated feature segmentation, modeling and visualization tasks. Most conventional segmentation algorithms, such as region growing, are computationally expensive and therefore fail to offer the desired interactivity on large volume data. The commonly used surface extraction methods, such as marching cubes [LC87], and rendering methods, such as direct volume rendering, can also become problematic due to the large size of the volume data. In this paper, we present an interactive segmentation technique for high-resolution volume data. Interactivity is made possible by using a graph representation of a down-sampled version of the data. The graph is hierarchical because some

of its nodes that belong to one feature can be fused into a higher-level graph node to represent the feature; meanwhile, these nodes can also be used to construct a lower-level but high-resolution graph from their corresponding high-resolution region extracted from the original data. One problem with down-sampling the volume data is the partial volume effect, which introduces fuzzy boundaries because multiple objects contribute to one boundary voxel. In addition, down-sampling blurs fine features and adds more ambiguity. Our hierarchical graph representation and the accompanying operations can effectively address these two problems. The user segments a feature by interactively drawing on slices of the down-sampled data or the original data while previewing images of the segmented volume. A metric is provided for the user to direct a greedy growing process which starts from the seed nodes selected by the user and results in a feature graph. This subgraph and the statistical properties of the feature are used to obtain the high-resolution version of the feature from the original volume data through a mapping and refinement procedure. Fine features inside blurry regions of the down-sampled data can also be segmented by partitioning the high-resolution graph constructed from their corresponding high-resolution region.

huangru@cs.ucdavis.edu, ma@cs.ucdavis.edu

The segmented fine features that belong to a bigger feature are combined with the other parts extracted by the mapping and refinement procedure to obtain the final results. An MRI head data set (Figure 9, left) and two NDT CT data sets (Figure 9, right, and Figure 6, left) were used to test our approach. The results show that the features of interest in these three data sets can be correctly and efficiently segmented.

2. Related Work

Volume segmentation partitions a 3D image into regions in each of which the voxels share similar characteristics. Segmentation techniques can roughly be divided into two categories: hard segmentation and soft (or fuzzy) segmentation. Soft segmentation allows regions or classes to overlap while hard segmentation does not [YK01]. Soft segmentation is often used to handle partial volume effects in medical imaging. One approach to the hard segmentation problem is graph-based [SM00] [Wu93] [Boy01]. It represents an image as a graph and employs a graph-partitioning algorithm to find a globally optimal solution for segmentation. This method assumes the image is defect-free and does not support fuzzy segmentation. Furthermore, large graphs can quickly deteriorate performance. [Wu93] presents an improved Gomory-Hu algorithm [GH61] which uses one node to condense graph nodes and edges that do not contain minimum cuts, and therefore reduces the size of the graph and improves performance. [Boy01] introduces a supervised graph-based segmentation which receives object or background samples and reuses previous results to speed up the computation as more samples are identified. However, performance is still an issue for large image data. Moreover, a post-processing step is often needed to get the desired results due to over-segmentation by graph-based methods. Compared to hard segmentation, in which one voxel can only belong to one object, fuzzy segmentation can handle voxels that are part of more than one object. For example, a partial volume consists of such voxels because they are so close to a boundary that multiple objects contribute to their values. One popular fuzzy segmentation algorithm is fuzzy C-means [Dun74]. It derives a degree of membership of image elements for each object by minimizing an objective function that measures the differences between elements. Another methodology is to model the statistical properties of fuzzy regions. For example, [CHK91] models the partial volume with a Markov Random Field and seeks the globally optimal solution with Maximum a Posteriori (MAP) estimation. All these methods are costly because they rely on global optimization. To improve segmentation performance, programmable graphics hardware has been used in volume segmentation. [TLM03] classifies volume data with a hardware-accelerated neural network and a novel interface.

[SHN03] makes use of sophisticated graphics hardware functionality to enable fast region growing and interactive visualization. These approaches are limited by the available video memory of the graphics hardware and hence do not work well for large data sets. The increasing size of volume data challenges volume segmentation. To handle large data sets, multi-resolution techniques have been studied [LHJ99]. [LLdB00] uses an octree to smooth and organize multi-resolution data sets. The initial segmentation obtained in low-resolution data is then refined by performing filtering and interpolation along the tree until the highest resolution level is reached. But their approach employs local-centroid clustering [WS88] without connectivity constraints and does not consider partial volume effects, which can generate inaccurate segmentation results. The octree representation also lacks flexibility to represent complicated features. Our approach represents down-sampled image data with a hierarchical graph which supports fuzzy segmentation and feature refinement. Instead of using costly graph-cut algorithms, we employ a greedy growing process to gain interactivity as well as the opportunity to assist and direct the growing with five feature editing operations. Fine features can be segmented with high-resolution graphs and joined into a bigger feature by incorporating them with other high-resolution features, which are obtained by mapping and refining the blurry low-resolution boundaries to the original ones. We show that our method can effectively address partial volume effects and produce more accurate segmentations of large volume data.

3. Data Representation

The size of a large data set hampers interactive rendering and segmentation. In our approach, a low-resolution version of the data is generated for previewing, as well as for providing informative elements for the user's interaction. Here an informative element refers to a subvolume which consists of connected voxels that likely belong to the same feature. A data set is partitioned into many subvolumes according to a criterion derived from data statistics. We organize these subvolumes with a hierarchical graph to support the subsequent interactive segmentation steps.

3.1. Data Partition

Our data partition is based on a bottom-up merging process which starts from a graph in which each voxel is a node, two adjacent voxels are connected with an edge, and the edge weight equals the difference of the two voxel values. This process consists of two steps. First, all voxels are grouped into initial subvolumes, each of which only contains voxels connected by zero-weight edges, that is, voxels having the same value. The initial graph is constructed from these initial subvolumes. Second, these subvolumes are merged iteratively.
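The first step, grouping equal-valued, face-adjacent voxels into initial subvolumes, amounts to connected-component labeling restricted to zero-weight edges. The following is a minimal sketch of that step, assuming a NumPy volume and SciPy's labeling routine; the function name and the per-value loop are our own illustration rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def initial_subvolumes(volume: np.ndarray) -> np.ndarray:
    """Group voxels connected by zero-weight edges (equal-valued, face-adjacent
    voxels) into initial subvolumes. Returns a label volume in which each
    initial subvolume carries a unique positive integer id."""
    labels = np.zeros(volume.shape, dtype=np.int64)
    structure = ndimage.generate_binary_structure(volume.ndim, 1)  # 6-connectivity in 3D
    next_id = 0
    for value in np.unique(volume):          # slow for data with many distinct values; a sketch only
        mask = (volume == value)
        comp, n = ndimage.label(mask, structure=structure)
        labels[mask] = comp[mask] + next_id  # offset so ids stay unique across values
        next_id += n
    return labels
```

Each resulting label would then become one initial subvolume whose statistics and boundary voxels can be cached for the merging step.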

Figure 1: A 2D example of data partitioning and segmentation. (a) Two features S1 and S2 are partitioned into regions A, B and C; (b) the graph of A, B and C; (c) the resulting graph of the segmentation.

Figure 2: Three levels of detail based on the graph.

The merging is based on a merge function which considers the global statistics of the data as well as the local statistical properties of the two subvolumes under study. The statistical properties of a subvolume, together with its boundary voxels, are calculated and cached in the subvolume. Figure 1 (a) gives a 2D example: the data contains two features but is partitioned into components A, B and C. A and C each represent a different feature, while B is a fuzzy region which can be part of either feature. The merging procedure starts from the initial subvolumes and stops when the merging condition is no longer satisfied. The merge function M can be defined as:
M(a, b) = \frac{N_g - N_a - N_b}{N_g}\,(e_g - e_{ab}) + \frac{N_a + N_b}{N_g}\left(\frac{N_b}{N_a + N_b}\,e_a + \frac{N_a}{N_a + N_b}\,e_b\right)

where a and b are the two subvolumes under consideration, Na and Nb are their voxel counts, Ng is the total number of homogeneous voxels in the data set, ea and eb are the average edge weights of a and b respectively, eg is the average weight of all edges connecting homogeneous voxels in the data, and eab is the weight of the edge between a and b. Ng and eg are global variables used to reflect the homogeneity of the image data. If eg is less than 1.0, merging does not happen because the data set is very homogeneous; otherwise a new graph is generated by merging. To calculate these two variables, a gradient magnitude threshold is selected based on the gradient magnitude histogram. Voxels whose gradient magnitudes are less than the threshold are defined as homogeneous voxels, and Ng and eg are then computed from these voxels. The merge function considers both the local and the global information of the data set. When it outputs a positive value, the merge happens. One subvolume can be merged into two different subvolumes, which can relieve partial volume effects; Figure 1 (c) illustrates an example. The merging step is an iterative procedure that terminates when no two subvolumes satisfy the merging condition.
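As a rough illustration of this preprocessing step, the sketch below computes Ng and eg from a gradient-magnitude threshold and evaluates the merge test for two subvolumes with cached statistics. It assumes the merge function in the form given above and uses illustrative field names (.n for voxel count, .e for average internal edge weight); it is not the authors' implementation.

```python
import numpy as np

def global_homogeneity_stats(volume, grad_threshold):
    """Ng: number of homogeneous voxels (gradient magnitude below the threshold);
    eg: average weight of the edges that connect two homogeneous voxels."""
    vol = volume.astype(np.float64)
    g0, g1, g2 = np.gradient(vol)
    homogeneous = np.sqrt(g0**2 + g1**2 + g2**2) < grad_threshold
    n_g = int(homogeneous.sum())

    weights = []
    for axis in range(3):                       # face-adjacent edges along each axis
        lo = [slice(None)] * 3
        hi = [slice(None)] * 3
        lo[axis], hi[axis] = slice(0, -1), slice(1, None)
        lo, hi = tuple(lo), tuple(hi)
        both = homogeneous[lo] & homogeneous[hi]
        weights.append(np.abs(vol[lo] - vol[hi])[both])
    weights = np.concatenate(weights)
    e_g = float(weights.mean()) if weights.size else 0.0
    return n_g, e_g

def should_merge(stats_a, stats_b, e_ab, n_g, e_g):
    """Merge test: evaluate M(a, b) as reconstructed above and merge when positive."""
    if e_g < 1.0:                               # very homogeneous data: no merging
        return False
    n_a, n_b = stats_a.n, stats_b.n
    local = (n_b * stats_a.e + n_a * stats_b.e) / (n_a + n_b)
    m = ((n_g - n_a - n_b) * (e_g - e_ab) + (n_a + n_b) * local) / n_g
    return m > 0.0
```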

3.2. Graph Representation

The subvolumes generated by the data partition are organized as a hierarchical graph to support fuzzy segmentation, feature editing and refinement. Each subvolume becomes a graph node, and an edge is created for every pair of adjacent subvolumes. The weight of the edge equals the difference of the average voxel values of the two subvolumes. A three-level graph can be constructed as shown in Figure 2. First, the low-resolution graph consists of the graph nodes generated from the down-sampled data. Second, a group of low-resolution graph nodes that belong to the same feature can be clustered into a feature node; these feature nodes compose the feature graph, and several feature nodes can also be assembled into one bigger feature node at this level. Last, a set of low-resolution graph nodes that contain a fine feature can be used to extract their corresponding high-resolution graph nodes, which make up the high-resolution graph, in order to segment fine features that are difficult to handle with the low-resolution graph. Note that only the high-resolution graphs of fine features need to be generated, so the graph size does not become too large. Figure 3 illustrates the data structure of a graph node. The node_data stores the boundary voxels and the statistical properties of the subvolume, while the graph_node maintains pointers to its parent node, neighboring nodes and offspring nodes. The data structure of the graph node and the three-level graph representation support feature segmentation, editing and refinement well. When segmenting a feature, the subvolumes which satisfy a uniformity criterion are clustered to form the feature's volume; Figure 1 (b) and (c) demonstrate a segmentation procedure based on the graph. When editing a feature, its graph nodes become the operational entities, which can be added, deleted, blocked, merged or split by drawing on slices without knowing the underlying graph.

Figure 3: The data structure of a graph node.
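To make the node layout concrete, here is a minimal Python sketch of the two-part structure; any field beyond those named in the text (boundary voxels, statistics, parent/neighbor/offspring pointers) is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import numpy as np

@dataclass
class NodeData:
    """Per-subvolume payload cached at partition time."""
    voxel_count: int
    histogram: np.ndarray          # value histogram of the subvolume's voxels
    avg_edge_weight: float         # average weight of the edges inside the subvolume
    boundary_voxels: np.ndarray    # (k, 3) coordinates of the subvolume's boundary voxels

@dataclass(eq=False)               # identity-based hashing so nodes can key dictionaries
class GraphNode:
    """A node at any of the three levels (low-resolution, feature, high-resolution)."""
    data: NodeData
    parent: Optional["GraphNode"] = None                                 # feature node this node was fused into
    neighbors: Dict["GraphNode", float] = field(default_factory=dict)    # neighbor -> edge weight
    offspring: List["GraphNode"] = field(default_factory=list)           # higher-resolution children
```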

The connected graph nodes at the different levels represent different levels of detail (LOD), as illustrated in Figure 2. Depending on the feature size, an appropriate detail level is selected to obtain an accurate segmentation. However, because of the large size of the high-resolution data, only the low-resolution graph is generated in preprocessing and used as the starting point of segmentation. The low-resolution result can then be used to confine the region where the high-resolution graph is constructed, and feature refinement is performed on the high-resolution graph in the same way the low-resolution feature was segmented. Compared to other data structures supporting LOD, such as octrees, the hierarchical graph representation is constructed based on the features of interest. Each graph node, that is, a subvolume, differentiates itself by its likelihood of belonging to one feature. From this viewpoint, the hierarchical graph representation is feature-centric, which is more flexible and intuitive for feature segmentation. The statistical properties and boundary voxels cached in each graph node also make this task more efficient. However, it is more complex and needs more storage space.

4. Interactive Feature Segmentation

Feature segmentation is performed based on the graph introduced in the previous section. Figure 4 shows the steps of the segmentation process. First, some graph nodes are selected as seed nodes by drawing on a slice and then identifying the graph nodes that intersect the strokes. Next, the neighboring nodes of the seed nodes are evaluated with a uniformity function to estimate how likely each of them belongs to the same feature as the seeds. By thresholding the uniformity value, a feature growing process is performed starting from the seed nodes, bringing into the feature all neighboring nodes whose uniformity values are larger than the threshold. Both an automatic growing method and a step-by-step growing method have been developed for segmenting different kinds of regions. The results can be edited by modifying the feature graph, and previewed by rendering the surfaces, volume and slices of the feature.
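For instance, if the partition stores its node ids in a label volume, mapping a stroke to seed nodes could look like the following sketch (the label-volume representation and the function name are our assumptions):

```python
import numpy as np

def seed_nodes_from_stroke(label_volume: np.ndarray, stroke_voxels: np.ndarray) -> set:
    """Collect the ids of the graph nodes intersected by a user stroke,
    given the stroke as an array of (z, y, x) voxel coordinates on a slice."""
    z, y, x = stroke_voxels.T
    return set(np.unique(label_volume[z, y, x]).tolist())
```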

Figure 4: The steps of feature extraction based on the graph. As shown in Figure 2, the graph data contains three levels of detail and the volume data includes both low-resolution and high-resolution data.

After the low-resolution feature is segmented by growing and editing, a mapping and refinement operation is performed to obtain the high-resolution feature. Fine features contained in blurry regions can be directly segmented with high-resolution graphs. Figure 5 displays the user interface of our system. The left image in Figure 6 shows the NDT Box data, which was created to challenge our segmentation and visualization capability. The right image in Figure 6 shows the result of using a 2D transfer function to isolate the watch in the Box, but the result is not satisfactory. Figure 7 shows the segmented result of the watch using our system, as well as the corresponding internal graph representation. Note that generally the user does not need to see or interact with the graph, but when segmenting very complex features the user can operate directly on the graph to get optimal results.

4.1. Uniformity Criterion

The feature growing relies on the uniformity value, which measures the overlap of two subvolumes' histograms as well as their edge weight. When two subvolumes are less homogeneous, their histograms may have some overlap; the bigger the overlap, the more likely the two subvolumes belong to the same feature. But in some cases, as in Figure 1, even though the regions have different iso-values and their histograms have no overlap, region B still belongs to both A and C. To handle this case, the edge weight of the subvolumes is also taken into account in the uniformity function. The uniformity function U is defined as follows:
U(a, b) = \frac{1}{2}\left[\sum_{i=0}^{n} \min\!\left(\frac{h_a(i)}{N_a}, \frac{h_b(i)}{N_b}\right)\right] + \frac{1}{2}\left(1 - \frac{e_{ab}}{e_{max}}\right)

where a and b are the two subvolumes under evaluation, ha and hb are their histograms, n is the maximum voxel value in the data set, MIN is the minimum function, Na and Nb are the voxel counts of a and b respectively, eab is the weight of the edge between a and b, and emax is the maximum edge weight in the data set.
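A direct transcription of this function as a sketch, assuming both histograms are binned over the full value range 0..n:

```python
import numpy as np

def uniformity(hist_a, n_a, hist_b, n_b, e_ab, e_max):
    """U(a, b): average of the normalized histogram overlap and the
    inverted, normalized weight of the edge joining a and b."""
    overlap = np.minimum(hist_a / n_a, hist_b / n_b).sum()
    return 0.5 * overlap + 0.5 * (1.0 - e_ab / e_max)
```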

Figure 6: Left: The Box rendered with hardware-accelerated NPR. Right: Rendering of the watch in the data using a conventional transfer function method.

Figure 5: The system interface in the middle of segmenting the watch in the Box data set. (a) Top left: sampling and editing the watch by drawing strokes on the low-resolution slice: adding nodes with the red stroke, blocking nodes with the blue stroke, and deleting nodes with the cyan stroke. (b) Top right: the bounding voxel facets of the high-resolution watch under segmentation; the 3D window confines the region to be edited, and its intersection area on the slice is also shown in (a). (c) Bottom left: the high-resolution slice of the watch; the white boundary lines in (a), (b) and (c) depict the intersection between the slice and the bounding voxel facets. (d) 2D transfer functions used to volume render the data.

Figure 7: Left: volume rendering of the segmented watch; Right: its internal graph representation: the red balls represent the feature graph nodes segmented with red strokes, feature growing and editing; the yellow balls represent the neighboring nodes outside the watch; the blue balls are blocking graph nodes identified with blue strokes.

The uniformity value ranges between zero and one; the bigger the value, the more likely the node belongs to the feature. Whenever seed nodes are selected, the uniformity values of their neighboring nodes are calculated. The bottom left window in Figure 5 shows the sorted uniformity values of the neighboring nodes as line markers along the bottom slider; the height of a marker is proportional to the voxel count of the corresponding node.

4.2. Feature Growing and Editing

Two growing methods, automatic growing and step-by-step growing, have been developed to segment different kinds of regions.

A region with a sharp boundary can be efficiently segmented by the automatic growing method. Whenever a node is merged into the region, the automatic growing brings the node's neighbors into consideration and recalculates the uniformity values of all candidate nodes. This procedure is performed iteratively until no more nodes can be merged into the region. The step-by-step growing only considers the direct neighbors of the already-included nodes at each step, and the uniformity threshold can be redefined for each growing step. This method needs more interaction but is better suited to segmenting a region with a low-contrast boundary. The two growing methods can be used together to segment a feature: first, automatic growing is employed to grow the nodes with large likelihood values; then step-by-step growing is used to grow fuzzy nodes, where some editing operations might be applied to avoid ill-growing caused by down-sampling. The uniformity threshold used in growing is defined by moving the bottom slider shown in Figure 5(c).
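The two growing modes can be sketched as follows, building on the GraphNode and uniformity sketches above; the exact signature of the scoring callback, which rates a candidate against the feature it may join, is our assumption.

```python
def automatic_growing(seeds, threshold, uniformity_fn, blocked=frozenset()):
    """Automatic growing: after every merge the candidate set is re-evaluated,
    and any neighbor whose uniformity exceeds the threshold joins the feature."""
    region = set(seeds)
    changed = True
    while changed:
        changed = False
        candidates = {nbr for node in region for nbr in node.neighbors
                      if nbr not in region and nbr not in blocked}
        for nbr in candidates:
            if uniformity_fn(nbr, region) > threshold:
                region.add(nbr)
                changed = True
    return region

def step_by_step_growing(region, threshold, uniformity_fn, blocked=frozenset()):
    """One growing step: only the direct neighbors of the current region are
    considered, under a threshold the user may redefine at each step."""
    candidates = {nbr for node in region for nbr in node.neighbors
                  if nbr not in region and nbr not in blocked}
    return region | {nbr for nbr in candidates if uniformity_fn(nbr, region) > threshold}
```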

Whenever the threshold is defined, the bounding facets of the segmented voxels are depicted for previewing in the top right window. The graph representation also facilitates editing operations, including adding, deleting, blocking, merging, and splitting nodes. Adding and deleting have been discussed in Section 3.2. The blocking operation prevents overgrowing by explicitly defining nodes that are definitely outside the feature. For example, two close but separate objects might become connected due to partial volume effects after down-sampling, which causes over-segmentation; this situation can be avoided by defining blocking graph nodes which do not participate in the growing procedure. Merging is similar to the merging in preprocessing: it reduces the graph size, or uses a feature node to replace all its offspring nodes. Splitting separates one graph node into at least two nodes, which is used to correct an incorrect merge. Generally the graph is invisible to the user, and the editing operations are applied to the graph according to the user's drawing on the slice with different types of strokes. As shown in Figure 5(a), the adding, deleting and blocking operations use strokes with different colors. Figure 5(a) and (c) illustrate the white intersection lines of the slice and the bounding facets of the segmented volume; this connection effectively helps the user locate the place that needs to be edited. A 3D window, as in Figure 5(b), is also provided to confine the region eligible for feature segmentation and editing. For example, the user can delete all feature graph nodes contained in a 3D window, or apply growing only within it. The size and position of the 3D window can be changed via the knobs on the ends of its axes. Moreover, the 3D window is also used to define the graph nodes which serve in the construction of the high-resolution graph.

4.3. Fine Feature Segmentation

Fine features are difficult to segment directly with the down-sampled data and the low-resolution graph because they are blurred. Due to their small size, more details from the original data are extracted with a 3D window and represented as a high-resolution graph; the graph generation is similar to the data partition described in Section 3.1. The high-resolution slice of this region, as shown in Figure 5(c), is also provided to facilitate feature segmentation and editing as introduced in the previous section. The segmentation result obtained with a high-resolution graph is part of the final feature; therefore, it should be preserved in the later mapping and refinement procedure. In addition, editing operations applied on a high-resolution graph should update its corresponding low-resolution graph accordingly, and vice versa. By growing and editing the graph of a feature, the low-resolution or hybrid-resolution feature can be segmented interactively. This feature is then mapped to the original high-resolution data to refine blurry boundaries.
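As an illustration, extracting the windowed region from the original data and partitioning it could reuse the initial_subvolumes sketch from Section 3.1; the window layout and per-axis down-sample rates below are assumptions.

```python
def high_resolution_subgraph(volume_hires, window, rate):
    """Cut the high-resolution region selected by a 3D window given in
    low-resolution coordinates, then partition it as in Section 3.1."""
    (z0, z1), (y0, y1), (x0, x1) = window          # low-resolution bounds
    rz, ry, rx = rate                              # down-sample rate per axis
    region = volume_hires[z0 * rz:z1 * rz, y0 * ry:y1 * ry, x0 * rx:x1 * rx]
    return initial_subvolumes(region)              # helper from the Section 3.1 sketch
```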

Figure 8: Mapping. (a) A voxel in the low-resolution volume maps to a voxel block in the high-resolution volume; (b) the boundary refinement procedure.

4.4. Mapping and Refinement

Since the high-resolution sub-feature should be preserved, the mapping and refinement only take as input those low-resolution graph nodes not related to the sub-feature, and merge the output with the sub-feature. The mapping procedure maps the boundary voxels of a low-resolution feature to the original volume data, and then refines the fuzzy boundaries to obtain the high-resolution ones. One voxel in the low-resolution data corresponds to a voxel block whose position and size can be calculated from the down-sample rate, as illustrated in Figure 8 (a). A refinement method is then employed to obtain the accurate boundary of the high-resolution feature. Figure 8 (b) gives a 2D example, where (I) is a low-resolution feature which is mapped to its high-resolution version shown in (II); the refined boundary is depicted in (III), where v1 is included but v2 is excluded. The refinement process employs an inflation algorithm on candidate voxels, which comprise all mapped boundary blocks and all neighboring blocks obtained by morphologically dilating the mapped boundary blocks [Loh98]. The inflation refinement grows the boundary voxels from the outermost internal voxels surrounded by the boundary blocks to the outermost candidate voxel layer. The criterion used in the inflation is calculated from the average value v̄ and the standard deviation σ of all internal block voxels: whenever a voxel of the next layer has a value within [v̄ − aσ, v̄ + aσ], the current layer is inflated to include it. Here a is a user-defined parameter that controls the criterion.
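A small sketch of the two ingredients, the voxel-to-block mapping and the inflation acceptance test; the interval with a scaling the standard deviation follows the reconstruction above.

```python
def block_of(voxel, rate):
    """High-resolution block of voxels corresponding to one low-resolution voxel."""
    (i, j, k), (rz, ry, rx) = voxel, rate
    return (slice(i * rz, (i + 1) * rz),
            slice(j * ry, (j + 1) * ry),
            slice(k * rx, (k + 1) * rx))

def inflation_accepts(value, internal_mean, internal_std, a):
    """Inflation criterion: a next-layer voxel is absorbed if its value lies
    within [mean - a*std, mean + a*std] of the feature's internal voxels."""
    return abs(value - internal_mean) <= a * internal_std
```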

4.5. Feature Rendering

A segmented feature can be depicted with surface rendering and hardware-accelerated volume rendering. While a segmentation is under way, the bounding facets of the segmented voxels are rendered for previewing. Smoother surfaces can be obtained with a method described in [HMMW03]: in short, the boundary is low-pass filtered to remove high frequencies [RK82] and then the marching cubes method is applied to extract the feature's surfaces. The user can also render the volume of a feature with 2D transfer functions [HM03] [KD98] accelerated by graphics hardware, as shown in Figure 5(d).
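This smoothing-plus-marching-cubes step could be approximated with off-the-shelf routines, for example as in the sketch below, which uses SciPy and scikit-image rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

def feature_surface(feature_mask: np.ndarray, sigma: float = 1.0):
    """Low-pass filter the binary feature mask, then extract its surface
    with marching cubes at the 0.5 iso-level."""
    smooth = gaussian_filter(feature_mask.astype(np.float32), sigma=sigma)
    verts, faces, normals, values = marching_cubes(smooth, level=0.5)
    return verts, faces
```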

5. Results

We present test results with three data sets. The first is an MRI head data set which contains a tumor and damaged tissue, as shown in the left image of Figure 9. Even though this data set has only 256×256×128 voxels, it is used to show that our technique works well for medical data too. A down-sampled volume of 64×64×32 voxels is used to generate the data graph; the preprocessing takes 5.14 seconds. The other two data sets were created by the nondestructive testing group at the Los Alamos National Laboratory to test the new visualization techniques we have developed. The first one, which we call the Box, has 1024×1024×512 voxels, as already shown in the left image of Figure 6. A variety of objects were put into a small box which was scanned to produce the volume data. In our study, the volume data is down-sampled to 256×256×128 voxels to generate the data graph; the preprocessing takes only 14.2 seconds. The second NDT data set is the scan of a flashlight, called Maglight, with 512×512×2048 voxels. This data set contains a lot of mechanical components. We down-sampled it to 128×128×512; the preprocessing takes 51.73 seconds. Figure 10 shows the tumor and damaged brain tissue segmented from the MRI head data with our method. The intersection lines of the slice and the bounding voxel facets reveal the accuracy of the results. For the Box data set, the watch, which contains a lot of fine features on the wristband, is segmented using our approach and shown in Figure 7. We notice that the parts between the wristband and the watch have some damage, which is confirmed by the right image in Figure 6. Figure 11 shows the segmented components of the flashlight. Each component shares fuzzy boundaries with at least one other component. Our approach can overcome the partial volume effects and therefore disassemble these small components successfully.

Figure 9: Left: the head tumor; Right: the Maglight.

Surface rendering of the tumor, and a high-resolution slice.

The damaged brain tissue volume and a high-resolution slice. Volume rendering of the tumor, damaged tissue and the rest of the brain.

Figure 10: The tumor and damaged brain tissue in the MRI head data.

In our experiments, the feature graph can be obtained interactively in seconds to several minutes. This time is comparable to the parameter selection required by other segmentation methods such as region growing. Table 1 compares the performance of the feature growing plus the mapping with the performance of region growing applied directly to the segmented feature volume obtained with our method, in order to remove the effects of criterion selection. The timing numbers of the feature growing are always sub-second, so the process is highly interactive. The numbers also show that our method is faster than the region growing method by a factor of 3.14 for the damaged tissue, 3.97 for the watch, 8.95 for the tumor, and 50.03 for the egg in the Box. The larger the feature, the more speedup is achieved.

Table 1: Performance results (seconds).

Feature           Region Growing    Graph-based Method
                                    growing    mapping
tumor                   0.2327       0.002      0.024
damaged tissue          2.5658       0.03       0.787
watch                   4.3712       0.048      1.052
egg                   439.3253       0.112      8.659

6. Conclusions

We have presented a new approach to the problem of interactive data segmentation, especially for large volume data. Our approach uses a down-sampled volume to construct a hierarchical graph which supports three levels of detail in order to segment high-resolution features. Partial volume effects caused by down-sampling are removed in two ways: graph editing and feature refinement. Fine features blurred or lost in the down-sampled data are also extracted based on the hierarchical graph and its associated operations. In our approach, volume segmentation becomes interactive operation on linked 2D and 3D displays of the data. Supplemented with hardware-accelerated volume visualization, the segmentation task is made more intuitive and easier than with previous methods. Our test results show that the data partitioning and the resulting graph representation of the data make interactive and editable feature segmentation possible. Compared to methods that use conventional algorithms like region growing, graph partitioning, and fuzzy C-means to segment low-resolution features and then apply mapping and refinement to get high-resolution results, our method lets the user interactively control the segmentation process and edit the feature through an intuitive interface. This capability allows the user to perform more complex segmentation tasks on a PC, as we have demonstrated. There are several promising directions for further research. First, we will study how to automate the growing process to make the approach more efficient. Second, we will investigate how to improve the interactive segmentation interface by even more tightly coupling the graph operations and volume visualization. Finally, we plan to exploit the programmable features of the graphics card to accelerate some of the volume segmentation operations to increase interactivity further.

References

[Boy01] Boykov Y., Jolly M.-P.: Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In Proceedings of the International Conference on Computer Vision (2001), vol. 1, pp. 105–112.

[CHK91] Choi H. S., Haynor D. R., Kim Y.: Partial volume tissue classification of multichannel magnetic resonance images: a mixel model. IEEE Transactions on Medical Imaging 10, 3 (1991), 395–407.

Figure 11: Top left: one part of the Maglight. Others: five components from the part shown in the top left image.

[Dun74] Dunn J. C.: A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. Journal of Cybernetics 3, 3 (1974), 32–57.

[GH61] Gomory R., Hu T.: Multi-terminal network flows. SIAM Journal on Applied Mathematics 9 (1961), 551–570.

[HM03] Huang R., Ma K.-L.: RGVis: Region growing based techniques for volume visualization. In Proceedings of the Pacific Graphics 2003 Conference (2003), pp. 355–363.

[HMMW03] Huang R., Ma K.-L., McCormick P., Ward W.: Visualizing industrial CT volume data for nondestructive testing applications. In Proceedings of the IEEE Visualization 2003 Conference (2003), pp. 547–554.

[KD98] Kindlmann G., Durkin J.: Semi-automatic generation of transfer functions for direct volume rendering. In Proceedings of the 1998 Symposium on Volume Visualization (1998), pp. 79–86.

[LC87] Lorensen W. E., Cline H. E.: Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of SIGGRAPH 87 (July 1987), pp. 163–169.

[LHJ99] LaMar E. C., Hamann B., Joy K. I.: Multiresolution techniques for interactive texture-based volume visualization. In IEEE Visualization '99 (San Francisco, 1999), Ebert D., Gross M., Hamann B. (Eds.), pp. 355–362.

[LLdB00] Loke R., Lam R., du Buf J.: Fast segmentation of sparse 3D data by interpolating segmented boundaries in an octree. In Proc. 11th Portuguese Conf. on Pattern Recognition (2000), pp. 185–189.

[Loh98] Lohmann G.: Volumetric Image Analysis. Wiley & Teubner Press, 1998.

[RK82] Rosenfeld A., Kak A.: Digital Picture Processing, Volume 2, second ed. Academic Press, 1982.

[SHN03] Sherbondy A., Houston M., Napel S.: Fast volume segmentation with simultaneous visualization using programmable graphics hardware. In Proceedings of the IEEE Visualization 2003 Conference (2003), pp. 171–176.

[SM00] Shi J., Malik J.: Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 8 (2000), 888–905.

[TLM03] Tzeng F.-Y., Lum E., Ma K.-L.: A novel interface for higher-dimensional classification functions. In Proceedings of the IEEE Visualization 2003 Conference (2003), pp. 505–512.

[WS88] Wilson R., Spann M.: Image Segmentation and Uncertainty. Research Studies Press Ltd., Letchworth, Hertfordshire, UK, 1988.

[Wu93] Wu Z., Leahy R.: An optimal graph theoretic approach to data clustering: theory and its application to image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 11 (1993), 1101–1113.

[YK01] Yoo T., Kakadiaris I.: Volume segmentation, 2001.
