
Image segmentation is defined as the classification of all the picture elements, or pixels, in an image into different clusters that exhibit similar features.

Color, edges, and texture are used as properties. Applications include object classification, image retrieval, medical image analysis, brain tumor detection, and fingerprinting.

A new unsupervised color image segmentation algorithm, known as the G-SEGmentation (GSEG) algorithm, is proposed. This algorithm exploits the information obtained by detecting edges in color images in the CIE L*a*b* color space. Pixels without edges are clustered and labeled individually using a color gradient detection technique.
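As a rough illustration of this step, the sketch below converts an RGB image to CIE L*a*b* with scikit-image and combines per-channel Sobel responses into a single gradient-magnitude map. The Sobel operator and the file name are assumptions for illustration, not necessarily the exact operator used by GSEG.

```python
# Minimal sketch: color gradient detection in CIE L*a*b* space.
import numpy as np
from skimage import io, color, filters

def lab_gradient_map(rgb):
    """Return one gradient-magnitude map combining the L*, a*, b* channels."""
    lab = color.rgb2lab(rgb)                        # CIE L*a*b* conversion
    grads = [filters.sobel(lab[..., c]) for c in range(3)]
    return np.sqrt(sum(g ** 2 for g in grads))      # combine channel gradients

rgb = io.imread('input.jpg')                        # hypothetical input path
gradient = lab_gradient_map(rgb)
```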

Texture modeling is performed by color quantization followed by computation of the local entropy of the quantized image. The obtained color and texture information, along with a region-growth map consisting of all fully grown regions, is used to perform a unique multiresolution merging procedure that blends regions with similar characteristics.

The detected areas with no edges inside them are the initial clusters, or seeds, selected to initiate the segmentation of the image. The pixels that compose each detected region receive a label, and the combination of pixels with the same label is referred to as a seed. These seeds grow into the areas of higher edge density, and additional seeds are created to generate an initial segmentation map.
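A minimal sketch of this seed-selection idea, assuming that "no edges" means gradient magnitude below a small threshold and that tiny components are discarded; the threshold and minimum size below are illustrative values, not taken from the paper.

```python
# Sketch: label connected low-gradient regions as initial seeds.
import numpy as np
from scipy import ndimage

def initial_seeds(gradient, low_thresh=1.0, min_size=64):
    """Label connected edge-free regions; small fragments are discarded."""
    flat = gradient < low_thresh                    # edge-free pixels
    labels, n = ndimage.label(flat)                 # connected components
    sizes = ndimage.sum(flat, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1    # labels of large regions
    return np.where(np.isin(labels, keep), labels, 0)   # 0 marks unlabeled pixels

seeds = initial_seeds(gradient)
```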

The GSEG algorithm:

1. Selects clusters for images using gradient information in the CIE L*a*b* color space.

2. Characterizes the texture present in each cluster.

3. Generates a final segmentation map by utilizing an effective merging approach.

The framework consists of three different modules, as shown in Fig. 1. The first module implements an edge-detection algorithm to produce an edge map used in the generation of adaptive gradient thresholds, which in turn dynamically select regions of contiguous pixels that display similar gradient and color values, producing an initial segmentation map.

The second module creates a texture characterization channel by first quantizing the input image and then applying entropy-based filtering to the quantized colors. The last module utilizes the initial segmentation map and the texture channel to obtain the final segmentation map.

Here, regions where the gradient map displays no edges are identified. The selected regions form the initial set of seeds used to segment the image. The region-growth procedure also accounts for regions that display similar edge values throughout, by detecting unattached regions at various edge-density levels.
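The following sketch illustrates one possible reading of this dynamic growth, not the exact GSEG procedure: seeds are expanded level by level as the gradient threshold is raised, and unattached low-gradient components at each level become new seeds. The threshold levels are assumptions.

```python
# Sketch: dynamic region growth over increasing gradient thresholds.
import numpy as np
from scipy import ndimage

def grow_regions(gradient, seeds, levels=(2, 5, 10, 20, 40)):
    labels = seeds.copy()
    for t in levels:
        admissible = (gradient < t) & (labels == 0)     # pixels eligible at this level
        while True:                                     # attach eligible pixels to adjacent regions
            grown = ndimage.grey_dilation(labels, size=3)
            newly = admissible & (labels == 0) & (grown > 0)
            if not newly.any():
                break
            labels[newly] = grown[newly]
        leftover, n = ndimage.label(admissible & (labels == 0))
        labels[leftover > 0] = leftover[leftover > 0] + labels.max()   # unattached components become new seeds
    return labels

labels = grow_regions(gradient, seeds)
```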

Textured patterns are composed of multiple shades of color, causing over-segmentation and misinterpretation of the edges surrounding these regions. Texture regions may contain regular patterns, such as a brick wall, or irregular patterns, such as leopard skin, bushes, and many objects found in nature. One method for obtaining information about the patterns within an image is to evaluate the randomness present in various areas of that image. Entropy provides a measure of the uncertainty of a random variable.
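A rough sketch of such a texture channel: colors are quantized to a small palette, and the local entropy of the resulting index image is computed over a small window. The uniform quantizer, the radius-4 disk, and the assumption of an 8-bit RGB input are illustrative choices, not the paper's exact ones.

```python
# Sketch: texture channel from color quantization and local entropy.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def texture_channel(rgb, levels_per_channel=4, radius=4):
    q = (rgb // (256 // levels_per_channel)).astype(np.uint8)   # uniform quantization per channel
    index = (q[..., 0] * levels_per_channel ** 2
             + q[..., 1] * levels_per_channel
             + q[..., 2])                                       # one palette index per pixel
    return entropy(index.astype(np.uint8), disk(radius))        # local Shannon entropy

texture = texture_channel(rgb)
```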

Here, an effective method for analyzing grouped data, borrowed from the field of statistics, is incorporated to merge all over-segmented regions. This method, better known as multivariate analysis, allows regions that have been separated due to occlusion or small texture differences to be merged together.
The core of multivariate analysis lies in highlighting the differences between groups that display multiple variables, in order to investigate the possibility that multiple groups are associated with a single factor.

Using a multivariate analysis of all independent regions, the resulting distances between groups are used to merge similar regions. Since the image has been segmented into distinct groups, information can be gathered from each individual region.

To avoid re-evaluating the distances between groups after each stage of the region-merging procedure, an alternative approach is introduced. Given the distances between groups, the smallest distance value is found, corresponding to a single pair of groups.
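As an illustration, the sketch below summarizes each region by its mean L*, a*, b* and mean texture value, computes all pairwise distances, and reports the closest pair. The Euclidean distance stands in for the multivariate distance used by the algorithm, and the inputs are assumed to come from the earlier sketches.

```python
# Sketch: per-region feature vectors and pairwise distances between groups.
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist, squareform
from skimage import color

def region_distances(labels, lab, texture):
    """Pairwise distances between region features (mean L*, a*, b*, texture)."""
    ids = np.unique(labels[labels > 0])
    feats = np.column_stack(
        [ndimage.mean(lab[..., c], labels, ids) for c in range(3)]
        + [ndimage.mean(texture, labels, ids)]
    )
    dist = squareform(pdist(feats))                 # Euclidean stand-in for the multivariate distance
    np.fill_diagonal(dist, np.inf)                  # ignore self-distances
    return dist, ids

dist, ids = region_distances(labels, color.rgb2lab(rgb), texture)
i, j = np.unravel_index(np.argmin(dist), dist.shape)
closest_pair = (ids[i], ids[j])                     # the single pair with the smallest distance
```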

The similarity value is increased until a larger set of group pairs is obtained. The pair with the smallest distance in this set is merged first, followed by the next larger pair, and so on. After the first merge, a check is performed to determine whether one of the groups being merged is already part of a larger group.

In that case, the merge is performed only if all pair combinations of the groups involved belong to the set of pairs selected initially for merging. Once all the pairs in the set have been processed, the distances are recomputed for the new segmentation map and the process is repeated.
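A hedged sketch of this batched merging pass, reusing the distance matrix and region ids from the previous sketch: all pairs within a relaxed multiple of the smallest distance are merged in a single pass via a small union-find, after which the distances would be recomputed. The relaxation factor is an assumption.

```python
# Sketch: merge all group pairs within a relaxed distance threshold in one pass.
import numpy as np

def merge_pass(labels, dist, ids, relax=1.5):
    """Merge every region pair whose distance is within `relax` times the minimum."""
    threshold = dist.min() * relax                  # smallest distance, slightly relaxed
    parent = {int(r): int(r) for r in ids}          # each region starts as its own group

    def find(r):                                    # follow previous merges to the group root
        while parent[r] != r:
            r = parent[r]
        return r

    for i, j in np.argwhere(dist <= threshold):
        if i < j:                                   # handle each unordered pair once
            parent[find(int(ids[j]))] = find(int(ids[i]))

    merged = labels.copy()
    for r in ids:
        merged[labels == r] = find(int(r))          # relabel pixels with their group root
    return merged

labels = merge_pass(labels, dist, ids)              # distances are then recomputed and the pass repeated
```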

In problems such as segmentation, multiresolution analysis offers two key advantages over pixel-based methods: 1. It provides a way to trade off class resolution against spatial resolution; repeatedly blurring and subsampling the image decreases the noise and improves the class certainty, but at the expense of spatial resolution.

2. The use of a multiresolution technique ensures both robustness to noise and computational efficiency.
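A minimal sketch of the multiresolution idea: each level of a Gaussian pyramid is obtained by blurring the previous level and dropping every other row and column, trading spatial resolution for reduced noise. The smoothing scale and number of levels are illustrative, and the pyramid is built here on a single 2-D channel such as the gradient map from the earlier sketch.

```python
# Sketch: Gaussian pyramid by repeated blurring and subsampling.
from scipy import ndimage

def gaussian_pyramid(channel, levels=3, sigma=1.0):
    """Return progressively blurred and subsampled versions of `channel`."""
    pyramid = [channel.astype(float)]
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(pyramid[-1], sigma)   # suppress noise
        pyramid.append(blurred[::2, ::2])                       # halve the spatial resolution
    return pyramid

coarse_to_fine = gaussian_pyramid(gradient)
```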

The result of the GSEG algorithm is shown in the figures below.

The input RGB image and its CIE L*a*b* counterpart are shown in Fig. 2(a) and (b), respectively. The outcome of the gradient computation on the color-converted input image is shown in Fig. 2(c). The seed map at the end of the region-growth procedure, obtained using adaptively generated thresholds, is displayed in Fig. 2(d).

The texture channel generated using color quantization and local entropy calculation is depicted in Fig. 2(e). The segmentation map at the end of the region-merging algorithm is shown in Fig. 2(f).

The Face image in Fig. 3(a) represents a moderately complex image with dissimilar texture content associated with the skin, hat, and robe of the person. Observe in Fig. 3(b) and 3(c) that the GRF and JSEG algorithms over-segment this image due to the texture and illumination disparity seen in various regions.

The texture model is effective in handling different textures, as seen in Fig. 3(d). The algorithm employs the CIE L*a*b* color space, in which the L* channel contains the luminance information of the image, which overcomes the illumination problem.

The GSEG algorithm is primarily based on color-edge detection and dynamic region growth, and culminates in a unique multiresolution region-merging procedure. The algorithm is robust to various image scenarios, and its results are superior to those obtained when the same images are segmented by other methods.
