
DETECTION OF BRAIN TUMOR USING REGION BASED IMAGE FUSION

The aim of this project is to extract all the information needed for an accurate diagnosis of the brain tumor scientifically named “Ewing Sarcoma”. The main principle behind this detection is Region Based Image Fusion. This image fusion algorithm is applied to Magnetic Resonance (MR) scan images of the human brain.

The reason for turning to image fusion is that, in medical image processing, different imaging sources produce complementary information, so one has to fuse all the source images to obtain the detail required for diagnosing the patient. In this method the raw data are MR scan images of a patient’s brain observed at different angles or resolutions. The images carry both distinct and common information with respect to each other. When these images are fused, the redundant information is discarded and the complementary information is combined, producing a single image that supports an accurate diagnosis. A detailed description is given in the following paragraphs.

In general, medical imaging involves two different modalities. The first is high-resolution imaging, used to study the external structure of the brain. The second is low-resolution imaging, used to study the internal or underlying structure of the brain. These multi-modality images are fused into one image so that the resultant image describes the brain better than any individual input image. There are many methods for image fusion; the two well-known techniques are Pixel Based image fusion and Region Based image fusion.

In the Pixel Based technique, the images are processed one pixel at a time. The first proposed method was to convert the image pixels into a Laplacian pyramid. The basic idea is to perform a Multi-Scale Transform (MST) on each image, construct a composite multiscale representation from these, and finally apply the inverse MST to obtain the fused image. This algorithm has a disadvantage: as the depth of the pyramid increases, part of the fused image may be lost. Since even a small part can contain very important information, another method was needed to overcome this disadvantage.
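The pyramid idea above can be sketched in a minimal way. The project itself uses MATLAB; the following is an illustrative pure-Python sketch in one dimension, with pair-averaging in place of the Gaussian smoothing a real 2-D pyramid would use — the helper names are our own, not from the project.

```python
# Minimal 1-D Laplacian pyramid sketch (illustrative only; real image
# fusion uses 2-D pyramids with Gaussian smoothing before downsampling).

def downsample(signal):
    # Halve resolution by averaging adjacent pairs.
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def upsample(signal, length):
    # Restore resolution by repeating each sample.
    out = []
    for s in signal:
        out.extend([s, s])
    return out[:length]

def laplacian_pyramid(signal, levels):
    pyramid = []
    current = signal
    for _ in range(levels):
        coarse = downsample(current)
        predicted = upsample(coarse, len(current))
        # Laplacian level = detail lost when moving to the coarser scale.
        pyramid.append([a - b for a, b in zip(current, predicted)])
        current = coarse
    pyramid.append(current)  # coarsest approximation
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        predicted = upsample(current, len(detail))
        current = [d + p for d, p in zip(detail, predicted)]
    return current

signal = [1.0, 3.0, 2.0, 6.0, 5.0, 7.0, 4.0, 0.0]
pyr = laplacian_pyramid(signal, 2)
print(reconstruct(pyr))  # recovers the original signal exactly
```

With exact arithmetic the pyramid is perfectly invertible; the information loss mentioned above arises in practice when deep pyramid levels are quantised or when fusion discards coefficients.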

To overcome the above problem, Region Based image fusion was introduced. In this method the images are not considered as individual pixels but are treated as regions of similar pixels grouped together; an image may contain several such regions. This principle makes the method more efficient than the Pixel Based technique, since it avoids loss of information and provides faster, more efficient processing.

The block diagram guiding the implementation of this project is as follows:


IMG A, IMG B → Filter banks → Otsu segmentation → Activity level measurement → Decision map (fusion rule) → Inverse transform → Fused image

The filter bank is an array of band-pass filters that separates the input signal into multiple components, each carrying a single frequency sub-band of the original signal. Decomposing the signal into multiple components is useful because it moves the processing into the frequency domain, which has its own advantages over time-domain processing. There are many types of filter banks; in this project we adopt the Wavelet Transform. Wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions with discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.
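A single decomposition level can be sketched with the simplest wavelet, the Haar. This is an illustrative pure-Python sketch (the project uses MATLAB), using an unnormalised average/half-difference Haar variant on a toy image with even dimensions; the band names LL/LH/HL/HH follow the usual approximation/detail convention.

```python
# Single-level 2-D Haar wavelet transform sketch: splits an image into an
# approximation band (LL) and three detail bands (LH, HL, HH).

def haar_step_1d(row):
    # Averages = low-pass output, half-differences = high-pass output.
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    diff = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg, diff

def haar2d(image):
    # Transform every row, then transform the columns of each half.
    low_rows, high_rows = [], []
    for row in image:
        a, d = haar_step_1d(row)
        low_rows.append(a)
        high_rows.append(d)

    def column_step(rows):
        cols = list(zip(*rows))
        avg_cols, diff_cols = [], []
        for col in cols:
            a, d = haar_step_1d(list(col))
            avg_cols.append(a)
            diff_cols.append(d)
        return ([list(r) for r in zip(*avg_cols)],
                [list(r) for r in zip(*diff_cols)])

    LL, LH = column_step(low_rows)
    HL, HH = column_step(high_rows)
    return LL, LH, HL, HH

image = [[4.0, 2.0], [2.0, 0.0]]
LL, LH, HL, HH = haar2d(image)
print(LL)  # approximation band: [[2.0]]
```

The LL band is the "approximated image" fed to segmentation below; the LH, HL and HH bands carry the detail coefficients used later for activity measurement.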
The two images are given as input to these filter banks, transformed to the required format, and fed to the Otsu segmentation block. This block is responsible for separating the images into the regions required for processing. The reason we choose Otsu segmentation over the many other methods available is that it provides unsupervised segmentation and ensures maximum separability of the gray levels of the image. The output from the filter banks is an approximated image, and Otsu segmentation is applied to this image because it contains the most object information.
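Otsu's method picks the threshold that maximises the between-class variance of the gray-level histogram, which is what "maximum separability" refers to. A minimal pure-Python sketch (the project uses MATLAB; the toy pixel values are ours):

```python
# Otsu's method sketch: choose the gray-level threshold that maximises
# between-class variance, i.e. the separability of the two classes.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)

    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # class 0 (background) pixel count
        w1 = total - w0             # class 1 (object) pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy data: dark cluster around 10, bright cluster around 200.
pixels = [8, 10, 12, 9, 11, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
print(t)  # threshold falls between the two clusters
```

Because the method uses only the histogram, it needs no training data, which is why it qualifies as unsupervised.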
The next step is activity level measurement: the study of the useful information in an image by splitting the image into small regions, each denoted by R. The activity level is calculated for each region and the results are passed to the next step.
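A common choice of activity measure is the energy of the detail coefficients inside each region; the sketch below uses the mean squared detail coefficient, a plausible but assumed definition, since the report does not spell out the formula. Toy data, pure Python (the project uses MATLAB).

```python
# Activity-level sketch: for each segmented region, measure the energy of
# the detail (high-frequency) coefficients falling inside that region.

def activity_level(detail, region_mask):
    # Mean squared detail coefficient over the pixels of one region.
    values = [detail[r][c] ** 2
              for r in range(len(detail))
              for c in range(len(detail[0]))
              if region_mask[r][c]]
    return sum(values) / len(values) if values else 0.0

detail = [[0.1, 2.0],
          [0.2, 1.5]]
region = [[False, True],
          [False, True]]   # the right column forms one region

print(activity_level(detail, region))  # (2.0**2 + 1.5**2) / 2 = 3.125
```

Regions whose detail coefficients have high energy contain edges and texture, so a higher activity level marks the region as more informative.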

The next phase in this project is Decision Mapping. This step is based on the activity level of the split regions: depending on the value obtained, each region is labelled a white region or a black region, or in simple words ‘image’ or ‘background’. The resulting image is then sent to the Inverse Transform, the final step of the whole process, in which the image is reconstructed to its original form containing only the desired components, free of the noise present earlier. The fusion rule used here is a set of principles that defines how each pair of corresponding channels is fused for each band.
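In essence the decision map records, for every region, which source image should supply that region's coefficients. A minimal pure-Python sketch (toy activity values, label names of our own choosing; the project uses MATLAB):

```python
# Decision-map sketch: for each region, compare the activity levels from
# the two source images and record which image "wins" that region.

def decision_map(activities_a, activities_b):
    # activities_*: dict mapping region label -> activity level.
    return {label: 'A' if activities_a[label] >= activities_b[label] else 'B'
            for label in activities_a}

acts_a = {'region1': 3.1, 'region2': 0.4}
acts_b = {'region1': 1.2, 'region2': 2.7}
print(decision_map(acts_a, acts_b))  # {'region1': 'A', 'region2': 'B'}
```

The map is then used to assemble the fused coefficient set before the inverse transform.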
A short step-by-step description of the process is given below:

 Two images A and B are considered and the Wavelet transform of each image is taken.
 A threshold level is selected to apply Otsu Segmentation on the approximation coefficients, giving the segmented images IsegA and IsegB.
 For the regions extracted from the segmented images, the corresponding regions of the detail coefficients are taken into consideration to find the activity.
 The regions with maximum activity are selected from the corresponding coefficients and the decision map is constructed.
 The Max or Average value is chosen from the available data.
 All selected coefficients of the fused image are given to the inverse wavelet transform, which yields the final fused image produced by the region based algorithm.
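The "Max or Average" step above can be sketched as a coefficient-level fusion rule: "Max" keeps the coefficient with the larger magnitude, "Average" blends the two. A pure-Python sketch on toy coefficient lists (the project uses MATLAB; values chosen to be exact in binary floating point):

```python
# Fusion-rule sketch for the coefficient selection step: "max" keeps the
# coefficient with the larger absolute value, "average" blends the two.

def fuse_coefficients(coeffs_a, coeffs_b, rule='max'):
    fused = []
    for a, b in zip(coeffs_a, coeffs_b):
        if rule == 'max':
            fused.append(a if abs(a) >= abs(b) else b)
        else:  # 'average'
            fused.append((a + b) / 2)
    return fused

ca = [0.5, -0.25, 0.75]
cb = [0.25, -0.75, 0.75]
print(fuse_coefficients(ca, cb, 'max'))      # [0.5, -0.75, 0.75]
print(fuse_coefficients(ca, cb, 'average'))  # [0.375, -0.5, 0.75]
```

The max rule is typically applied to detail coefficients (to keep the sharpest edges), while averaging is a common choice for the approximation band.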

The final stage of this project is Simulation and Analysis. We use MATLAB code for this application: the images are processed by the scripts and the final fused image is obtained as the simulated result of the processed images.
