
Texture Analysis Based Segmentation and Classification of Oral Cancer Lesions in Color Images

TABLE OF CONTENTS

CHAPTER NO.    TITLE

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS

CHAPTER 1 : INTRODUCTION
1.1 GENERAL
1.1.1 THE IMAGE PROCESSING SYSTEM
1.1.2 IMAGE PROCESSING FUNDAMENTAL
1.2 OBJECTIVE
1.3 EXISTING SYSTEM
1.3.1 EXISTING SYSTEM DISADVANTAGES
1.3.2 LITERATURE SURVEY
1.4 PROPOSED SYSTEM
1.4.1 PROPOSED SYSTEM ADVANTAGES
CHAPTER 2 : PROJECT DESCRIPTION
2.1 INTRODUCTION
2.1.1 RGB COLOR IMAGE
2.1.2 GRAYSCALE
2.1.3 MORPHOLOGICAL OPERATIONS
2.1.4 BORDER CORRECTED MASK
2.1.5 SEGMENTATION
2.1.6 CONNECTED COMPONENT ANALYSIS (CCA)
2.2 APPLICATIONS
2.3 MODULE EXPLANATION
2.3.1 MODULE NAMES
2.3.2 MODULE DESCRIPTIONS
2.3.3 TECHNIQUE
CHAPTER 3 : SOFTWARE SPECIFICATION
3.1 GENERAL

3.2 FEATURES OF MATLAB


3.2.1 INTERFACING WITH OTHER LANGUAGES
3.2.2 ANALYZING AND ACCESSING DATA
3.2.3 PERFORMING NUMERIC COMPUTATION
CHAPTER 4 : IMPLEMENTATION
4.1 GENERAL
4.2 IMPLEMENTATION CODING
4.3 SNAPSHOTS
CHAPTER 5 : CONCLUSION & REFERENCES
5.1 CONCLUSION
5.2 REFERENCES
LIST OF FIGURES

FIGURE NO.    NAME OF THE FIGURE

1. BLOCK DIAGRAM FOR IMAGE PROCESSING SYSTEM

2. BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE


INVOLVED IN AN IMAGE PROCESSING SYSTEM

3. IMAGE PROCESSING TECHNIQUES

4. BLOCK DIAGRAM FOR PROPOSED SYSTEM


LIST OF ABBREVIATIONS

CHAPTER 1

INTRODUCTION
1.1 GENERAL

The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph, or X-ray is first digitized and stored as a matrix of binary digits in computer memory. The digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory that refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.

1.1.1 THE IMAGE PROCESSING SYSTEM

(Block components: Digitizer, Image Processor, Digital Computer, Mass Storage, Hard Copy Device, Operator Console, Display)

FIG 1.1 BLOCK DIAGRAM FOR IMAGE PROCESSING SYSTEM

DIGITIZER:
A digitizer converts an image into a numerical representation suitable for input into a
digital computer. Some common digitizers are

1. Microdensitometer
2. Flying spot scanner
3. Image dissector
4. Vidicon camera
5. Photosensitive solid-state arrays.

IMAGE PROCESSOR:

An image processor performs the functions of image acquisition, storage, preprocessing, segmentation, representation, recognition, and interpretation, and finally displays or records the resulting image. The following block diagram gives the fundamental sequence involved in an image processing system.

(Sequence: Problem Domain → Image Acquisition → Preprocessing → Segmentation → Representation & Description → Recognition & Interpretation → Result, with the Knowledge Base guiding each stage)

FIG 1.2 BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE PROCESSING SYSTEM

As detailed in the diagram, the first step in the process is image acquisition by an imaging
sensor in conjunction with a digitizer to digitize the image. The next step is the preprocessing step
where the image is improved before being fed as input to the other processes. Preprocessing typically
deals with enhancing, removing noise, isolating regions, etc. Segmentation partitions an image
into its constituent parts or objects. The output of segmentation is usually raw pixel data, which
consists of either the boundary of the region or the pixels in the region themselves.
Representation is the process of transforming the raw pixel data into a form useful for
subsequent processing by the computer. Description deals with extracting features that are basic
in differentiating one class of objects from another. Recognition assigns a label to an object
based on the information provided by its descriptors. Interpretation involves assigning meaning
to an ensemble of recognized objects. The knowledge about a problem domain is incorporated
into the knowledge base. The knowledge base guides the operation of each processing module
and also controls the interaction between the modules. Not all modules need necessarily be
present for a specific function. The composition of the image processing system depends on its
application. The frame rate of the image processor is normally around 25 frames per second.

DIGITAL COMPUTER:

Mathematical processing of the digitized image, such as convolution, averaging, addition, and subtraction, is done by the computer.

MASS STORAGE:

The secondary storage devices normally used are floppy disks, CD-ROMs, etc.

HARD COPY DEVICE:

The hard copy device is used to produce a permanent copy of the image and for the
storage of the software involved.

OPERATOR CONSOLE:
The operator console consists of equipment and arrangements for verification of intermediate results and for alterations in the software as and when required. The operator can also check for any resulting errors and enter the requisite data.

1.1.2 IMAGE PROCESSING FUNDAMENTAL:

Digital image processing refers to the processing of an image in digital form. Modern cameras may directly capture the image in digital form, but generally images originate in optical form. They are captured by video cameras and digitized. The digitization process includes sampling and quantization. These images are then processed by at least one of the five fundamental processes, though not necessarily all of them.

IMAGE PROCESSING TECHNIQUES:

This section gives various image processing techniques.

(Image processing techniques: Image Enhancement, Image Restoration, Image Analysis, Image Compression, Image Synthesis)

FIG 1.3: IMAGE PROCESSING TECHNIQUES

IMAGE ENHANCEMENT:
Image enhancement operations improve the qualities of an image, such as improving the image's contrast and brightness characteristics, reducing its noise content, or sharpening its details. Enhancement only presents the same information in a more understandable form; it does not add any information to the image.

IMAGE RESTORATION:

Image restoration, like enhancement, improves the qualities of an image, but all the operations are based on known, measured, or estimated degradations of the original image. Image restoration is used to restore images with problems such as geometric distortion, improper focus, repetitive noise, and camera motion, and to correct images for known degradations.

IMAGE ANALYSIS:

Image analysis operations produce numerical or graphical information based on characteristics of the original image. They break the image into objects and then classify those objects, relying on image statistics. Common operations are extraction and description of scene and image features, automated measurements, and object classification. Image analysis is mainly used in machine vision applications.

IMAGE COMPRESSION:

Image compression and decompression reduce the data content necessary to describe the image. Most images contain a lot of redundant information, and compression removes these redundancies. Because the size is reduced, the image can be stored or transmitted efficiently. The compressed image is decompressed when displayed. Lossless compression preserves the exact data of the original image, whereas lossy compression does not exactly represent the original image but provides a much higher compression ratio.

IMAGE SYNTHESIS:
Image synthesis operations create images from other images or non-image data. Image
synthesis operations generally create images that are either physically impossible or impractical
to acquire.

APPLICATIONS OF DIGITAL IMAGE PROCESSING:

Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar, and acoustic image processing, robotics, and automated inspection of industrial parts.

MEDICAL APPLICATIONS:

In medical applications, one is concerned with the processing of chest X-rays, cineangiograms, projection images of transaxial tomography, and other medical images that occur in radiology, nuclear magnetic resonance (NMR), and ultrasonic scanning. These images may be used for patient screening and monitoring or for the detection of tumors or other diseases.

SATELLITE IMAGING:

Images acquired by satellites are useful in tracking earth resources; geographical mapping; prediction of agricultural crops, urban growth, and weather; flood and fire control; and many other environmental applications. Space image applications include the recognition and analysis of objects contained in images obtained from deep space-probe missions.

COMMUNICATION:

Image transmission and storage applications occur in broadcast television, teleconferencing, transmission of facsimile images for office automation, communication over computer networks, closed-circuit television based security monitoring systems, and military communications.

RADAR IMAGING SYSTEMS:

Radar and sonar images are used for detection and recognition of various types of targets
or in guidance and maneuvering of aircraft or missile systems.

DOCUMENT PROCESSING:

It is used in scanning and transmission for converting paper documents to digital image form, compressing the image, and storing it on magnetic tape. It is also used in document reading for automatically detecting and recognizing printed characters.

DEFENSE/INTELLIGENCE:

It is used in reconnaissance photo-interpretation for automatic interpretation of earth satellite imagery to look for sensitive targets or military threats, and in target acquisition and guidance for recognizing and tracking targets in real-time smart-bomb and missile-guidance systems.

1.2 OBJECTIVE

The main objective of our project is to increase detection performance at a lower, or at least comparable, computational cost compared with existing approaches. The proposed hierarchical INC detection approach is fast, adaptive, and fully automatic. The presented CADe system should yield comparable detection accuracy and greater computational efficiency than existing systems, making it suitable for clinical use. The low-level VQ provides adequate detection power for non-GGO nodules and is computationally more efficient than state-of-the-art approaches.
1.3 EXISTING SYSTEM
Pulmonary nodules (lung nodules) are masses of soft tissue located in the lungs that can be diagnosed using radiography techniques. Lung nodules do not cause any symptoms until they become malignant. Malignant nodules are most often caused by lung cancer, but can also be caused by cancer elsewhere in the body; for instance, breast cancer and colon cancer often spread to the lungs. A person with symptoms is first given a chest X-ray; if there are any abnormalities, they are further investigated using MRI imaging. Lung nodules can be detected efficiently with MRI imaging techniques, but since MRI is expensive, people from a low economic background may not be able to afford it. The main objective is therefore to develop a technique so that lung nodules can be detected using X-ray imaging at an early stage. A multiresolution massive training artificial neural network (MTANN) is an image processing technique used for suppressing the contrast of ribs and clavicles. The purpose is to develop a CADe scheme with improved sensitivity and specificity by use of Virtual Dual Energy (VDE) chest radiographs, in which the ribs and clavicles of the chest radiographs (X-ray images) are suppressed with the MTANN.

1.3.1 DISADVANTAGES OF EXISTING SYSTEM


 The existing methods are not fast and adaptive.

 The ribs may cause unwanted errors in the detection of pulmonary nodules.

 The processing time of an X-ray image is longer, which delays the identification of pulmonary nodules.

 Accuracy as well as efficiency is low.

1.3.2 LITERATURE SURVEY:

1. Cancer statistics, by R. Siegel, D. Naishadham, and A. Jemal


Maintaining a statewide cancer registry that meets both National Program of Cancer
Registries and Centers for Disease Control and Prevention (CDC) high quality data standards
and North American Association of Central Cancer Registries (NAACCR) gold certification
is accomplished through collaborative funding efforts.

2. Early lung cancer action project: Overall design and findings from baseline screening
by C. I. Henschke, D. I. McCauley, D. F. Yankelevitz, D. P. Naidich, G. McGuinness, O.
S. Miettinen, D. M. Libby, M. W. Pasmantier, J. Koizumi, N. K. Altorki, and J. P.
Smith
The choice of treatment for patients with cancers diagnosed as a result of screening is selected by the treating physician in conjunction with the participant. However, each
participating institution must be committed to document, for each diagnosed case of lung
cancer, the timing and nature of the intervention(s) (if any) and also the prospective course in
respect to manifestations of metastases. The development and refinement of the screening
protocol has been a concern of the ELCAP (Early Lung Cancer Action Program) Group for
more than two decades, and it has been updated in the framework of the International
Conferences organized by this Group and in the resultant international consortium on
screening for lung cancer, I-ELCAP.

3. Lung Imaging and Computer Aided Diagnosis by A. El-Baz and J. Suri


An image-based CAD system for early detection of prostate cancer using
DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy
among men and remains the second leading cause of cancer-related death in the USA with
more than 238,000 new cases and a mortality rate of about 30,000 in 2013. Therefore, early
diagnosis of prostate cancer can improve the effectiveness of treatment and increase the
patient’s chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of
prostate cancer. However, it is an invasive procedure with high costs and potential morbidity
rates. Additionally, it has a higher possibility of producing false positive diagnosis due to
relatively small needle biopsy samples.

4. Guidelines for management of small pulmonary nodules detected on CT scans: A statement from the Fleischner society by H. MacMahon, J. H. M. Austin, G. Gamsu, C. J. Herold, J. R. Jett, D. P. Naidich, E. F. Patz, and S. J. Swensen
There is no clear consensus regarding the definition of a pulmonary nodule.
Yet, “nodule” is one of the most common words found in chest CT reports. A
committee of the Fleischner Society on CT nomenclature defined a pulmonary
nodule as “a round opacity, at least moderately well marginated and no greater
than 3 cm in maximum diameter”

5. Detection of Pulmonary Nodules Using MTANN in Chest Radiographs by Preetha.J and G. Jayandhi
The computer-aided scheme for pulmonary nodule detection in chest radiographs aims to detect pulmonary nodules (lung nodules) that are overlapped by ribs and clavicles and to minimize the false positive results caused by the ribs. Computed tomography can be used to detect lung nodules, but X-rays are preferred due to their low cost and low radiation dose. However, X-rays do not detect lung nodules effectively, because ribs and clavicles obscure the nodules and produce false positive results for the radiologists. The purpose is to develop a CADe scheme with improved sensitivity and specificity by use of Virtual Dual Energy (VDE) chest radiographs, in which the ribs and clavicles of the chest radiographs (X-ray images) are suppressed with the MTANN.
1.4 PROPOSED METHOD
Rule-Based Filtering Operations:

Feature-Based SVM/ANN Classification:

OVERALL DIAGRAM:

(Pipeline blocks: Image, Simple thresholding, Morphological closing, Connected component analysis, Low-level VQ, High-level VQ, INC)
FIG 1.4: BLOCK DIAGRAM OF PROPOSED SYSTEM
CHAPTER 2

PROJECT DESCRIPTION

2.1 INTRODUCTION

[have to put introduction to project here]


2.1.1 RGB COLOR IMAGE:

The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue.

The main purpose of the RGB color model is for the sensing, representation, and display
of images in electronic systems, such as televisions and computers, though it has also been used
in conventional photography. Before the electronic age, the RGB color model already had a solid
theory behind it, based in human perception of colors.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.

Typical RGB input devices are color TV and video cameras, image scanners, and digital
cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma,
etc.), computer and mobile phone displays, video projectors, multicolor LED displays, and large
screens such as JumboTron. Color printers, on the other hand, are not RGB devices,
but subtractive color devices (typically CMYK color model).
Example of RGB color satellite image is given below

2.1.2 GRAYSCALE:

In photography and computing, a grayscale or greyscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest.

Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the
context of computer imaging are images with only the two colors, black, and white (also
called bilevel or binary images). Grayscale images have many shades of gray in between.
Grayscale images are also called monochromatic, denoting the presence of only one (mono)
color (chrome).

Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet), and in such cases they are monochromatic proper when only a given frequency is captured. They can also be synthesized from a full color image; see the section on converting to grayscale.

Example of grayscale image is given below
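Converting a full color image to grayscale amounts to taking a weighted sum of the R, G, and B channels. The sketch below uses the common ITU-R BT.601 luma weights (0.299, 0.587, 0.114); the nested-list image representation and the function name are illustrative choices, not part of the project code.

```python
# Convert an RGB image to grayscale with the ITU-R BT.601 luma weights.
# Green contributes most to perceived brightness, blue the least.

def rgb_to_gray(image):
    """image: list of rows, each row a list of (r, g, b) tuples in 0-255."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# A 1x3 image: pure red, pure green, pure blue.
img = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)]]
gray = rgb_to_gray(img)
```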


2.1.3 MORPHOLOGICAL OPERATIONS:
To find the exact features we have to segment the lung region from the chest CT scan
image for easy computation. For segmenting the lung region from the chest CT scan image
morphological operation is carried out.

We defined an image as an (amplitude) function of two real (coordinate) variables a(x,y) or two discrete variables a[m,n]. An alternative definition of an image can be based on the notion that an image consists of a set (or collection) of either continuous or discrete coordinates. In a sense the set corresponds to the points or pixels that belong to the objects in the image. This is illustrated in the figure below, which contains two objects, or sets, A and B. Note that a coordinate system is required. For the moment we will consider the pixel values to be binary, and we shall restrict our discussion to discrete space (Z2). More general discussions can be found in the literature.

A binary image containing two object sets A and B.

The object A consists of those pixels a that share some common property:

Object - A = { a | property(a) == TRUE }

As an example, object B consists of {[0,0], [1,0], [0,1]}.

The background of A is given by Ac (the complement of A), which is defined as those elements that are not in A:

Background - Ac = { a | a ∉ A }

We introduced the concept of neighborhood connectivity. We now observe that if an object A is defined on the basis of C-connectivity (C = 4, 6, or 8) then the background Ac has a connectivity given by 12 - C. The necessity for this is illustrated for the Cartesian grid in the figure below.

A binary image requiring careful definition of object and background connectivity.

FUNDAMENTAL DEFINITIONS

The fundamental operations associated with an object are the standard set operations union, intersection, and complement {∪, ∩, c} plus translation:

* Translation - Given a vector x and a set A, the translation A + x is defined as:

A + x = { a + x | a ∈ A }

Note that, since we are dealing with a digital image composed of pixels at integer coordinate positions (Z2), this implies restrictions on the allowable translation vectors x.

The basic Minkowski set operations, addition and subtraction, can now be defined. First we note that the individual elements that comprise B are not only pixels but also vectors, as they have a clear coordinate position with respect to [0,0]. Given two sets A and B:

Minkowski addition - A ⊕ B = ∪ over β ∈ B of (A + β)

Minkowski subtraction - A ⊖ B = ∩ over β ∈ B of (A + β)
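The two Minkowski operations can be illustrated directly on sets of pixel coordinates. The sketch below is a minimal illustration; the set-of-tuples representation and the helper names are our own, not part of the project code.

```python
# Minkowski addition and subtraction on binary objects represented as
# sets of integer (x, y) pixel coordinates in Z^2.

def translate(A, v):
    # A + v = {a + v | a in A}
    return {(x + v[0], y + v[1]) for (x, y) in A}

def minkowski_add(A, B):
    # union over beta in B of (A + beta)
    out = set()
    for beta in B:
        out |= translate(A, beta)
    return out

def minkowski_sub(A, B):
    # intersection over beta in B of (A + beta)
    out = None
    for beta in B:
        t = translate(A, beta)
        out = t if out is None else (out & t)
    return out if out is not None else set()

A = {(1, 1), (2, 1)}
B = {(0, 0), (1, 0)}
minkowski_add(A, B)   # {(1, 1), (2, 1), (3, 1)}
minkowski_sub(A, B)   # {(2, 1)}
```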

DILATION AND EROSION

From these two Minkowski operations we define the fundamental mathematical morphology operations dilation and erosion:

Dilation - D(A, B) = A ⊕ B = ∪ over β ∈ B of (A + β)

Erosion - E(A, B) = A ⊖ (-B) = ∩ over β ∈ B of (A - β)

where -B = { -β | β ∈ B }. These two operations are illustrated in the figures below for the objects defined earlier.

(a) Dilation D(A,B)   (b) Erosion E(A,B)

A binary image containing two object sets A and B. The three pixels in B are "color-coded" as is
their effect in the result.

While either set A or B can be thought of as an "image", A is usually considered as the image and B is called a structuring element. The structuring element is to mathematical morphology what the convolution kernel is to linear filter theory.
Dilation, in general, causes objects to dilate or grow in size; erosion causes objects to shrink. The amount and the way that they grow or shrink depend upon the choice of the structuring element. Dilating or eroding without specifying the structuring element makes no more sense than trying to lowpass filter an image without specifying the filter. The two most common structuring elements (given a Cartesian grid) are the 4-connected and 8-connected sets, N4 and N8.

   (a) N4 (b) N8

The standard structuring elements N4 and N8.

Dilation and erosion have the following properties:

Commutative - D(A, B) = A ⊕ B = B ⊕ A = D(B, A)

Non-Commutative - E(A, B) ≠ E(B, A)

Associative - A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C

Translation Invariance - A ⊕ (B + x) = (A ⊕ B) + x

Duality - Dc(A, B) = E(Ac, -B) and Ec(A, B) = D(Ac, -B)

With A as an object and Ac as the background, the duality relation says that the dilation of an object is equivalent to the erosion of the background. Likewise, the erosion of the object is equivalent to the dilation of the background.

Except for special cases:

Non-Inverses - D(E(A, B), B) ≠ A ≠ E(D(A, B), B)

Erosion has the following translation property:

Translation Invariance - E(A + x, B) = E(A, B) + x

Dilation and erosion have the following important properties. For any arbitrary structuring element B and two image objects A1 and A2 such that A1 ⊂ A2 (A1 is a proper subset of A2):

Increasing in A - D(A1, B) ⊂ D(A2, B) and E(A1, B) ⊂ E(A2, B)

For two structuring elements B1 and B2 such that B1 ⊂ B2:

Decreasing in B - E(A, B1) ⊃ E(A, B2)

The decomposition theorems below make it possible to find efficient implementations for morphological filters.

Dilation - A ⊕ (B1 ∪ B2) = (A ⊕ B1) ∪ (A ⊕ B2)

Erosion - A ⊖ (B1 ∪ B2) = (A ⊖ B1) ∩ (A ⊖ B2)

Erosion - (A ⊖ B1) ⊖ B2 = A ⊖ (B1 ⊕ B2)

Multiple Dilations - nB = B ⊕ B ⊕ ... ⊕ B (n successive terms)

An important decomposition theorem is due to Vincent. First, we require some definitions. A convex set (in R2) is one for which the straight line joining any two points in the set consists of points that are also in the set. Care must obviously be taken when applying this definition to discrete pixels, as the concept of a "straight line" must be interpreted appropriately in Z2. A set is bounded if each of its elements has a finite magnitude, in this case distance to the origin of the coordinate system. A set is symmetric if B = -B. The sets N4 and N8 shown above are examples of convex, bounded, symmetric sets.

Vincent's theorem, when applied to an image consisting of discrete pixels, states that for a bounded, symmetric structuring element B that contains no holes and contains its own center:

D(A, B) = A ∪ D(∂A, B)

where ∂A is the contour of the object, that is, the set of pixels of A that have a background pixel as a neighbor. The implication of this theorem is that it is not necessary to process all the pixels in an object in order to compute a dilation or (by duality) an erosion; we only have to process the boundary pixels. This also holds for all operations that can be derived from dilations and erosions. The processing of boundary pixels instead of object pixels means that, except for pathological images, computational complexity can be reduced from O(N2) to O(N) for an N x N image. A number of "fast" algorithms based on this result can be found in the literature. The simplest dilation and erosion algorithms are frequently described as follows.

* Dilation - Take each binary object pixel (with value "1") and set all background pixels (with
value "0") that are C-connected to that object pixel to the value "1".

* Erosion - Take each binary object pixel (with value "1") that is C-connected to a background
pixel and set the object pixel value to "0".

Comparison of these two procedures to the formal definitions with B = N4 or B = N8 shows that they are equivalent to the definitions of dilation and erosion given above.

   (a) B = N4 (b) B= N8

Illustration of dilation. Original object pixels are in gray; pixels added through dilation are in
black.
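The two simple algorithms above can be sketched on a set-of-coordinates representation of a binary image. This is a minimal illustration (the representation and names are our own); since N4 is symmetric, the sign of the structuring element does not matter here.

```python
# Dilation and erosion of a binary object (a set of (x, y) pixels)
# with the 4-connected structuring element N4 (which includes the
# center pixel).

N4 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}

def dilate(A, B=N4):
    # every translate of the object by an element of B
    return {(x + dx, y + dy) for (x, y) in A for (dx, dy) in B}

def erode(A, B=N4):
    # keep a pixel only if every B-translate of it stays inside A
    return {(x, y) for (x, y) in A
            if all((x + dx, y + dy) in A for (dx, dy) in B)}

square = {(x, y) for x in range(3) for y in range(3)}   # a 3x3 block
erode(square)     # only the center survives: {(1, 1)}
dilate(square)    # the block grows by a one-pixel 4-connected rim
```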
BOOLEAN CONVOLUTION

An arbitrary binary image object (or structuring element) A can be represented by a characteristic function a[j,k] that takes on the Boolean value "1" when the pixel [j,k] belongs to A and "0" otherwise, together with a Boolean version d[m,n] of the Dirac delta function that takes on the value "1" at [0,0] and "0" elsewhere:

A → a[m,n] = OR over [j,k] ∈ A of d[m - j, n - k]

where OR and AND are the Boolean operations. Dilation for binary images can therefore be written as:

Dilation - D(A,B)[m,n] = OR over [j,k] of ( a[j,k] AND b[m - j, n - k] )

which, because Boolean OR and AND are commutative, can also be written with the roles of a and b exchanged. Using De Morgan's theorem,

NOT (x OR y) = (NOT x) AND (NOT y)

on the dilation expression together with the duality relation between dilation and erosion, erosion can be written as:

Erosion - E(A,B)[m,n] = AND over [j,k] of ( a[m + j, n + k] OR NOT b[j,k] )

Thus, dilation and erosion on binary images can be viewed as a form of convolution over a Boolean algebra.

When convolution is employed, an appropriate choice of the boundary conditions for an image is essential. Dilation and erosion, being a Boolean convolution, are no exception. The two most common choices are that either everything outside the binary image is "0" or everything outside the binary image is "1".

OPENING AND CLOSING

We can combine dilation and erosion to build two important higher order operations:

Opening - O(A, B) = A ∘ B = D(E(A, B), B)

Closing - C(A, B) = A • B = E(D(A, B), -B)

The opening and closing have the following properties:

Duality - Oc(A, B) = C(Ac, B) and Cc(A, B) = O(Ac, B)

Translation - O(A + x, B) = O(A, B) + x and C(A + x, B) = C(A, B) + x

For the opening with structuring element B and images A, A1, and A2, where A1 is a subimage of A2 (A1 ⊆ A2):

Antiextensivity - O(A, B) ⊆ A

Increasing monotonicity - O(A1, B) ⊆ O(A2, B)

Idempotence - O(O(A, B), B) = O(A, B)

For the closing with structuring element B and images A, A1, and A2, where A1 is a subimage of A2 (A1 ⊆ A2):

Extensivity - A ⊆ C(A, B)

Increasing monotonicity - C(A1, B) ⊆ C(A2, B)

Idempotence - C(C(A, B), B) = C(A, B)

The two idempotence properties are so important to mathematical morphology that they can be considered the reason for defining erosion with -B instead of B.
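Opening and closing are just compositions of the dilation and erosion described earlier, so they can be sketched directly. A minimal illustration on coordinate sets (N4 is symmetric, so -B = B here; the representation and names are our own):

```python
# Opening O(A,B) = D(E(A,B),B) and closing C(A,B) = E(D(A,B),-B)
# on binary objects represented as sets of (x, y) pixels.

N4 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}

def dilate(A, B=N4):
    return {(x + dx, y + dy) for (x, y) in A for (dx, dy) in B}

def erode(A, B=N4):
    return {p for p in A
            if all((p[0] + dx, p[1] + dy) in A for (dx, dy) in B)}

def opening(A, B=N4):
    return dilate(erode(A, B), B)

def closing(A, B=N4):
    neg_B = {(-dx, -dy) for (dx, dy) in B}
    return erode(dilate(A, B), neg_B)

# A lone pixel cannot contain N4, so opening removes it entirely ...
opening({(5, 5)})            # -> set()
# ... while closing is extensive: the original object always survives.
blob = {(0, 0), (1, 0), (0, 1)}
blob <= closing(blob)        # -> True
```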

HIT AND MISS OPERATION

The hit-or-miss operator was defined by Serra, but we shall refer to it as the hit-and-miss operator and define it as follows. Given an image A and two structuring elements B1 and B2, the set definition is:

Hit-and-Miss - H(A; B1, B2) = E(A, B1) ∩ E(Ac, B2)

where B1 and B2 are bounded, disjoint structuring elements. Two sets are disjoint if B1 ∩ B2 = ∅, the empty set. In an important sense the hit-and-miss operator is the morphological equivalent of template matching, a well-known technique for matching patterns based upon cross-correlation. Here, we have a template B1 for the object and a template B2 for the background.

SUMMARY OF THE BASIC OPERATIONS

The results of applying these basic operations on a test image are illustrated below. The structuring elements used in the processing are 3 x 3 and symmetric; the value "-" indicates a "don't care".

Structuring elements B, B1, and B2 (3 x 3, symmetric).

The results of processing are shown below, where the binary value "1" is shown in black and the value "0" in white.

a) Image A  b) Dilation with 2B  c) Erosion with 2B
d) Opening with 2B  e) Closing with 2B  f) Hit-and-Miss with B1 and B2

Examples of various mathematical morphology operations.

The opening operation can separate objects that are connected in a binary image. The closing operation can fill in small holes. Both operations generate a certain amount of smoothing on an object contour given a "smooth" structuring element: the opening smoothes from the inside of the object contour and the closing smoothes from the outside. The hit-and-miss example has found the 4-connected contour pixels. An alternative method to find the contour is simply to use the relations:

4-connected contour - ∂A = A \ E(A, N8)

8-connected contour - ∂A = A \ E(A, N4)
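Extracting a contour this way amounts to subtracting the eroded object from the object itself. A minimal sketch on coordinate sets (eroding with N8 leaves the 4-connected contour; the representation and names are our own):

```python
# 4-connected contour of a binary object: the object minus its
# erosion by the 8-connected structuring element N8.

N8 = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

def erode(A, B):
    return {p for p in A
            if all((p[0] + dx, p[1] + dy) in A for (dx, dy) in B)}

def contour_4(A):
    return A - erode(A, N8)

square = {(x, y) for x in range(3) for y in range(3)}
contour_4(square)   # every pixel except the interior point (1, 1)
```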

SKELETON
The informal definition of a skeleton is a line representation of an object that is:

i) one-pixel thick,

ii) through the "middle" of the object, and,

iii) preserves the topology of the object.

These are not always simultaneously realizable.

(a) (b)

Counterexamples to the three requirements.

In the first example, it is not possible to generate a line that is one pixel thick and in the center of the object while generating a path that reflects the simplicity of the object. In the second example, it is not possible to remove a pixel from the 8-connected object and simultaneously preserve the topology, the notion of connectedness, of the object. Nevertheless, there are a variety of techniques that attempt to achieve this goal and to produce a skeleton.

A basic formulation is based on the work of Lantuéjoul. The skeleton subset Sk(A) is defined as:

Skeleton subsets - Sk(A) = E(A, kB) \ O(E(A, kB), B),  k = 0, 1, ..., K

where K is the largest value of k before E(A, kB) becomes empty, and kB denotes k successive dilations of B with itself. The structuring element B is chosen (in Z2) to approximate a circular disc, that is, convex, bounded, and symmetric. The skeleton is then the union of the skeleton subsets:

Skeleton - S(A) = ∪ over k of Sk(A)

An elegant side effect of this formulation is that the original object can be reconstructed given knowledge of the skeleton subsets Sk(A), the structuring element B, and K:

Reconstruction - A = ∪ over k of D(Sk(A), kB)

This formulation for the skeleton, however, does not preserve the topology, the third requirement stated above.

An alternative point of view is to implement a thinning, an erosion that reduces the thickness of an object without permitting it to vanish. A general thinning algorithm is based on the hit-and-miss operation:

Thinning - Thin(A; B1, B2) = A \ H(A; B1, B2)

Depending on the choice of B1 and B2, a large variety of thinning algorithms, and through repeated application skeletonizing algorithms, can be implemented.

A quite practical implementation can be described in another way. If we restrict ourselves to a 3 x 3 neighborhood, similar to the structuring element B = N8 shown earlier, then we can view the thinning operation as a window that repeatedly scans over the (binary) image and sets the center pixel to "0" under certain conditions. The center pixel is not changed to "0" if and only if:

i) an isolated pixel is found,

ii) removing the pixel would change the connectivity, or

iii) removing the pixel would shorten a line.

As pixels are (potentially) removed in each iteration, the process is called a conditional erosion. In general all possible rotations and variations have to be checked. As there are only 512 possible combinations for a 3 x 3 window on a binary image, this can be done easily with the use of a lookup table.

     

(a) Isolated pixel (b) Connectivity pixel (c) End pixel

Test conditions for conditional erosion of the center pixel.

If only condition (i) is used, then each object will be reduced to a single pixel; this is useful if we wish to count the number of objects in an image. If only condition (ii) is used, then holes in the objects will be found. If conditions (i + ii) are used, each object will be reduced to either a single pixel (if it does not contain a hole) or to closed rings (if it does contain holes). If conditions (i + ii + iii) are used, then the "complete skeleton" will be generated as an approximation to the formal skeleton definition given earlier.

PROPAGATION
It is convenient to be able to reconstruct an image that has "survived" several erosions or
to fill an object that is defined, for example, by a boundary. The formal mechanism for this has
several names including region-filling, reconstruction, and propagation. The formal definition is
given by the following algorithm. We start with a seed image S(0), a mask image A, and a
structuring element B. We then use dilations of S with structuring element B and masked by A in
an iterative procedure as follows:
Iteration k - S(k) = [D(S(k−1), B)] ∩ A, repeated until S(k) = S(k−1)

With each iteration the seed image grows (through dilation) but within the set (object)
defined by A; S propagates to fill A. The most common choices for B are N4 or N8. Several
remarks are central to the use of propagation. First, in a straightforward implementation, the
computational costs are extremely high. Each iteration requires O(N^2) operations for an
N x N image, and with the required number of iterations this can lead to a complexity of
O(N^3). Fortunately, a recursive implementation of the algorithm exists in which one or two
passes through the image are usually sufficient, meaning a complexity of O(N^2). Second,
although we have not paid much attention to the issue of object/background connectivity until
now, it is essential that the connectivity implied by B be matched to the connectivity
associated with the boundary definition of A. Finally, as mentioned earlier, it is important to
make the correct choice ("0" or "1") for the boundary condition of the image. The choice
depends upon the application.
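The iteration above can be sketched directly. A minimal illustration assuming SciPy, with a hypothetical function name:

```python
# Sketch of propagation / geodesic reconstruction:
# S(k) = D(S(k-1), B) ∩ A, iterated until the seed stops growing.
import numpy as np
from scipy.ndimage import binary_dilation

def propagate(seed, mask, structure):
    """Grow `seed` by repeated dilation, clipped to `mask`, to convergence."""
    S = seed & mask
    while True:
        grown = binary_dilation(S, structure) & mask
        if np.array_equal(grown, S):
            return S
        S = grown
```

With B = N8 (a full 3 x 3 neighborhood), the seed fills exactly the 8-connected component of the mask that it touches, leaving the other components empty.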

SUMMARY OF SKELETON AND PROPAGATION


The application of these two operations to a test image is shown below: the skeleton operation
with the end-pixel condition (i + ii + iii) and without the end-pixel condition (i + ii). The
propagation operation is illustrated in Figure 44c. The original image, shown in light gray, was
eroded by E(A,6N8) to produce the seed image shown in black. The original was then used as
the mask image to produce the final result. The border value in both images was "0".

Original = light gray Mask = light gray

Skeleton = black   Seed = black


   

a) Skeleton with end pixels b) Skeleton without end pixels c) Propagation with N8

GRAY-VALUE MORPHOLOGICAL PROCESSING


The techniques of morphological filtering can be extended to gray-level images. To
simplify matters we will restrict our presentation to structuring elements, B, that comprise a
finite number of pixels and are convex and bounded. Now, however, the structuring element has
gray values associated with every coordinate position as does the image A.

* Gray-level dilation, DG(*), is given by:

Dilation - DG(A, B)[m, n] = max{[j,k] ∈ B} { a[m − j, n − k] + b[j, k] }

For a given output coordinate [m,n], the structuring element is summed with a shifted
version of the image and the maximum encountered over all shifts within
the J x K domain of B is used as the result. Should the shifting require values of the image A
that are outside the M x N domain of A, then a decision must be made as to which model for
image extension should be used.

* Gray-level erosion, EG(*), is given by:

Erosion - EG(A, B)[m, n] = min{[j,k] ∈ B} { a[m + j, n + k] − b[j, k] }

The duality between gray-level erosion and gray-level dilation--the gray-level counterpart


of eq. --is somewhat more complex than in the binary case:

Duality - EG(A, B) = −DG(−A, B̃), where the reflected structuring element B̃ is defined by
b̃[j, k] = b[−j, −k].

The definitions of higher order operations such as gray-level opening and gray-


level closing are:

Opening - OG(A, B) = DG(EG(A, B), B)

Closing - CG(A, B) = EG(DG(A, B), B)

The important properties that were discussed earlier such as idempotence, translation
invariance, increasing in A, and so forth are also applicable to gray level morphological
processing. The details can be found in Giardina and Dougherty.

In many situations the seeming complexity of gray level morphological processing is


significantly reduced through the use of symmetric structuring elements where b[j,k] = b[−j,−k].
The most common of these is based on the use of B = constant = 0. For this important case, and
using again the domain [j,k] ∈ B, the definitions above reduce to:

Dilation - DG(A, B)[m, n] = max{[j,k] ∈ B} a[m − j, n − k] (the maximum filter)

Erosion - EG(A, B)[m, n] = min{[j,k] ∈ B} a[m + j, n + k] (the minimum filter)

Opening - OG(A, B) = DG(EG(A, B), B)

Closing - CG(A, B) = EG(DG(A, B), B)

The remarkable conclusion is that the maximum filter and the minimum filter are gray-level
dilation and gray-level erosion for the specific structuring element given by the shape of the
filter window, with the gray value "0" inside the window.
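This equivalence, and the duality, can be checked numerically. A small sketch assuming SciPy (the symmetric 3 x 3 window keeps the reflection B̃ equal to B):

```python
# For a flat structuring element, gray-level dilation/erosion are the
# maximum/minimum filter, and the duality E(A,B) = -D(-A,B) holds
# for this symmetric window.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

a = np.array([[1, 5, 2],
              [8, 3, 7],
              [4, 6, 0]], float)

dil = maximum_filter(a, size=3)   # flat gray-level dilation, 3 x 3 window
ero = minimum_filter(a, size=3)   # flat gray-level erosion

# duality: erosion of A equals the negated dilation of -A
assert np.array_equal(ero, -maximum_filter(-a, size=3))
```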
 

a) Effect of 15 x 1 dilation and erosion b) Effect of 15 x 1 opening and closing

Morphological filtering of gray-level data.

For a rectangular window, J x K, the two-dimensional maximum or minimum filter is


separable into two one-dimensional windows. Further, a one-dimensional maximum or
minimum filter can be written in incremental form. This means that gray-level dilations and
erosions have a computational complexity per pixel that is O(constant), that is, independent
of J and K.

The operations defined above can be used to produce morphological algorithms for
smoothing, gradient determination and a version of the Laplacian. All are constructed from the
primitives for gray-level dilation and gray-level erosion and in all cases
the maximum and minimum filters are taken over the domain  .

MORPHOLOGICAL SMOOTHING
This algorithm is based on the observation that a gray-level opening smoothes a gray-
value image from above the brightness surface given by the function a[m,n] and the gray-
level closing smoothes from below. Using the flat structuring element B defined above:

MorphSmooth(A, B) = CG(OG(A, B), B) = min(max(max(min(A))))

Note that we have suppressed the notation for the structuring element B under
the max and min operations to keep the notation simple. Its use, however, is understood.
MORPHOLOGICAL GRADIENT
For linear filters the gradient filter yields a vector representation (eq. (103)) with a
magnitude (eq. (104)) and direction (eq. (105)). The version presented here generates a
morphological estimate of the gradient magnitude:

Gradient(A, B) = (1/2) * (DG(A, B) − EG(A, B))
MORPHOLOGICAL LAPLACIAN
The morphologically-based Laplacian filter is defined by:

Laplacian(A, B) = (1/2) * (DG(A, B) + EG(A, B) − 2A)
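These three algorithms can be sketched from the flat max/min primitives. An illustration assuming SciPy; the helper names are ours:

```python
# Morphological smoothing, gradient and Laplacian built from flat
# 3 x 3 dilation (max) and erosion (min).
import numpy as np
from scipy.ndimage import (grey_opening, grey_closing,
                           maximum_filter, minimum_filter)

def morph_smooth(a, size=3):
    """Opening smooths from above, closing from below."""
    return grey_closing(grey_opening(a, size=size), size=size)

def morph_gradient(a, size=3):
    """Half the difference of dilation and erosion: a gradient magnitude."""
    return 0.5 * (maximum_filter(a, size=size) - minimum_filter(a, size=size))

def morph_laplacian(a, size=3):
    """Half of (dilation + erosion - 2A): a Laplacian-like response."""
    return 0.5 * (maximum_filter(a, size=size)
                  + minimum_filter(a, size=size) - 2 * a)
```

On a constant image, both the gradient and the Laplacian are exactly zero, while at a step edge the gradient responds with half the step height.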
SUMMARY OF MORPHOLOGICAL FILTERS


The effect of these filters is illustrated below. All images were processed with a 3 x 3 flat
structuring element as described above. Figure 46e was contrast-stretched for display purposes
using the 1% and 99% percentiles.

   

a) Dilation b) Erosion c) Smoothing


   d) Gradient e) Laplacian

Examples of gray-level morphological filters.

2.1.4 BORDER CORRECTED MASK:


A mask is a filter; the concept of masking is also known as spatial filtering. Here we deal
with filtering operations that are performed directly on the image. In image processing, a
kernel, convolution matrix, or mask is a small matrix useful for blurring, sharpening,
embossing, edge detection, and more. This is accomplished by means of convolution between a
kernel and an image.
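Such a convolution can be illustrated with a common 3 x 3 sharpening kernel. A minimal sketch assuming SciPy; the kernel choice is illustrative, not the project's mask:

```python
# Convolving an image with a mask/kernel, as described above.
import numpy as np
from scipy.ndimage import convolve

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], float)   # sums to 1: flat regions unchanged

img = np.full((5, 5), 0.5)
img[2, 2] = 1.0                              # one bright pixel

out = convolve(img, sharpen, mode='nearest') # apply the mask by convolution
```

Because the kernel weights sum to one, flat regions pass through unchanged while local contrast around the bright pixel is amplified.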
The mask is constructed so that we can identify the problems or the features which we need to
find in an image. The border-corrected mask is a mask in which the edges are closed so that all
the features of an image can be found.

Example of border corrected mask is given below
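As a hedged sketch of one way to build such a mask, the following loosely follows the MATLAB pipeline of Chapter 4 (threshold, complement, clear border-connected regions, close, dilate). SciPy is assumed; the threshold and the 3 x 3 element are illustrative choices, not the project's parameters:

```python
# Sketch of a border-corrected mask: binarize, complement, remove
# border-connected regions by propagation, then close and dilate.
import numpy as np
from scipy.ndimage import binary_dilation, binary_closing

N8 = np.ones((3, 3), bool)

def clear_border(mask):
    """Remove regions touching the border (propagation from border seeds)."""
    seed = mask.copy()
    seed[1:-1, 1:-1] = False          # keep only border pixels as seeds
    grown = seed
    while True:
        nxt = binary_dilation(grown, N8) & mask
        if np.array_equal(nxt, grown):
            break
        grown = nxt
    return mask & ~grown

def border_corrected_mask(gray, thresh=0.5):
    fg = ~(gray > thresh)             # binarize and complement
    fg = clear_border(fg)             # drop border-connected regions
    fg = binary_closing(fg, N8)       # close small gaps along edges
    return binary_dilation(fg, N8)    # restore some of the lost extent
```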

2.1.5 SEGMENTATION:

In computer vision, image segmentation is the process of partitioning a digital image into


multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to
simplify and/or change the representation of an image into something that is more meaningful
and easier to analyze. Image segmentation is typically used to locate objects and boundaries
(lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a
label to every pixel in an image such that pixels with the same label share certain characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image, or
a set of contours extracted from the image (see edge detection). Each of the pixels in a region are
similar with respect to some characteristic or computed property, such as color, intensity,
or texture. Adjacent regions are significantly different with respect to the same
characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting
contours after image segmentation can be used to create 3D reconstructions with the help of
interpolation algorithms like Marching cubes.

2.1.6 CONNECTED COMPONENT ANALYSIS (CCA) AND OBJECT EXTRACTION:

CCA is a well-known technique in image processing that scans an image and groups
pixels into labeled components based on pixel connectivity. An eight-connected CCA stage is
performed to locate all the objects inside the binary image produced from the previous stage.
The output of this stage is an array of N objects.
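An eight-connected labeling pass can be illustrated with SciPy's `label`, as a stand-in for the CCA stage described here:

```python
# Eight-connected component labeling: each object gets an integer label.
import numpy as np
from scipy.ndimage import label, find_objects

img = np.zeros((8, 8), int)
img[1:3, 1:3] = 1        # object A: a 2 x 2 block
img[5, 5] = 1            # object B ...
img[6, 6] = 1            # ... its diagonal neighbor: same object under 8-connectivity

eight = np.ones((3, 3), int)          # 8-connected structuring element
labels, n = label(img, structure=eight)
boxes = find_objects(labels)          # one bounding slice per labeled object
```

Note that the default 4-connected labeling would split the diagonal pair into two objects; the 8-connected structure merges them.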

2.2 APPLICATIONS:
The major applications of the proposed system point to medical diagnosis:
 The performance of our system is mainly aimed at the detection of juxta-
pleural nodules.

 For the detection of pulmonary nodules in chest CT scans.

 It demonstrates the feasibility of our CADe system for clinical utility.


MEDICAL DIAGNOSIS:
The fast and adaptive detection of pulmonary nodules in thoracic CT images using
hierarchical vector quantization can be used for the medical diagnosis of cancer candidates.
Similar steps can be used to detect cancer in other parts of the body. Since the method can find
a tumor early, the disease can be diagnosed at an early stage.

2.4 METHODOLOGIES:

2.4.1 MODULE NAMES


1. SELF-ADAPTIVE VQ ALGORITHM
2. INCS DETECTION VIA A HIERARCHICAL VQ SCHEME
3. FALSE POSITIVE REDUCTION FROM INCS

2.4.2 MODULE DESCRIPTIONS:


MODULE 1:
SELF-ADAPTIVE VQ ALGORITHM:

VQ was originally used for data compression in signal processing, and


became popular in a variety of research fields such as speech recognition,
face detection, image compression and classification, and image
segmentation. It allows for the modeling of probability density functions by
the distribution of prototype vectors. The general VQ framework involves two
processes: 1) the training process, which determines the set of codebook
vectors according to the probability distribution of the input data; and 2) the
encoding process, which assigns input vectors to codebook vectors. The
well-known Linde–Buzo–Gray (LBG) algorithm has been widely used for the
design of vector quantizers. The algorithm aims to minimize the mean squared
error and is guaranteed to converge to a local optimum.
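The two processes can be sketched with a compact numpy-only LBG/k-means loop. The function names are illustrative and this is not the project's implementation:

```python
# LBG-style vector-quantizer training: alternate the encoding step
# (nearest codeword) with the training step (centroid update).
import numpy as np

def lbg_train(data, codebook, iters=20):
    """Batch LBG / k-means iterations; returns the codebook and assignments."""
    for _ in range(iters):
        # encoding: squared distance of every vector to every codeword
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # training: move each codeword to the mean of its assigned vectors
        for k in range(len(codebook)):
            members = data[assign == k]
            if len(members):
                codebook[k] = members.mean(0)
    return codebook, assign

def mse(data, codebook, assign):
    """Mean squared quantization error the LBG iterations try to minimize."""
    return ((data - codebook[assign]) ** 2).sum(-1).mean()
```

Each iteration can only decrease (or hold) the mean squared error, which is why the procedure converges to a local optimum rather than a global one.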

MODULE 2:
INCS DETECTION VIA A HIERARCHICAL VQ SCHEME:
A very important but difficult task in the CADe of lung nodules is the
detection of INCs, which aims to search for suspicious 3-D objects as nodule
candidates using specific strategies. This step must achieve a sensitivity as
close to 100% as possible, in order to avoid setting an a priori upper bound
on the CADe system performance. Meanwhile, the INC detection should
minimize the number of FPs to ease the following FP-reduction step.
This section presents our hierarchical VQ scheme for automatic detection
and segmentation of INCs.

MODULE 3:
FALSE POSITIVE REDUCTION FROM INCS:
Rule-Based Filtering Operations:
It is challenging to thoroughly separate nodules from attached structures due to their
similar intensities, especially for juxta-vascular nodules (nodules attached to blood
vessels). Since the thickness of blood vessels varies considerably (e.g., from small veins to large
arteries), a 2-D morphological opening with disk radii of 1 up to 5 pixels was adopted to detach
vessels to different degrees.
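The effect of opening with a disc can be sketched as follows, assuming SciPy; the synthetic "nodule" and "vessel" shapes are illustrative only:

```python
# Opening with a disc of radius 2, as in the rule-based step: a thin
# vessel-like structure is removed while a compact blob survives.
import numpy as np
from scipy.ndimage import binary_opening

def disk(r):
    """Boolean disc of radius r (the 2-D opening element)."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

img = np.zeros((15, 25), bool)
img[5:10, 3:8] = True     # compact "nodule" (5 x 5)
img[7, 8:22] = True       # thin "vessel" attached to it

opened = binary_opening(img, disk(2))   # radius-2 disc detaches the vessel
```

Sweeping the radius from 1 to 5, as the text describes, detaches attachments of increasing thickness while the compact candidate survives.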
Feature-Based SVM Classification:
A supervised learning strategy is carried out using the SVM classifier to further reduce
FPs. Our feature-based SVM classifier relies on a series of features extracted from each of the
remaining INCs after the rule-based filtering operations.
CHAPTER 3
SOFTWARE SPECIFICATION

3.1 GENERAL
MATLAB (matrix laboratory) is a numerical computing environment and fourth-
generation programming language. Developed by MathWorks, MATLAB
allows matrix manipulations, plotting of functions and data, implementation of algorithms,
creation of user interfaces, and interfacing with programs written in other languages,
including C, C++, Java, and Fortran.

Although MATLAB is intended primarily for numerical computing, an optional


toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities.
An additional package, Simulink, adds graphical multi-domain simulation and Model-Based
Design for dynamic and embedded systems.

In 2004, MATLAB had around one million users across industry and
academia. MATLAB users come from various backgrounds of engineering, science,
and economics. MATLAB is widely used in academic and research institutions as well as
industrial enterprises.

MATLAB was first adopted by researchers and practitioners in control engineering,


Little's specialty, but quickly spread to many other domains. It is now also used in education, in
particular the teaching of linear algebra and numerical analysis, and is popular amongst scientists
involved in image processing. The MATLAB application is built around the MATLAB
language. The simplest way to execute MATLAB code is to type it in the Command Window,
which is one of the elements of the MATLAB Desktop. When code is entered in the Command
Window, MATLAB can be used as an interactive mathematical shell. Sequences of commands
can be saved in a text file, typically using the MATLAB Editor, as a script or encapsulated into
a function, extending the commands available.

MATLAB provides a number of features for documenting and sharing your work. You
can integrate your MATLAB code with other languages and applications, and distribute your
MATLAB algorithms and applications.

3.2 FEATURES OF MATLAB

 High-level language for technical computing.


 Development environment for managing code, files, and data.
 Interactive tools for iterative exploration, design, and problem solving.
 Mathematical functions for linear algebra, statistics, Fourier analysis,
filtering, optimization, and numerical integration.
 2-D and 3-D graphics functions for visualizing data.
 Tools for building custom graphical user interfaces.
 Functions for integrating MATLAB based algorithms with external applications and
languages, such as C, C++, Fortran, Java™, COM, and Microsoft Excel.

MATLAB is used in a vast range of areas, including signal and image processing,
communications, control design, test and measurement, financial modeling and analysis, and
computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions)
extend the MATLAB environment to solve particular classes of problems in these application
areas.

MATLAB can be used on personal computers and powerful server systems, including
the Cheaha compute cluster. With the addition of the Parallel Computing Toolbox, the language
can be extended with parallel implementations for common computational functions, including
for-loop unrolling. Additionally, this toolbox supports offloading computationally intensive
workloads to Cheaha, the campus compute cluster. MATLAB is one of a few languages in which
each variable is a matrix (broadly construed) and "knows" how big it is. Moreover, the
fundamental operators (e.g. addition, multiplication) are programmed to deal with matrices when
required. And the MATLAB environment handles much of the bothersome housekeeping that
makes all this possible. Since so many of the procedures required for Macro-Investment Analysis
involves matrices, MATLAB proves to be an extremely efficient language for both
communication and implementation.

3.2.1 INTERFACING WITH OTHER LANGUAGES

MATLAB can call functions and subroutines written in the  C programming


language or FORTRAN. A wrapper function is created allowing MATLAB data types to be
passed and returned. The dynamically loadable object files created by compiling such functions
are termed "MEX-files" (for MATLAB executable).

Libraries written in Java, ActiveX or .NET can be directly called from MATLAB and


many MATLAB libraries (for example XML or SQL support) are implemented as wrappers
around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be
done with a MATLAB extension, which is sold separately by MathWorks, or by using an
undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be
confused with the unrelated Java Metadata Interface that is also called JMI.

As alternatives to the MuPAD based Symbolic Math Toolbox available from MathWorks,


MATLAB can be connected to Maple or Mathematica.

Libraries also exist to import and export MathML.

Development Environment

 Startup Accelerator for faster MATLAB startup on Windows, especially on


Windows XP, and for network installations.
 Spreadsheet Import Tool that provides more options for selecting and loading mixed
textual and numeric data.
 Readability and navigation improvements to warning and error messages in the
MATLAB command window.
 Automatic variable and function renaming in the MATLAB Editor.

Developing Algorithms and Applications

MATLAB provides a high-level language and development tools that let you quickly
develop and analyze your algorithms and applications.

The MATLAB Language

The MATLAB language supports the vector and matrix operations that are fundamental
to engineering and scientific problems. It enables fast development and execution. With the
MATLAB language, you can program and develop algorithms faster than with traditional
languages because you do not need to perform low-level administrative tasks, such as declaring
variables, specifying data types, and allocating memory. In many cases, MATLAB eliminates the
need for ‘for’ loops. As a result, one line of MATLAB code can often replace several lines of C
or C++ code.

At the same time, MATLAB provides all the features of a traditional programming
language, including arithmetic operators, flow control, data structures, data types, object-oriented
programming (OOP), and debugging features.

MATLAB lets you execute commands or groups of commands one at a time, without
compiling and linking, enabling you to quickly iterate to the optimal solution. For fast execution
of heavy matrix and vector computations, MATLAB uses processor-optimized libraries. For
general-purpose scalar computations, MATLAB generates machine-code instructions using its
JIT (Just-In-Time) compilation technology.

This technology, which is available on most platforms, provides execution speeds that
rival those of traditional programming languages.

Development Tools
MATLAB includes development tools that help you implement your algorithm
efficiently. These include the following:

MATLAB Editor 

Provides standard editing and debugging features, such as setting breakpoints and single
stepping

Code Analyzer 

Checks your code for problems and recommends modifications to maximize


performance and maintainability

MATLAB Profiler 

Records the time spent executing each line of code

Directory Reports 

Scan all the files in a directory and report on code efficiency, file differences, file
dependencies, and code coverage

Designing Graphical User Interfaces

GUIs can be built using the interactive tool GUIDE (Graphical User Interface Development
Environment) to lay out, design, and edit user interfaces. GUIDE lets you include list boxes,
pull-down menus, push buttons, radio buttons, and sliders, as well as MATLAB plots and
Microsoft ActiveX® controls. Alternatively, you can create GUIs programmatically using
MATLAB functions.
3.2.2 ANALYZING AND ACCESSING DATA

MATLAB supports the entire data analysis process, from acquiring data from external
devices and databases, through preprocessing, visualization, and numerical analysis, to
producing presentation-quality output.

Data Analysis

MATLAB provides interactive tools and command-line functions for data analysis
operations, including:

 Interpolating and decimating


 Extracting sections of data, scaling, and averaging
 Thresholding and smoothing
 Correlation, Fourier analysis, and filtering
 1-D peak, valley, and zero finding
 Basic statistics and curve fitting
 Matrix analysis

Data Access

MATLAB is an efficient platform for accessing data from files, other


applications, databases, and external devices. You can read data from popular file formats, such
as Microsoft Excel; ASCII text or binary files; image, sound, and video files; and scientific files,
such as HDF and HDF5. Low-level binary file I/O functions let you work with data files in any
format. Additional functions let you read data from Web pages and XML.
Visualizing Data

All the graphics features that are required to visualize engineering and scientific data are
available in MATLAB. These include 2-D and 3-D plotting functions, 3-D volume visualization
functions, tools for interactively creating plots, and the ability to export results to all popular
graphics formats. You can customize plots by adding multiple axes; changing line colors and
markers; adding annotation, Latex equations, and legends; and drawing shapes.

2-D Plotting

MATLAB provides 2-D plotting functions for visualizing vectors of data, creating:

 Line, area, bar, and pie charts.


 Direction and velocity plots.
 Histograms.
 Polygons and surfaces.
 Scatter/bubble plots.
 Animations.

3-D Plotting and Volume Visualization

MATLAB provides functions for visualizing 2-D matrices, 3-D scalar data, and 3-D
vector data. You can use these functions to visualize and understand large, often complex,
multidimensional data, specifying plot characteristics such as camera viewing angle,
perspective, lighting effect, light-source locations, and transparency.
3-D plotting functions include:

 Surface, contour, and mesh.


 Image plots.
 Cone, slice, stream, and isosurface.

3.2.3 PERFORMING NUMERIC COMPUTATION

MATLAB contains mathematical, statistical, and engineering functions to support all


common engineering and science operations. These functions, developed by experts in
mathematics, are the foundation of the MATLAB language. The core math functions use the
LAPACK and BLAS linear algebra subroutine libraries and the FFTW Discrete Fourier
Transform library. Because these processor-dependent libraries are optimized to the different
platforms that MATLAB supports, they execute faster than the equivalent C or C++ code.

MATLAB provides the following types of functions for performing mathematical


operations and analyzing data:

 Matrix manipulation and linear algebra.


 Polynomials and interpolation.
 Fourier analysis and filtering.
 Data analysis and statistics.
 Optimization and numerical integration.
 Ordinary differential equations (ODEs).
 Partial differential equations (PDEs).
 Sparse matrix operations.
MATLAB can perform arithmetic on a wide range of data types, including doubles,
singles, and integers.

CHAPTER 4

IMPLEMENTATION

4.1 GENERAL

Matlab is a program that was originally designed to simplify the


implementation of numerical linear algebra routines. It has since grown into
something much bigger, and it is used to implement numerical algorithms for a
wide range of applications. The basic language used is very similar to standard
linear algebra notation, but there are a few extensions that will likely cause you
some problems at first.

4.2 CODE IMPLEMENTATION

clc;
close all;
clear all;
warning off;
%%
%-------------------GET INPUT DATA--------------------------%
[f,p]=uigetfile('*.jpg;*.png;*.bmp;*.tif');
I=im2double(imread([p,f]));

%Convert to GRAY image

I=rgb2gray(I);
figure;
imshow(I);
title('INPUT IMAGE');

%%
%---------------SIMPLE THRESHOLDING-------------------------%
tic
% erosion needs a structuring element; a 1-pixel disk lightly denoises
I1=imerode(I,strel('disk',1));

img=im2bw(I1);

img_1=imcomplement(img);

border=imclearborder(img_1,8);
figure,
imshow(border);
title('LUNG REGION EXTRACTED IMAGE')

%%
%------------------FINDING THE LUNG MASK--------------------%
se=strel('disk',8);
I2=imerode(I,se);

img=im2bw(I2);

img_1=imcomplement(img);

border=imclearborder(img_1,8);
q=imclose(border,se);

mask=imdilate(q,se);
figure,
imshow(mask);
title('BORDER CORRECTED MASKED IMAGE');

%%
%-----------------SEGMENTED REGION----------------------%
gray=I.*mask;
figure,
imshow(gray);
title('SEGMENTED IMAGE');

%%
%----------------- DETECT INITIAL NODULE---------------%
se1=strel('disk',8);
qq=imerode(gray,se1);
qq1=im2bw(qq);
figure,
imshow(qq1);
title('INITIAL NODULE CANDIDATE');

%%
%----------------RULE BASED FILTERING------------------%
rp = regionprops(qq1, 'BoundingBox', 'Area');
area = [rp.Area].';
[~,ind] = max(area);
bb = rp(ind).BoundingBox;
imshow(qq);
rectangle('Position', bb, 'EdgeColor', 'red');
title('RULE BASED DETECTED NODULE');

%%
%-------------CROPPING THE DETECTED REGION--------------%
crop=imcrop(qq,bb);
figure,
imshow(crop);
title('CROPPED DETECTED NODULE');

%%
%----------SVM TRAINING AND CLASSIFICATION-------------%
load final_data.mat

fd=final_data;
training1=double(final_data(1:128,1:40)');

training2=double(final_data(1:128,41:80)');
for i=1:20
label1(i,1)=1;
end
for j=21:40
label1(j,1)=2;
end
for i=41:60
label1(i,1)=3;
end
for j=61:80
label1(j,1)=4;
end
% binary RBF-kernel SVM trained on the first two classes (40 samples)
svmstruct = svmtrain(training1,label1(1:40,1), 'Kernel_Function','rbf',...
'boxconstraint',Inf,'showplot',true);
svm_classification   % user-defined script performing the classification
toc
%%
4.3 SNAPSHOTS:

[NEED TO ADD ALL SNAPSHOTS STEP BY STEP]


CHAPTER 5

CONCLUSION AND REFERENCES

5.1 CONCLUSION
