TABLE OF CONTENTS
CHAPTER NO.  TITLE
ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
CHAPTER 1 : INTRODUCTION
1.1 GENERAL
1.1.1 THE IMAGE PROCESSING SYSTEM
1.1.2 IMAGE PROCESSING FUNDAMENTAL
1.2 OBJECTIVE
1.3 EXISTING SYSTEM
1.3.1 EXISTING SYSTEM DISADVANTAGES
1.3.2 LITERATURE SURVEY
1.4 PROPOSED SYSTEM
1.4.1 PROPOSED SYSTEM ADVANTAGES
CHAPTER 2 : PROJECT DESCRIPTION
2.1 INTRODUCTION
2.1.1 RGB COLOR IMAGE
2.1.2 GRAYSCALE
2.1.3 MORPHOLOGICAL OPERATIONS
2.1.4 BORDER CORRECTED MASK
2.1.5 SEGMENTATION
2.1.6 CONNECTED COMPONENT ANALYSIS (CCA)
2.2 APPLICATIONS
2.3 MODULE EXPLANATION
2.3.1 MODULE NAMES
2.3.2 MODULE DESCRIPTIONS
2.3.3 TECHNIQUE
CHAPTER 3 : SOFTWARE SPECIFICATION
3.1 GENERAL
CHAPTER 1
INTRODUCTION
1.1 GENERAL
The term digital image processing refers to the processing of a two-dimensional picture by a
digital computer. In a broader context, it implies digital processing of any two-dimensional data.
A digital image is an array of real or complex numbers represented by a finite number of bits. An
image given in the form of a transparency, slide, photograph or an X-ray is first digitized and
stored as a matrix of binary digits in computer memory. This digitized image can then be
processed and/or displayed on a high-resolution television monitor. For display, the image is
stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per
second to produce a visually continuous display.
DIGITIZER:
A digitizer converts an image into a numerical representation suitable for input into a
digital computer. Some common digitizers are:
1. Microdensitometer
2. Flying spot scanner
3. Image dissector
4. Vidicon camera
5. Photosensitive solid-state arrays
IMAGE PROCESSOR:
[Block diagram: image acquisition and preprocessing feed segmentation, representation and
description, and recognition and interpretation, all guided by a knowledge base.]
FIG 1.2 BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE PROCESSING SYSTEM
As detailed in the diagram, the first step in the process is image acquisition by an imaging
sensor in conjunction with a digitizer to digitize the image. The next step is preprocessing,
where the image is improved before being fed as an input to the other processes. Preprocessing
typically deals with enhancing, removing noise, isolating regions, etc. Segmentation partitions
an image into its constituent parts or objects. The output of segmentation is usually raw pixel
data, which consists of either the boundary of the region or the pixels in the region themselves.
Representation is the process of transforming the raw pixel data into a form useful for
subsequent processing by the computer. Description deals with extracting features that are basic
in differentiating one class of objects from another. Recognition assigns a label to an object
based on the information provided by its descriptors. Interpretation involves assigning meaning
to an ensemble of recognized objects. The knowledge about a problem domain is incorporated
into the knowledge base. The knowledge base guides the operation of each processing module
and also controls the interaction between the modules. Not all modules need necessarily be
present for a specific function. The composition of the image processing system depends on its
application. The frame rate of the image processor is normally around 25 frames per second.
DIGITAL COMPUTER:
MASS STORAGE:
The secondary storage devices normally used are floppy disks, CD ROMs etc.
The hard copy device is used to produce a permanent copy of the image and for the
storage of the software involved.
OPERATOR CONSOLE:
The operator console consists of equipment and arrangements for verification of
intermediate results and for alterations in the software as and when required. The operator is
also capable of checking for any resulting errors and for the entry of requisite data.
Digital image processing refers to processing of the image in digital form. Modern cameras
may directly capture the image in digital form, but generally images originate in optical form.
They are captured by video cameras and digitized. The digitization process includes sampling
and quantization. These images are then processed by at least one of the five fundamental
processes below, though not necessarily all of them.
1. Image Enhancement
2. Image Restoration
3. Image Analysis
4. Image Compression
5. Image Synthesis
IMAGE ENHANCEMENT:
Image enhancement operations improve the qualities of an image, such as improving its
contrast and brightness characteristics, reducing its noise content, or sharpening its details.
This only enhances the image and reveals the same information in a more understandable form; it
does not add any new information to it.
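As a concrete illustration of a simple enhancement operation, linear contrast stretching maps the occupied intensity range of an image onto the full display range. The following is a small Python sketch (the pixel values are hypothetical, and this is illustrative only, not part of the project's MATLAB code):

```python
def stretch_contrast(pixels, lo=0, hi=255):
    """Linearly map the pixel range [min, max] onto [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:                      # flat image: nothing to stretch
        return [lo] * len(pixels)
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# A dull, low-contrast row of pixels occupying only [100, 140]
row = [100, 110, 120, 130, 140]
print(stretch_contrast(row))   # the same relative ordering, spread over [0, 255]
```

No information is added; the same gray-level ordering is simply spread across a wider range, which is exactly the sense in which enhancement "reveals" rather than creates information.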
IMAGE RESTORATION:
Image restoration, like enhancement, improves the qualities of an image, but all the operations
are mainly based on known, measured, or estimated degradations of the original image. Image
restoration is used to restore images with problems such as geometric distortion, improper focus,
repetitive noise, and camera motion. It is used to correct images for known degradations.
IMAGE ANALYSIS:
IMAGE COMPRESSION:
Image compression and decompression reduce the data content necessary to describe the
image. Most images contain a lot of redundant information, and compression removes these
redundancies. Because the size is reduced, the image can be stored or transmitted efficiently.
The compressed image is decompressed when displayed. Lossless compression preserves the exact
data in the original image, while lossy compression does not represent the original image
exactly but provides much higher compression.
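Run-length encoding is a simple example of lossless compression that exploits exactly this kind of redundancy. A brief Python sketch (the scanline values are hypothetical; this is illustrative only):

```python
def rle_encode(data):
    """Lossless run-length encoding: a list of (value, run_length) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1              # extend the current run
        else:
            runs.append([v, 1])           # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Exact inverse of rle_encode: expand each run back to pixels."""
    return [v for v, n in runs for _ in range(n)]

scanline = [0, 0, 0, 0, 255, 255, 0, 0, 0]
encoded = rle_encode(scanline)
print(encoded)                            # three runs instead of nine pixels
assert rle_decode(encoded) == scanline    # lossless: exact round trip
```

The round trip recovers the data bit-for-bit, which is what distinguishes lossless from lossy schemes.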
IMAGE SYNTHESIS:
Image synthesis operations create images from other images or non-image data. Image
synthesis operations generally create images that are either physically impossible or impractical
to acquire.
Digital image processing has a broad spectrum of applications, such as remote sensing
via satellites and other spacecraft, image transmission and storage for business applications,
medical processing, radar, sonar and acoustic image processing, robotics and automated
inspection of industrial parts.
MEDICAL APPLICATIONS:
SATELLITE IMAGING:
COMMUNICATION:
Radar and sonar images are used for detection and recognition of various types of targets
or in guidance and maneuvering of aircraft or missile systems.
DOCUMENT PROCESSING:
It is used in scanning and transmission for converting paper documents to a digital image
form, compressing the image, and storing it on magnetic tape. It is also used in document reading
for automatically detecting and recognizing printed characters.
DEFENSE/INTELLIGENCE:
1.2 OBJECTIVE
The main objective of our project is to increase the detection performance at lower, or at
least comparable, computational cost when compared with the existing approaches. The proposed
hierarchical INCs detection approach is fast, adaptive, and fully automatic. The presented CADe
system should yield comparable detection accuracy with more computational efficiency than
existing systems, which makes it suitable for clinical use. The low-level VQ provides adequate
detection power for non-GGO nodules and is computationally more efficient than the
state-of-the-art approaches.
1.3 EXISTING SYSTEM
Pulmonary nodules (lung nodules) are a mass of soft tissue located in the lungs which can
be diagnosed using any radiography technique. Lung nodules do not cause any symptoms until they
become malignant. Malignant nodules are most often caused by lung cancer, but can also be caused
by cancer somewhere else in the body; for instance, breast cancer and colon cancer often spread
to the lungs. A person with symptoms is first given a chest X-ray; if there are any
abnormalities, they are further investigated using MRI imaging. Lung nodules can be efficiently
detected with MRI imaging techniques, but since MRI is expensive, people from a low economic
background may not be able to afford it. The main objective is to develop a technique so that
lung nodules can be detected using X-ray imaging at an early stage. A multiresolution massive
training artificial neural network (MTANN) is an image processing technique used for suppressing
the contrast of ribs and clavicles. The purpose is to develop the CADe scheme with improved
sensitivity and specificity by use of Virtual Dual Energy (VDE) chest radiographs, in which ribs
and clavicles in the chest radiographs (X-ray images) are suppressed with the MTANN.
2. Early lung cancer action project: Overall design and findings from baseline screening
by C. I. Henschke, D. I. McCauley, D. F. Yankelevitz, D. P. Naidich, G. McGuinness, O.
S. Miettinen, D. M. Libby, M. W. Pasmantier, J. Koizumi, N. K. Altorki, and J. P.
Smith
The choice of treatment for patients with cancers diagnosed as a result of screening is
selected by the treating physician in conjunction with the participant. However, each
participating institution must be committed to document, for each diagnosed case of lung
cancer, the timing and nature of the intervention(s) (if any) and also the prospective course in
respect to manifestations of metastases. The development and refinement of the screening
protocol has been a concern of the ELCAP (Early Lung Cancer Action Program) Group for
more than two decades, and it has been updated in the framework of the International
Conferences organized by this Group and in the resultant international consortium on
screening for lung cancer, I-ELCAP.
OVERALL DIAGRAM:
[Block diagram with the following stages: image, simple thresholding, low-level VQ,
high-level VQ, connected component analysis, morphological closing, INCs]
FIG 1.4: BLOCK DIAGRAM OF PROPOSED SYSTEM
CHAPTER 2
PROJECT DESCRIPTION
2.1 INTRODUCTION
2.1.1 RGB COLOR IMAGE:
The main purpose of the RGB color model is for the sensing, representation, and display
of images in electronic systems, such as televisions and computers, though it has also been used
in conventional photography. Before the electronic age, the RGB color model already had a solid
theory behind it, based in human perception of colors.
Typical RGB input devices are color TV and video cameras, image scanners, and digital
cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma,
etc.), computer and mobile phone displays, video projectors, multicolor LED displays, and large
screens such as JumboTron. Color printers, on the other hand, are not RGB devices,
but subtractive color devices (typically CMYK color model).
An example of an RGB color satellite image is given below.
2.1.2 GRAYSCALE:
Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the
context of computer imaging are images with only two colors, black and white (also
called bilevel or binary images). Grayscale images have many shades of gray in between.
Grayscale images are also called monochromatic, denoting the presence of only one (mono)
color (chrome).
Grayscale images are often the result of measuring the intensity of light at each pixel in a
single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.),
and in such cases they are monochromatic proper when only a given frequency is captured. But
they can also be synthesized from a full color image; see the section about converting to
grayscale.
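Synthesizing a grayscale value from a full-color pixel is typically a weighted sum of the R, G, and B components. A one-function Python sketch using the common ITU-R BT.601 luminance weights (illustrative only; MATLAB's rgb2gray uses the same weighting):

```python
def rgb_to_gray(r, g, b):
    """Luminance-weighted grayscale conversion (BT.601 weights)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray(255, 0, 0))      # pure red maps to a mid-dark gray
print(rgb_to_gray(255, 255, 255))  # white stays at full intensity
```

The green channel dominates because human vision is most sensitive to green; equal weights (a plain average) would make blue areas look too bright.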
FUNDAMENTAL DEFINITIONS
The fundamental operations associated with an object are the standard set
operations union, intersection, and complement {∪, ∩, c} plus translation:
Translation - (A)x = {a + x | a ∈ A}
Note that, since we are dealing with a digital image composed of pixels at integer
coordinate positions (Z²), this implies restrictions on the allowable translation vectors x.
The basic Minkowski set operations, addition and subtraction, can now be defined. First
we note that the individual elements that comprise B are not only pixels but also vectors, as
they have a clear coordinate position with respect to [0,0]. Given two sets A and B:
Minkowski addition - A ⊕ B = ∪β∈B (A)β
Minkowski subtraction - A ⊖ B = ∩β∈B (A)β
Dilation - D(A,B) = A ⊕ B
Erosion - E(A,B) = A ⊖ (−B) = ∩β∈B (A)−β
where −B = {−β | β ∈ B}. These two operations are illustrated in the figures below for the
objects defined.
A binary image containing two object sets A and B. The three pixels in B are "color-coded" as is
their effect in the result.
(a) N4 (b) N8
Commutative - D(A,B) = A ⊕ B = B ⊕ A = D(B,A)
Non-Commutative - E(A,B) ≠ E(B,A)
Associative - A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C
Translation Invariance - A ⊕ (B + x) = (A ⊕ B) + x
Duality - D^c(A,B) = E(A^c, −B); E^c(A,B) = D(A^c, −B)
With A as an object and A^c as the background, the duality relation says that the dilation of an
object is equivalent to the erosion of the background. Likewise, the erosion of the object is
equivalent to the dilation of the background.
Non-Inverses - (A ⊖ B) ⊕ B ≠ A ≠ (A ⊕ B) ⊖ B (in general)
Erosion has the following translation property:
Translation Invariance - (A + x) ⊖ B = (A ⊖ B) + x
Increasing in A - A1 ⊆ A2 ⇒ A1 ⊖ B ⊆ A2 ⊖ B
Decreasing in B - B1 ⊆ B2 ⇒ A ⊖ B1 ⊇ A ⊖ B2
Dilation - A ⊕ B = A ∪ (∂A ⊕ B)
Erosion - A ⊖ B = A − (∂A ⊕ (−B))
Multiple Dilations - nB = B ⊕ B ⊕ … ⊕ B (n times)
where ∂A is the contour of the object. That is, ∂A is the set of pixels that have a background pixel
as a neighbor. The implication of this theorem is that it is not necessary to process all the pixels
in an object in order to compute a dilation or (using eq. ) an erosion. We only have to process the
boundary pixels. This also holds for all operations that can be derived
from dilations and erosions. The processing of boundary pixels instead of object pixels means
that, except for pathological images, computational complexity can be reduced from O(N²)
to O(N) for an N x N image. A number of "fast" algorithms can be found in the literature that are
based on this result . The simplest dilation and erosion algorithms are frequently described as
follows.
* Dilation - Take each binary object pixel (with value "1") and set all background pixels (with
value "0") that are C-connected to that object pixel to the value "1".
* Erosion - Take each binary object pixel (with value "1") that is C-connected to a background
pixel and set the object pixel value to "0".
(a) B = N4 (b) B= N8
Illustration of dilation. Original object pixels are in gray; pixels added through dilation are in
black.
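The set-based definitions above translate almost directly into code. This illustrative Python fragment (not part of the project's MATLAB implementation) represents an object as a set of (row, col) pixels and applies the N4 structuring element:

```python
# N4 structuring element: the centre pixel plus its 4-connected neighbours
N4 = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def dilate(obj, se=N4):
    """A ⊕ B: the union of A translated by every vector in B."""
    return {(r + dr, c + dc) for (r, c) in obj for (dr, dc) in se}

def erode(obj, se=N4):
    """Keep only pixels whose entire B-neighbourhood lies inside A."""
    return {(r, c) for (r, c) in obj
            if all((r + dr, c + dc) in obj for (dr, dc) in se)}

square = {(r, c) for r in range(3) for c in range(3)}   # a 3x3 object
print(len(dilate(square)))   # the object grows a one-pixel rim on each side
print(erode(square))         # only the fully-interior centre pixel survives
```

Dilation adds every background pixel 4-connected to the object (the first algorithm above), while erosion removes every object pixel 4-connected to the background (the second), matching the two bullet descriptions.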
BOOLEAN CONVOLUTION
An arbitrary binary image object (or structuring element) A can be represented as:
a[m,n] = ∨j ∨k ( a[j,k] ∧ d[m−j, n−k] )
where ∨ and ∧ are the Boolean operations OR and AND as defined in eqs. (81) and (82), a[j,k] is
a characteristic function that takes on the Boolean values "1" and "0" as follows:
a[j,k] = 1 if [j,k] ∈ A, and 0 otherwise
and d[m,n] is a Boolean version of the Dirac delta function that takes on the Boolean values "1"
and "0" as follows:
d[m,n] = 1 if [m,n] = [0,0], and 0 otherwise
Opening - O(A,B) = A ∘ B = (A ⊖ B) ⊕ B
Closing - C(A,B) = A • B = (A ⊕ B) ⊖ B
Duality - C^c(A,B) = O(A^c, −B)
Translation - O(A + x, B) = O(A,B) + x
Antiextensivity - O(A,B) ⊆ A
Increasing monotonicity - A1 ⊆ A2 ⇒ O(A1,B) ⊆ O(A2,B)
Idempotence - O(O(A,B),B) = O(A,B)
For the closing with structuring element B and images A, A1, and A2, where A1 is a subimage
of A2 (A1 ⊆ A2):
Extensivity - A ⊆ C(A,B)
Increasing monotonicity - C(A1,B) ⊆ C(A2,B)
Idempotence - C(C(A,B),B) = C(A,B)
The two properties given by the equations above are so important to mathematical morphology that
they can be considered as the reason for defining erosion with −B instead of B.
Hit-and-Miss - HitMiss(A, B1, B2) = (A ⊖ B1) ∩ (A^c ⊖ B2)
where B1 and B2 are bounded, disjoint structuring elements. (Note the use of the notation from
eq. (81).) Two sets are disjoint if B1 ∩ B2 = ∅, the empty set. In an important sense the
hit-and-miss operator is the morphological equivalent of template matching, a well-known
technique for matching patterns based upon cross-correlation. Here, we have a template B1 for
the object and a template B2 for the background.
The results of processing are shown in Figure 41 where the binary value "1" is shown in black
and the value "0" in white.
4-connected contour - ∂A = A − (A ⊖ N8)
or
8-connected contour - ∂A = A − (A ⊖ N4)
SKELETON
The informal definition of a skeleton is a line representation of an object that is:
i) one pixel thick,
ii) through the "middle" of the object, and
iii) preserves the topology of the object.
In the first example, it is not possible to generate a line that is one pixel thick and in the
center of an object while generating a path that reflects the simplicity of the object. In Figure 42b
it is not possible to remove a pixel from the 8-connected object and simultaneously preserve the
topology--the notion of connectedness--of the object. Nevertheless, there are a variety of
techniques that attempt to achieve this goal and to produce a skeleton.
Skeleton subsets - Sk(A) = (A ⊖ kB) − [(A ⊖ kB) ∘ B], k = 0, 1, …, K
where K is the largest value of k before the set Sk(A) becomes empty, and kB denotes k
successive erosions (or dilations) by B. The structuring element B is chosen (in Z²) to
approximate a circular disc, that is, convex, bounded and symmetric. The skeleton is then the
union of the skeleton subsets:
Skeleton - S(A) = ∪k=0..K Sk(A)
An elegant side effect of this formulation is that the original object can be reconstructed
given knowledge of the skeleton subsets Sk(A), the structuring element B, and K:
Reconstruction - A = ∪k=0..K ( Sk(A) ⊕ kB )
This formulation for the skeleton, however, does not preserve the topology, a requirement
described in eq. .
Thinning - Thin(A, B1, B2) = A − HitMiss(A, B1, B2)
If only condition (i) is used, then each object will be reduced to a single pixel. This is
useful if we wish to count the number of objects in an image. If only condition (ii) is used,
then holes in the objects will be found. If conditions (i + ii) are used, each object will be
reduced to either a single pixel if it does not contain a hole, or to closed rings if it does
contain holes. If conditions (i + ii + iii) are used, then the "complete skeleton" will be
generated as an approximation to the skeleton equation above.
PROPAGATION
It is convenient to be able to reconstruct an image that has "survived" several erosions or
to fill an object that is defined, for example, by a boundary. The formal mechanism for this has
several names including region-filling, reconstruction, and propagation. The formal definition is
given by the following algorithm. We start with a seed image S(0), a mask image A, and a
structuring element B. We then use dilations of S with structuring element B and masked by A in
an iterative procedure as follows:
Iteration k - S(k) = [ S(k−1) ⊕ B ] ∩ A, k = 1, 2, 3, … (until S(k) = S(k−1))
With each iteration the seed image grows (through dilation) but within the set (object)
defined by A; S propagates to fill A. The most common choices for B are N4 or N8. Several
remarks are central to the use of propagation. First, in a straightforward implementation, as
suggested by eq. , the computational costs are extremely high. Each iteration requires O(N²)
operations for an N x N image and with the required number of iterations this can lead to a
complexity of O(N³). Fortunately, a recursive implementation of the algorithm exists in which
one or two passes through the image are usually sufficient, meaning a complexity of O(N²).
Second, although we have not paid much attention to the issue of object/background connectivity
until now, it is essential that the connectivity implied by B be matched to the connectivity
associated with the boundary definition of A (see eqs. and ). Finally, as mentioned earlier, it is
important to make the correct choice ("0" or "1") for the boundary condition of the image. The
choice depends upon the application.
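The iterative propagation defined above can be sketched directly. The following Python illustration reuses a set-of-pixels representation and grows a seed inside a mask by repeated N4 dilation until it stops changing (the mask shape is hypothetical; this is not the recursive fast implementation mentioned above):

```python
N4 = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def propagate(seed, mask):
    """S(k) = (S(k-1) dilated by N4) intersected with A, iterated until stable."""
    s = set(seed) & set(mask)            # the seed must start inside the mask
    while True:
        grown = {(r + dr, c + dc) for (r, c) in s for (dr, dc) in N4} & set(mask)
        if grown == s:                   # fixed point reached: S fills its region
            return s
        s = grown

# An L-shaped mask; one seed pixel fills the whole connected region.
mask = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)}
print(sorted(propagate({(0, 0)}, mask)))
```

Each pass is one masked dilation, so in the worst case (a long thin spiral) the straightforward loop shown here exhibits exactly the high iteration count the text warns about.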
Dilation - DG(a, b)[m,n] = max over [j,k] ∈ B of { a[m−j, n−k] + b[j,k] }
For a given output coordinate [m,n], the structuring element is summed with a shifted
version of the image and the maximum encountered over all shifts within
the J x K domain of B is used as the result. Should the shifting require values of the image A
that are outside the M x N domain of A, then a decision must be made as to which model for image
extension should be used.
Erosion - EG(a, b)[m,n] = min over [j,k] ∈ B of { a[m+j, n+k] − b[j,k] }
Duality - EG(a, b) = −DG(−ã, b)
where −ã means that a[j,k] → −a[−j,−k].
Opening - OG(a, b) = DG(EG(a, b), b)
Closing - CG(a, b) = EG(DG(a, b), b)
The important properties that were discussed earlier such as idempotence, translation
invariance, increasing in A, and so forth are also applicable to gray level morphological
processing. The details can be found in Giardina and Dougherty .
Dilation - DG(a)[m,n] = max over [j,k] ∈ B of { a[m−j, n−k] } (a moving-maximum filter)
Erosion - EG(a)[m,n] = min over [j,k] ∈ B of { a[m+j, n+k] } (a moving-minimum filter)
Opening - OG(a) = DG(EG(a))
Closing - CG(a) = EG(DG(a))
The operations defined above can be used to produce morphological algorithms for
smoothing, gradient determination, and a version of the Laplacian. All are constructed from the
primitives for gray-level dilation and gray-level erosion, and in all cases
the maximum and minimum filters are taken over the domain of the structuring element B.
MORPHOLOGICAL SMOOTHING
This algorithm is based on the observation that a gray-level opening smoothes a gray-value
image from above the brightness surface given by the function a[m,n] and the gray-level closing
smoothes from below:
Smoothing - smooth(a) = CG(OG(a))
Note that we have suppressed the notation for the structuring element B under
the max and min operations to keep the notation simple. Its use, however, is understood.
MORPHOLOGICAL GRADIENT
For linear filters the gradient filter yields a vector representation (eq. (103)) with a
magnitude (eq. (104)) and direction (eq. (105)). The version presented here generates a
morphological estimate of the gradient magnitude:
Gradient - gradient(a) = ( DG(a) − EG(a) ) / 2
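Because gray-level dilation and erosion with a flat structuring element reduce to moving maximum and minimum filters, the morphological gradient is easy to sketch on a 1-D signal. An illustrative Python fragment (the signal values are hypothetical):

```python
def gray_dilate(a, radius=1):
    """Moving-maximum filter: flat structuring element of width 2*radius + 1."""
    n = len(a)
    return [max(a[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def gray_erode(a, radius=1):
    """Moving-minimum filter over the same window."""
    n = len(a)
    return [min(a[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def morph_gradient(a, radius=1):
    """(dilation - erosion) / 2: large wherever intensity changes quickly."""
    return [(d - e) / 2
            for d, e in zip(gray_dilate(a, radius), gray_erode(a, radius))]

signal = [10, 10, 10, 200, 200, 200]   # a single step edge
print(morph_gradient(signal))          # zero on the flats, peaks at the edge
```

On flat regions max and min agree and the gradient vanishes; across the step they differ by the full edge height, so the result behaves like an edge detector.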
MORPHOLOGICAL LAPLACIAN
The morphologically-based Laplacian filter is defined by:
Laplacian - laplacian(a) = ( DG(a) + EG(a) − 2a ) / 2
2.1.5 SEGMENTATION:
The result of image segmentation is a set of segments that collectively cover the entire image,
or a set of contours extracted from the image (see edge detection). Each of the pixels in a
region is similar with respect to some characteristic or computed property, such as color,
intensity, or texture. Adjacent regions are significantly different with respect to the same
characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting
contours after image segmentation can be used to create 3D reconstructions with the help of
interpolation algorithms like marching cubes.
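The simplest segmentation of all is global thresholding, which the project's own pipeline uses (via MATLAB's im2bw) before connected component analysis. A minimal Python sketch, with a hypothetical 3x3 image, purely for illustration:

```python
def threshold_segment(image, t):
    """Binary segmentation: 1 where the pixel exceeds threshold t, else 0."""
    return [[1 if p > t else 0 for p in row] for row in image]

img = [[12, 40, 200],
       [35, 180, 220],
       [10, 15, 190]]
print(threshold_segment(img, 100))   # bright pixels become the foreground mask
```

Every pixel within the resulting "1" region satisfies the same intensity property, and adjacent "0" regions differ in that property, matching the definition of segmentation given above.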
2.1.6 CONNECTED COMPONENT ANALYSIS (CCA):
CCA is a well-known technique in image processing that scans an image and groups its
pixels into labeled components based on pixel connectivity. An eight-point CCA stage is
performed to locate all the objects inside the binary image produced from the previous stage.
The output of this stage is an array of N objects; the figure shows an example of the input and
output of this stage.
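The eight-point grouping described above can be sketched with a breadth-first search. An illustrative Python version (the binary array is hypothetical, and this is a sketch of the technique rather than the project's MATLAB implementation):

```python
from collections import deque

def label_components(binary):
    """8-connected component labelling by breadth-first search."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                current += 1                      # found a new object
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # flood the whole component
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):     # all 8 neighbours
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = current
                                queue.append((ny, nx))
    return current, labels

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
n, lab = label_components(img)
print(n)   # two objects: diagonal pixels count as connected under N8
```

With 4-connectivity the diagonal pair in the top-left would split into separate objects, which is why the choice of eight-point connectivity matters for nodule candidates.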
2.2 APPLICATIONS:
The major applications of the proposed system point to medical diagnosis. The
performance of our system is mainly evaluated on the detection of juxta-pleural nodules.
2.4 METHODOLOGIES:
MODULE 2:
INCS DETECTION VIA A HIERARCHICAL VQ SCHEME:
A very important but difficult task in the CADe of lung nodules is the
detection of INCs, which aims to search for suspicious 3-D objects as nodule
candidates using specific strategies. This step is required to be characterized
by a sensitivity that is as close to 100% as possible, in order to avoid setting
an a priori upper bound on the CADe system performance. Meanwhile, the INC detection
should minimize the number of FPs to ease the following FP reduction step.
This section presents our hierarchical VQ scheme for automatic detection
and segmentation of INCs.
MODULE 3:
FALSE POSITIVE REDUCTION FROM INCS:
Rule-Based Filtering Operations:
It is challenging to thoroughly separate nodules from attached structures due to their
similar intensities, especially for the juxta-vascular nodules (the nodules attached to blood
vessels). Since the thickness of blood vessels varies considerably (e.g., from small veins to large
arteries), a 2-D morphological opening disk with radius of 1 up to 5 pixels was adopted to detach
vessels at different degrees.
Feature-Based SVM Classification:
A supervised learning strategy is carried out using the SVM classifier to further reduce
FPs. Our feature-based SVM classifier relies on a series of features extracted from each of the
remaining INCs after the rule-based filtering operations.
CHAPTER 3
SOFTWARE SPECIFICATION
3.1 GENERAL
MATLAB (matrix laboratory) is a numerical computing environment and fourth-
generation programming language. Developed by MathWorks, MATLAB
allows matrix manipulations, plotting of functions and data, implementation of algorithms,
creation of user interfaces, and interfacing with programs written in other languages,
including C, C++, Java, and Fortran.
In 2004, MATLAB had around one million users across industry and
academia. MATLAB users come from various backgrounds of engineering, science,
and economics. MATLAB is widely used in academic and research institutions as well as
industrial enterprises.
MATLAB provides a number of features for documenting and sharing your work. You
can integrate your MATLAB code with other languages and applications, and distribute your
MATLAB algorithms and applications.
MATLAB is used in a vast range of areas, including signal and image processing,
communications, control design, test and measurement, financial modeling and analysis, and
computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions)
extend the MATLAB environment to solve particular classes of problems in these application
areas.
MATLAB can be used on personal computers and powerful server systems, including
the Cheaha compute cluster. With the addition of the Parallel Computing Toolbox, the language
can be extended with parallel implementations for common computational functions, including
for-loop unrolling. Additionally, this toolbox supports offloading computationally intensive
workloads to Cheaha, the campus compute cluster. MATLAB is one of a few languages in which
each variable is a matrix (broadly construed) and "knows" how big it is. Moreover, the
fundamental operators (e.g. addition, multiplication) are programmed to deal with matrices when
required. And the MATLAB environment handles much of the bothersome housekeeping that
makes all this possible. Since so many of the procedures required for Macro-Investment Analysis
involve matrices, MATLAB proves to be an extremely efficient language for both
communication and implementation.
Development Environment
MATLAB provides a high-level language and development tools that let you quickly
develop and analyze your algorithms and applications.
The MATLAB language supports the vector and matrix operations that are fundamental
to engineering and scientific problems. It enables fast development and execution. With the
MATLAB language, you can program and develop algorithms faster than with traditional
languages because you do not need to perform low-level administrative tasks, such as declaring
variables, specifying data types, and allocating memory. In many cases, MATLAB eliminates the
need for ‘for’ loops. As a result, one line of MATLAB code can often replace several lines of C
or C++ code.
At the same time, MATLAB provides all the features of a traditional programming
language, including arithmetic operators, flow control, data structures, data types, object-oriented
programming (OOP), and debugging features.
MATLAB lets you execute commands or groups of commands one at a time, without
compiling and linking, enabling you to quickly iterate to the optimal solution. For fast execution
of heavy matrix and vector computations, MATLAB uses processor-optimized libraries. For
general-purpose scalar computations, MATLAB generates machine-code instructions using its
JIT (Just-In-Time) compilation technology.
This technology, which is available on most platforms, provides execution speeds that
rival those of traditional programming languages.
Development Tools
MATLAB includes development tools that help you implement your algorithm
efficiently. These include the following:
MATLAB Editor
Provides standard editing and debugging features, such as setting breakpoints and single
stepping
Code Analyzer
Checks your code for problems and recommends modifications
MATLAB Profiler
Records the time spent executing each line of code
Directory Reports
Scan all the files in a directory and report on code efficiency, file differences, file
dependencies, and code coverage
MATLAB supports the entire data analysis process, from acquiring data from external
devices and databases, through preprocessing, visualization, and numerical analysis, to
producing presentation-quality output.
Data Analysis
MATLAB provides interactive tools and command-line functions for data analysis
operations, including:
Data Access
All the graphics features that are required to visualize engineering and scientific data are
available in MATLAB. These include 2-D and 3-D plotting functions, 3-D volume visualization
functions, tools for interactively creating plots, and the ability to export results to all popular
graphics formats. You can customize plots by adding multiple axes; changing line colors and
markers; adding annotation, Latex equations, and legends; and drawing shapes.
2-D Plotting
MATLAB provides functions for visualizing 2-D matrices, 3-D scalar, and 3-D
vector data. You can use these functions to visualize and understand large, often complex,
multidimensional data, specifying plot characteristics such as camera viewing angle,
perspective, lighting effect, light source locations, and transparency.
3-D plotting functions include:
CHAPTER 4
IMPLEMENTATION
4.1 GENERAL
clc;
close all;
clear all;
warning off;
%%
%-------------------GET INPUT DATA--------------------------%
[f,p]=uigetfile('*.jpg;*.png;*.bmp;*.tif');
I=im2double(imread([p,f]));
if size(I,3)==3          % convert to grayscale only if the input is RGB
    I=rgb2gray(I);
end
figure;
imshow(I);
title('INPUT IMAGE');
%%
%---------------SIMPLE THRESHOLDING-------------------------%
tic
I1=imerode(I,strel('disk',1));   % light erosion before thresholding
img=im2bw(I1);
img_1=imcomplement(img);
border=imclearborder(img_1,8);
figure,
imshow(border);
title('LUNG REGION EXTRACTED IMAGE')
%%
%------------------FINDING THE LUNG MASK--------------------%
se=strel('disk',8);
I2=imerode(I,se);
img=im2bw(I2);
img_1=imcomplement(img);
border=imclearborder(img_1,8);
q=imclose(border,se);
mask=imdilate(q,se);
figure,
imshow(mask);
title('BORDER CORRECTED MASKED IMAGE');
%%
%-----------------SEGMENTED REGION----------------------%
gray=I.*mask;
figure,
imshow(gray);
title('SEGMENTED IMAGE');
%%
%----------------- DETECT INITIAL NODULE---------------%
se1=strel('disk',8);
qq=imerode(gray,se1);
qq1=im2bw(qq);
figure,
imshow(qq1);
title('INITIAL NODULE CANDIDATE');
%%
%----------------RULE BASED FILTERING------------------%
rp = regionprops(qq1, 'BoundingBox', 'Area');
area = [rp.Area].';
[~,ind] = max(area);
bb = rp(ind).BoundingBox;
figure,
imshow(qq);
rectangle('Position', bb, 'EdgeColor', 'red');
title('RULE BASED DETECTED NODULE');
%%
%-------------CROPPING THE DETECTED REGION--------------%
crop=imcrop(qq,bb);
figure,
imshow(crop);
title('CROPPED DETECTED NODULE');
%%
%----------SVM TRAINING AND CLASSIFICATION-------------%
load final_data.mat
fd=final_data;
training1=double(final_data(1:128,1:40)');
training2=double(final_data(1:128,41:80)');
% class labels: samples 1-20 -> class 1, 21-40 -> class 2,
%               41-60 -> class 3, 61-80 -> class 4
label1=[ones(20,1); 2*ones(20,1); 3*ones(20,1); 4*ones(20,1)];
% note: svmtrain is deprecated in newer MATLAB releases (use fitcsvm)
svmstruct = svmtrain(training1,label1(1:40,1),'Kernel_Function','rbf',...
    'boxconstraint',Inf,'showplot',true);
svm_classification
toc
%%
4.3 SNAPSHOTS:
5.1 CONCLUSION