
SEED SNAP

Session: BSCS Spring 2018-2019

Project Advisor: Muhammad Bilal

Submitted By

Ammar-Ahmed BSCSF15ME22

Iqra-Ashraf BSCSF15MM57

Shaheer-Habib BSCSF15MM01

Department of Computer Science & Information Technology


University of Sargodha
STATEMENT OF SUBMISSION

This is to certify that Ammar-Ahmed, Roll No. BSCSF15ME22, Iqra-Ashraf, Roll No.
BSCSF15MM57, and Shaheer-Habib, Roll No. BSCSF15MM01, successfully completed the
final project titled SEED SNAP at the Department of Computer Science &
Information Technology, University of Sargodha, Sub-Campus Mianwali, in fulfillment
of the requirements of the degree of BS in Computer Science.

______________________ _____________________
Project Supervisor Project Coordination Office
Muhammad Bilal DOCS&IT -UOS

Lecturer
DOCS&IT -UOS

_____________________________ ________________________
External Examiner

Acknowledgments
We truly acknowledge the cooperation and help extended by Dr. Tauqeer,
Director, Department of Computer Science & Information Technology,
University of Sargodha, Mianwali. He has been a constant source of guidance
throughout the course of this project. We would also like to thank the
Agriculture Department, Lahore, for their help and guidance throughout this
project. We are also thankful to our friends and families, whose silent support
led us to complete our project.

Contents
1.1. Overview
1.2. Brief History of Image Processing
1.3. Classification
1.3.1. Supervised Classification
1.3.2. Unsupervised Classification
1.4. Challenges and Statements
1.4.1. Illumination
1.4.2. Deformation
1.4.3. Occlusion
1.4.4. Background Clutter
1.5. Attempts Made
1.6. Project Goals & Objectives
1.7. Scope of Research
2.1. Introduction
2.2. Work in Image-processing
2.2.1. PASCAL VOC
2.2.2. VO Tracking Module
2.2.3. RGB-D
2.2.4. Simultaneous Detection and Segmentation
2.3. Deep Networking
2.4. Agricultural Research Institute
2.5. Image Enhancement
2.5.1. Color Based Transformation
2.6. Segmentation Based Techniques
2.6.1. K-means Clustering
2.6.2. Thresholding
2.7. Features Extraction Based Techniques
2.7.1. Detection of Diseases Using Texture Features
2.8. Classifier Based Techniques
2.8.1. Support Vector Machine
2.8.2. Artificial Neural Network
2.8.3. K-nearest Neighbor
Chapter 3
3.1. Introduction
3.1.1. Types of Neural Network
3.2. Convolutional Neural Network (CNN)
3.2.1. Steps of Convolutional Neural Network (CNN)
3.2.1.1. Convolutional Layer
3.2.1.2. Rectified Layer (ReLU)
3.2.1.3. Pooling Layer
3.3. Methodology
3.3.1. Hardware and Software Specification
3.3.2. Load Images
3.3.3. Category Labels with Each Image
3.3.4. Prepare Training and Test Image Sets
3.3.5. Validation
3.4. Pre-process Images for CNN
3.5. Project Overview Statement
Chapter 4
Results and Discussion
4.1. Introduction
4.2. Pretrained Network
4.2.1. First Attempt
4.3. Assumptions and Simulation Parameters
4.3.1. Inspect the First Layer
4.3.2. Inspect the Last Layer
4.4. Results
4.4.1. Training Results
4.4.2. Testing Results
4.4.3. Confusion Matrix
4.4.4. Output or Prediction Results
Conclusion and Future Work
5.1. Conclusion
5.2. Future Work

List of figures
Figure 1: Digital picture produced in 1921
Figure 2: Digital picture made in 1922
Figure 3: Unretouched cable picture of Generals Pershing and Foch
Figure 4: Some images for different types of illumination
Figure 5: Some images for different types of deformation
Figure 6: Some images for occlusion
Figure 7: Some snaps regarding background clutter
Figure 8: Black and white plot of image
Figure 9: Image of a neuron
Figure 10: Basic structure diagram for CNN
Figure 11: The working of the CNN model
Figure 12: Graph of ReLU
Figure 13: Seed images used in the project
Figure 14: Labels of images with the number of images in a folder
Figure 15: A balanced number of images for better training of the system
Figure 16: First section of the ResNet-50 model
Figure 17: Image to be converted to 224x224x3
Figure 18: A simple code
Figure 19: Classifier functions
Figure 20: Values of train images
Figure 21: Image result of training features using CNN
Figure 22: Testing features results
Figure 23: The mean accuracy of the images
Figure 24: The image to be tested from the training set
Figure 25: The result of an input image

Abstract
The main objective of this study is to distinguish seeds on the basis of quality. Our aim
is to develop a system that can classify seed quality using image processing and modern
computational techniques. A quick survey of publicly available evaluation metrics is
given, and a comparison with benchmark results, which are widely used for the quantitative
evaluation of image-processing research, is described. This overview can serve as a short
guidebook for novices in the field of image processing, offering fundamental knowledge
and a general grasp of the latest studies, as well as for skilled researchers searching
for productive directions for future work on quality-based seed analysis.

Seed Snap 1

Chapter 1

Introduction

1.1. Overview:
Image processing is a technique for performing operations on an image in order to obtain
a more desirable image or to extract useful information from it. It is a kind of signal
processing in which the input is an image and the output may be an image or characteristic
features associated with that image. Nowadays, image processing is among the most rapidly
developing technologies, and it forms a core research area within engineering and computer
science.
Image processing essentially includes the following three steps:
importing the image via image acquisition tools; analyzing and manipulating the
image; and producing output, which may be an altered image or a report based on image
analysis. Two kinds of methods are used for image processing, namely analog and
digital image processing. Analog image processing can be used for hard copies
like printouts and photographs; image analysts apply various fundamentals of
interpretation when using these visual techniques. Digital image processing
techniques assist in the manipulation of digital images by means of computers.
The three general phases that all kinds of data go through in the digital approach
are pre-processing, enhancement and display, and information extraction.
To become suitable for digital processing, an image function f(x,y) must be
digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used
to sample and quantize the analog video signal. Hence, to create a digital image,
we need to convert continuous data into digital form. This is done in two steps:
sampling and quantization.
The sampling stage determines the spatial resolution of the digitized image, while the
quantization stage determines the number of gray levels in the digitized image. The
magnitude of the sampled image is expressed as a digital value. The

Department of CS & IT, University of Sargodha Sub campus Mianwali



transition between continuous values of the image function and its digital equivalent is
called quantization.
The number of quantization levels must be high enough for human perception of fine
shading details in the image. The occurrence of false contours is the main
problem in an image that has been quantized with insufficient brightness levels.
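To make the false-contour effect concrete, here is a small sketch in Python/NumPy (an illustration only; the project itself is implemented in MATLAB, and the gradient image below is synthetic). Requantizing a smooth gradient to just 8 gray levels produces visible banding:

```python
import numpy as np

def quantize(image, levels):
    """Requantize an 8-bit grayscale image to a smaller number of gray levels.

    Too few levels produces visible "false contours" in smooth regions.
    """
    step = 256 / levels
    # Map every pixel to the centre of its quantization bin.
    return (np.floor(image / step) * step + step / 2).astype(np.uint8)

# A smooth horizontal gradient, like a sky or shading region.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
coarse = quantize(gradient, 8)   # only 8 gray levels -> banding
```

The 256 original intensities collapse onto 8 output values, so smooth regions break into flat bands separated by abrupt edges, which is exactly the false contouring described above.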

Resizing image
Image interpolation occurs when you resize or distort your image from one pixel grid to
another. Image resizing is needed when you want to increase or decrease the total
number of pixels, whereas remapping can occur when you are correcting for lens
distortion or rotating an image. Zooming refers to increasing the number of pixels, so
that when you zoom into an image, you see more detail.
Interpolation works by using known data to estimate values at unknown points. Image
interpolation works in two directions and tries to achieve the best approximation of a
pixel's intensity based on the values at surrounding pixels. Common
interpolation algorithms can be grouped into two categories: adaptive and non-adaptive.
Adaptive methods change depending on what they are interpolating, whereas non-adaptive
methods treat all pixels equally. Non-adaptive algorithms include nearest neighbor,
bilinear, bicubic, spline, sinc, Lanczos, and others. Adaptive algorithms include many
proprietary algorithms in licensed software such as Qimage, PhotoZoom Pro,
and Genuine Fractals.
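As a sketch of the non-adaptive case, bilinear interpolation can be written in a few lines. This is an illustrative Python/NumPy version on a toy 2x2 array, not the project's code (MATLAB's imresize offers the same nearest, bilinear, and bicubic choices):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale array with bilinear interpolation.

    Each output pixel is a distance-weighted blend of the four
    nearest input pixels, i.e. interpolation in two directions.
    """
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # fractional vertical weights
    wx = (xs - x0)[None, :]   # fractional horizontal weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0., 100.], [100., 200.]])
big = bilinear_resize(small, 3, 3)  # corners preserved, middle blended
```

The corner pixels pass through unchanged, while the new centre pixel becomes the average of its four neighbours, illustrating why bilinear zooming looks smoother than nearest neighbor.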
Many compact digital cameras can perform both an optical and a digital zoom. A camera
performs an optical zoom by moving the zoom lens to increase the magnification
of light. A digital zoom, however, degrades quality by simply interpolating the
image. Even though a digitally zoomed picture contains the same number of pixels,
its detail is clearly far less than with an optical zoom.
1.2. Brief History of Image Processing:
The field of image processing is continually evolving. During the past five years, there
has been a significant increase in the level of interest in image morphology, neural
networks, full-color image processing, image data compression, image recognition, and
knowledge-based image analysis systems. Image processing techniques stem from two principal
application areas: enhancement of pictorial information for human interpretation, and
processing of scene data for autonomous machine perception.


Images are better than any other data form for human beings to perceive. Vision allows
humans to identify and understand the world surrounding us. Image understanding, image
analysis, and computer vision aim to duplicate the effect of human vision by
electronically (digitally, in the current context) perceiving and understanding
images. In a digital image processing system, the first step is image acquisition,
which requires acquiring an image. After a digital image has been obtained, the next
step, preprocessing, enhances the image in ways that increase the chances of success
of the other processes. The subsequent step, segmentation, partitions an input image
into its constituent parts or objects. Representation and description put the data
into a form suitable for computer processing. Recognition then assigns a label to an
object, and finally interpretation assigns meaning to an ensemble of recognized objects.
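The stages described above can be sketched as a toy pipeline. The following Python example is purely illustrative: every function and the synthetic image are invented for this sketch and are not part of the project's MATLAB implementation.

```python
import numpy as np

def acquire():
    """Image acquisition: here, a synthetic 8-bit image with a bright object."""
    img = np.zeros((16, 16), dtype=np.uint8)
    img[4:12, 4:12] = 200
    return img

def preprocess(img):
    """Preprocessing: improve the chances of later stages (a trivial contrast stretch)."""
    span = max(int(img.max()) - int(img.min()), 1)
    return ((img - img.min()) / span * 255).astype(np.uint8)

def segment(img, thresh=128):
    """Segmentation: partition the image into object vs. background."""
    return img > thresh

def describe(mask):
    """Representation & description: turn pixels into computer-friendly data."""
    return {"area": int(mask.sum())}

def recognize(features):
    """Recognition: assign a label to the described object."""
    return "object" if features["area"] > 0 else "background"

label = recognize(describe(segment(preprocess(acquire()))))
```

Each stage consumes the previous stage's output, mirroring the acquisition-to-interpretation flow in the text.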

 The origins of digital image processing

Figure 1 Digital picture produced in 1921

Figure 2: Digital picture made in 1922


Figure 3 Unretouched cable picture of Generals Pershing and Foch.

Why Digitize?
 Good quality for storage and transmission.
 Interactivity.
 Variable-rate transmission on demand.
 Easy software conversion from one standard to another.
 Integration of various video applications.
 Editing capabilities, such as cutting and pasting, and zooming.
 Removal of noise and blur.
Elements of Digital Processing
 Applications: multimedia, DSC, remote diagnosis, video conferencing, VOD, surveillance, ...
 Acquisition: sensing and digitizing.
 Storage: short-term, on-line, and archival.
 Processing: software on general-purpose or dedicated computers, hardware boards.
 Communication: PSTN, ISDN, wireless, the Internet.
 Display: TV monitors, slides/photos, CRT, printers.
1.3. Classification:
Image classification refers to the task of extracting information classes from a multiband
raster image. The resulting raster from image classification can be used to create thematic


maps. Depending on the interaction between the analyst and the computer during
classification, there are two types of classification: supervised and unsupervised.
1.3.1. Supervised classification:
Supervised classification uses the spectral signatures obtained from training
samples to classify an image. With the help of the Image Classification toolbar, you
can easily create training samples to represent the classes you want to extract. You can
also easily create a signature file from the training samples, which is then used
by the multivariate classification tools to classify the image.
1.3.2. Unsupervised classification:
Unsupervised classification finds spectral classes (or clusters) in a multiband
image without the analyst's intervention. The Image Classification toolbar aids in
unsupervised classification by providing access to the clustering tools.
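To sketch what cluster-finding means in practice, here is a minimal k-means in Python/NumPy. This is an illustration on synthetic pixel data, not the toolbar's actual clustering implementation:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Minimal k-means: group pixel feature vectors (e.g. band values)
    into k spectral clusters with no training labels."""
    # Deterministic init: spread the initial centres across the data.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "spectral" groups of 3-band pixels.
rng = np.random.default_rng(0)
dark = rng.normal(20, 2, (50, 3))
bright = rng.normal(200, 2, (50, 3))
labels = kmeans(np.vstack([dark, bright]), k=2)
```

No labels are supplied; the two spectral groups are discovered purely from the pixel values, which is the essence of unsupervised classification.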
1.4. Challenges and Statements:
We discuss the major challenges faced during classification:
1. Illumination.
2. Deformation.
3. Occlusion.
4. Background Clutter.

1.4.1. Illumination:
Illumination problems have been an important concern in many image processing
applications. The pattern of an image's histogram carries meaningful features;
hence, in the process of illumination enhancement, it is important not to destroy such
information.

Q: How does it affect our project?


If the light is low, there can be a problem in seed classification; and if the seed is
deformed, that is a further problem for seed classification.
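One standard remedy for poor lighting is histogram equalization, which spreads the intensity histogram over the full range without destroying its overall pattern. Below is a hedged Python/NumPy sketch on a synthetic dim image (the project itself uses MATLAB, where histeq performs the equivalent operation):

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image.

    Maps intensities through the cumulative histogram so that a dim
    or washed-out image uses the full 0-255 dynamic range.
    """
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalization lookup table built from the cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[image]

# A dim image whose pixel values only span 40..80.
dim = np.random.default_rng(0).integers(40, 81, size=(32, 32), dtype=np.uint8)
stretched = equalize_histogram(dim)
```

The relative ordering of intensities (the histogram's shape) is preserved, so the meaningful features mentioned above survive while the low-light range 40..80 is stretched to cover 0..255.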


Figure 4 some images for different types of illumination.

1.4.2. Deformation:
As a subcategory or field of digital signal processing, digital image processing has many
advantages over analog image processing. It allows a much wider range of algorithms to
be applied to the input data and can avoid problems such as the build-up of noise and
signal distortion during processing.

Q: How does it affect our project?


If the seed is deformed or the picture is not aligned, it will be misclassified.

Figure 5 Some Images for Different Types of Deformation.

1.4.3. Occlusion:
A common issue when tracking an object in an environment with many moving objects is
occlusion. Such a tracking system includes algorithms for foreground object segmentation,
color tracking, object specification, and occlusion handling.

Q: How does it affect our project?


If only part of a seed is present, say a broken or half seed, this also affects our
project, because we have to train our algorithm accordingly.
A person seeing part of a seed can easily tell that it is a seed, but we must train
our algorithm to recognize it the same way we do.


Figure 6 some images for occlusion.

1.4.4. Background clutter:


Removing random clutter (also called "noise") from digital images is an important aspect
of two-dimensional digital image processing. In most real-world images, objects set
against a cluttered background containing trees, road views, buildings, people, etc.
tend to yield noisy data or lead to problems of clarity. The background features cover
the major portion of the image.

Q: How does it affect our project?


The foreground object, the seed, can look quite similar in appearance to the background.
So we have to train our algorithm accordingly, and this is a very challenging
issue.

Figure 7 some snaps regarding background clutter.

In our case, seeds have different types, all of different qualities, shapes, and
colors, so we must handle all these variations.
This is a really challenging problem. It is easy for our brain to differentiate, because
so much of the brain is specifically tuned for dealing with such things, but if we want
our computer programs to deal with them, we have to train our algorithm accordingly.
The images are photos of scenes taken from different angles, positions, and lighting
conditions, and these variations make this a challenging task.


1.5. Attempts Made:

Figure 8 Black and white plot of image

% Estimate the local average brightness, subtract it, and threshold
% the result to obtain a black-and-white plot of the seed image.
mean_image = imfilter(image, fspecial('average', [15 15]), 'replicate');
subtract = image - (mean_image + 20);
black_white = im2bw(subtract, 0);
subplot(1,2,1); imshow(black_white); title('Threshold Image');
subplot(1,2,2); imshow(image); title('Average Quality');
figure
imshow(readimage(imds, Average_Quality))
1.6. Project Goals & Objectives:

1.6.1. Project Goal:


The system should be usable for any kind of seed grain: by employees for seed
recognition in multinational companies, and by the common person for general seed
recognition in the market.

1.6.2. Project Objectives:

1. Identify the quality of seeds.


2. It must be reliable.
3. It should be accurate.
4. It should give different and accurate results for different qualities of seed.
5. It should be helpful to the agriculture department.


1.7. Scope of Research:


1. Input (Seed Sample).
2. Output (Quality of Seed).
3. Data sets for seed images.
4. Data sets for quality of images.
5. MATLAB is used as the platform; the algorithm generates results for seeds on
the basis of quality.

Our study detects seed quality using modern computational techniques and
machine learning. In this work, a quick survey of publicly available evaluation
metrics is given, and a comparison with benchmark results, which are widely used for the
quantitative evaluation of image-processing research, is described.
The suggested solution is computationally low-cost, as it does not need any special
software for performing different operations on images; the quality of seed images can
be assessed entirely in MATLAB. Tests were performed on the Seeds database.
The test results were successful, indicating that the proposed method is a new solution
for use in real-world practice. It generates stable results even when we face
different challenges such as illumination, deformation, occlusion, background clutter,
and intraclass variation of the original image, and in the case of different scales and
the presence of shadows on seed images from local lighting. Furthermore, the method
derives the quality assessment directly from the test images, and thus captures
the information about a seed.


Chapter 2

Literature Review

2.1. Introduction:
Artificial intelligence techniques have multi-dimensional applications in the field of
image processing and massive academic and business potential [1]. Images contain different
characteristics such as illumination, deformation, occlusion, intraclass variation, and
background clutter; they are used in different types of classification processes and have
applications in fields such as health and agriculture. This project evaluates the seed
quality of agricultural products using image processing and modern computational techniques.
Different researchers have worked in the field of agriculture using deep learning, the
KNN algorithm, 3D plotting, seed germination, and Bayesian algorithms. CNNs work with
many layers and show excellent results, but particularly for semantic segmentation a
two-stage process is mostly used, because it combines good local pixel-wise features
with a global graphical model [2]. Different methods building on this two-stage process
have improved its performance.
2.2. Work in Image-processing:
2.2.1. PASCAL VOC:
Alexander G. Schwing and Raquel Urtasun [3] converted this process into a single joint
training algorithm for semantic image segmentation, and the results were encouraging on
the challenging PASCAL VOC 2012 dataset.
2.2.2. VO Tracking Module:
C. Dubravko et al. [4] present a tracking algorithm and an unsupervised video object (VO)
segmentation method based on an adaptable neural-network architecture. The method
comprises two modules: a VO tracking module and an initial VO estimation module. The
object tracking algorithm is implemented through a network classifier and handled as a
classification problem. It provides better results than conventional motion-based
tracking algorithms, because network adaptation is achieved using cost-efficient weight


updating algorithms that cause low degradation of the network's old knowledge. For
this method, a retraining set is constructed based on the initial VO estimation
results. Two different scenarios are investigated: the first is concerned with
human entities in video-conferencing applications, where human face and body detection
based on Gaussian distributions is achieved, while the second uses depth information
for identifying generic VOs in stereoscopic video sequences, where segmentation fusion
is obtained using color and depth information. A decision mechanism is also used to
detect the time instances for weight updating. Comparative results show good performance
even with complicated content such as object bending and occlusion.
2.2.3. RGB-D:
C. Couprie et al. [5] used RGB-D inputs for multi-class segmentation of indoor scenes,
applying a multi-scale convolutional network that learns features directly from the image
and depth information. They obtained a state-of-the-art accuracy of 64.5% on the NYU-v2
depth dataset. They illustrate that, using hardware such as an FPGA, the labeling of
indoor scenes in video sequences could be processed in real time.
2.2.4. Simultaneous Detection and Segmentation:
B. Hariharan et al. [6] proposed a task called Simultaneous Detection and Segmentation
(SDS), which detects each instance of a category in the image and then marks the pixels
that belong to it. Unlike classical semantic segmentation, SDS requires individual object
instances and their segmentation. They introduced a novel architecture for SDS and
applied category-specific, top-down figure-ground predictions to refine the bottom-up
proposals. On semantic segmentation, they show a 7-point boost (16% relative) over the
SDS baseline and a 5-point boost (10% relative) over the state-of-the-art, as well as
good performance in object detection.
2.3. Deep Networking:
F. Seide et al. [7] addressed the problem of object detection using DNNs. A DNN is more
than a classifier: it can clearly delineate objects of various classes. Using certain
network applications, it can produce high-resolution object detection at a low cost.
2.4. Agricultural Research Institute:
Improved varieties of chickpea, pigeon pea, and mungbean developed by IARI have
contributed considerably to rainfed crop production. These varieties are of short
duration and most suitable


for crop rotation, leading to an increase in food-grain production and an improvement in
the protein status of the Indian diet. Chickpea varieties Pusa 1105, 1108 and 2024 are
highly adaptable, high-yielding Kabuli types, while Pusa 362, 1103, 372 and BGD 72 are
widely adaptable desi varieties.
2.5 Image Enhancement

R. Gavhale & Gawande built a model for the identification of different diseases in plants
from plant leaves, using several image preprocessing techniques. Their mechanism consists
of five stages. At step one, they used a camera to capture the initial image sets and then
preprocessed these sets to enhance the images and the color space. The infected regions of
the images were segmented by applying edge-, region-, and threshold-based segmentation
techniques, and then texture, color, and shape features were calculated. Finally, an NN
classifier is used for texture-feature classification [8]. Deshpande et al. presented a
graded method for automatic disease recognition in pomegranate fruit. They process the
image and identify the disease after resizing, enhancing, correcting, and removing the
shadow of the images. The k-means technique was applied for the detection of the affected
parts of the leaves. Their mechanism provided good accuracy and reliable identification
of the diseases [9].
Many other researchers have worked on preprocessing schemes that detect plant diseases using
fuzzy logic, in particular disease detection in watermelon leaves. That approach detects two
major disease classes, downy mildew and anthracnose, using an RGB color extraction technique,
with detection accuracies of 67% and 70% respectively [10]. The use of mobile phones is
increasing rapidly, and farmers and botanists now benefit from mobile agriculture informatics,
which aids agricultural statistics, monitoring, and botanical research. To preserve nature, it is
important to keep it alive and protect it; for this purpose, leaf information is gathered and
maintained. Automated plant leaf image informatics includes acquiring leaf images,
preprocessing them, and performing feature extraction and leaf learning/matching. To provide a
solution on mobile devices, the Relative Sub-image Sparse Coefficient (RSSC) algorithm is
proposed for classifying plant species over a compact vector. The results of the RSSC algorithm
are then combined with Gray Level Co-occurrence Matrix (GLCM) features using best-Nearest
Neighbor (best-NN). The dataset they
used comprised leaves of three kinds, i.e. the Flavia, ICL, and Diseased leaf datasets. To
demonstrate the high accuracy of the proposed algorithm, it is compared with several other
techniques. Keeping in view the portability and mobility of mobile phones, an Android-based
mobile client-server architecture is also designed for plant leaf analysis [11].
The utilization of plants by humans is a known fact, and the increase in plant diseases is a threat
to food security. Many measures have been taken to protect these plants, including integrated
pest management approaches. The most important step in controlling diseases is to identify them
correctly in the first place. Among technologies such as internet-based disease identification,
smartphones have emerged as the most convenient and efficient tool. In recent years, image
processing and neural networks have been used for the classification of plant image datasets.
Crop disease is identified by taking an image of the diseased plant as input and producing the
crop disease as output using deep neural networks. For this purpose, a model was trained and its
performance measured by its ability to predict the correct crop-disease pair, given 38 possible
classes. Using this model, the crops and diseases were classified correctly from the 38 possible
classes in 993 out of 1000 images. The two architectures found most suitable for classification
are AlexNet and GoogLeNet, which were designed in the context of the "Large Scale Visual
Recognition Challenge" for the ImageNet dataset. The research was conducted on the
PlantVillage dataset collected in three different versions: colored images, grayscale images, and
segmented leaf images. The colored version of the dataset yielded a much higher performance in
terms of the model's ability to perform. In the conducted experiment it was assumed that the
model would detect the crop species and the disease status simultaneously. Even after many
careful experiments, limitations remain that still need to be addressed [12].
The old ways of identifying diseases in plants have been very poor in terms of efficiency and
cost. Plants such as soybean suffer leaf diseases like Septoria brown spot, bacterial leaf blight,
and bean leaf mottle, which destroy fields and cause losses to farmers. For the effective
classification of diseases, the grid paper method was previously found appropriate, but it was
time-consuming and inefficient. From the 1970s onwards, research in agricultural engineering
imaging improved with newer processing technologies, finding many ways to classify and
monitor product quality, check crop growth rates, and identify plant disease. The research
described here introduces image-analysis technology for estimating the severity level of soybean
disease based on the diseased


area, and compares the results with manual scoring using the Kentucky diagram key. Severity is
calculated as the ratio of the affected diseased area to the total leaf area, and is assessed through
image segmentation and k-means clustering. After successful clustering, the leaf image is stored
and used as a reference image; once results are to be concluded, the reference image is subtracted
from the base image. A comparison was also conducted to highlight the significance of image
analysis in contrast to manual scoring. The results show that this method is more suitable for
identifying disease severity and is even useful in pesticide control applications [13].
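The severity computation described above can be sketched as a short function. This is a minimal illustration (the function name and toy masks are my own assumptions), assuming binary masks for the leaf and the diseased region have already been produced by segmentation:

```python
import numpy as np

def disease_severity(leaf_mask: np.ndarray, disease_mask: np.ndarray) -> float:
    """Severity as the ratio of diseased pixels to total leaf pixels.

    Both masks are boolean arrays of the same shape, e.g. produced by
    k-means clustering or thresholding as described above.
    """
    leaf_pixels = np.count_nonzero(leaf_mask)
    if leaf_pixels == 0:
        return 0.0
    return np.count_nonzero(disease_mask & leaf_mask) / leaf_pixels

# Toy example: a 4x4 leaf with 4 diseased pixels out of 12 leaf pixels.
leaf = np.array([[0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 1, 1, 0]], dtype=bool)
spots = np.zeros_like(leaf)
spots[1:3, 1:3] = True
print(disease_severity(leaf, spots))  # → 4/12 ≈ 0.333
```

In practice the masks would come from the clustering and subtraction steps the paper describes, rather than being written by hand.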

The main purpose of this system is to design an Android-based image processing solution for
finding and classifying plant leaf diseases. The system will be an Android application that can
run on any Android-based smartphone. With the help of an Internet connection, the system
detects the disease and suggests a treatment for it. The system also has an admin who is liable
for managing the dataset of infected plants and keeping a record of the proper treatment of each
detected disease. The farmer captures an image with an Android phone and submits it for
analysis, after which image processing is done in two phases. In the first phase, noise is removed
from the image; in the segmentation phase, the image is divided into 8-pixel blocks and further
processing is done on each block. After this division, a histogram is generated for each block
and stored in a two-dimensional matrix. A distance-measure equation is then used to find the
similarity between two inputs, which provides the actual result.
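The block-histogram pipeline just described can be sketched roughly as follows. The function names, block size, bin count, and Euclidean distance choice are illustrative assumptions, not the project's actual code:

```python
import numpy as np

def block_histograms(gray: np.ndarray, block: int = 8, bins: int = 16) -> np.ndarray:
    """Split a grayscale image into block x block tiles and store one
    normalized histogram per tile as a row of a 2-D matrix."""
    h, w = gray.shape
    rows = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = gray[y:y + block, x:x + block]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
            rows.append(hist / hist.sum())
    return np.array(rows)

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two block-histogram matrices;
    smaller values mean more similar images."""
    return float(np.linalg.norm(a - b))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(distance(block_histograms(img), block_histograms(img)))  # identical images → 0.0
```

Comparing a submitted image's matrix against stored disease templates and taking the smallest distance would yield the match.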
S. Abirami et al. [14] proposed a system for guava plant leaf disease detection using image
preprocessing techniques. In the first stage, the captured images are resized and their contrast is
improved. In the segmentation stage, the image is divided into various segments on the basis of
certain features or resemblances. Segmentation is done using different methods; for example, the
region growing segmentation method partitions the image into regions, and a threshold method
is applied to achieve the goal using an image processing application. They developed a system
consisting of five steps: image acquisition, preprocessing, segmentation, feature extraction, and
classification. In the first step, the image is acquired using a camera, with the infected guava leaf
placed on a white surface without light reflection. In the color transformation stage, the diseased
part of the color image is detected using YCbCr (luma Y with blue-difference Cb and
red-difference Cr chroma) and CIELAB (a device-independent color space). With these
transformations, the disease-affected parts of the leaf are detected clearly. In the feature extraction


stage, different features of the leaf are extracted using methods such as SIFT (Scale Invariant
Feature Transform). The extracted features are used in classification, for which SVM and KNN
are applied. Finally, the incidence of diseases on the plant leaf is assessed.
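The YCbCr transformation mentioned above can be sketched with the standard BT.601 conversion. The threshold idea and the sample pixel values below are purely illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """BT.601 RGB -> YCbCr conversion; Y is luma, Cb/Cr are the
    blue- and red-difference chroma channels used to isolate lesions."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# Brown lesions have a higher red-difference (Cr) than healthy green tissue,
# so a simple Cr comparison (pixel values chosen for illustration) flags them.
pixel_green = np.array([[[40, 160, 40]]], dtype=np.uint8)
pixel_brown = np.array([[[150, 75, 20]]], dtype=np.uint8)
print(rgb_to_ycbcr(pixel_green)[0, 0, 2] < rgb_to_ycbcr(pixel_brown)[0, 0, 2])  # True
```

A threshold on the Cr (or Cb) channel then separates candidate diseased pixels from healthy green tissue.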

2.5.1 Color Based Transformation

Gavhale et al. presented a framework for identifying the disease-affected parts of citrus leaves.
They recognize the disease using image preprocessing techniques including image enhancement,
an RGB color vector transformation scheme, and the K-means method. They focused on
implementing feature extraction and recognition to identify leaf diseases using the proposed
image analysis and classification schemes. The GLCM method is used for texture and color
feature identification, and finally an SVM classifier classifies the detected disease [15]. Kajale
worked with image preprocessing patterns for automatic disease recognition. Their study
covered five diseases, namely soft mold, early blight, late blight, gray mold, and minor
achromatic discoloration. The structure has four main stages: first, the preliminary images are
taken and converted via an HSI color space transformation; then the green pixels are masked out
using specific threshold values; for segmentation, the affected images are processed with the
K-means method; and finally, the texture features are extracted using SGDM. The plant leaf
infections are assessed through this texture investigation [16].
This paper proposed a color-transformation-based technique for disease detection on plant
leaves. The effects of the HSI, YCbCr, and CIELAB color space schemes were compared in the
recognition of spot diseases. A median filter is applied for image smoothing, and finally the Otsu
technique is deployed on the color component, with the computed threshold used to extract the
disease spots. Various sources of noise, such as the background, camera flash, and leaf veins,
appear in the experimental results; the CIELAB color space is applied to reduce these noises [17].

2.6 Segmentation Based Techniques


2.6.1 K-means Clustering

Rishi & Gill presented and contrasted schemes that use the Otsu technique, cropping and image
compression, and K-means clustering to prepare the lesion images. NN classifiers such as PNN,
RBF, BPNN and GRNN are used for the classification of grape and wheat diseases. The rice


and cotton leaf diseases are detected using Canny and Sobel filters, with the extracted features
passed on to classify the diseases. Other diseases, such as apple fruit lesions, rubber tree leaf
lesions, chili plant lesions, and orchid leaf lesions, are detected using fuzzy logic and multi-class
SVM classifier techniques, among others [18].
This paper presented a novel technique to detect the affected regions of a plant leaf. The
K-means clustering technique is used to obtain different cluster sets based on the ROI method
[19]. [20] presented work on the detection and diagnosis of grape leaf diseases with the help of
image processing methods and artificial intelligence techniques. First, leaf images were captured
with a digital camera. Green pixels are masked by thresholding, and preprocessing is done with
the anisotropic diffusion method to remove noise from the leaves. Disease segmentation is
performed using K-means clustering, texture features of the affected portion are extracted with
the Gray Level Co-occurrence Matrix, and a neural network is used for classification.
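The K-means segmentation step used in these papers can be sketched as a plain NumPy implementation. The farthest-point initialization and the toy pixel values are my own illustrative choices, not taken from the cited work:

```python
import numpy as np

def kmeans_segment(pixels: np.ndarray, k: int = 2, iters: int = 20) -> np.ndarray:
    """Plain k-means over an (N, 3) array of pixel colors; returns one
    cluster label per pixel. A lesion cluster could then be picked out,
    e.g. as the cluster with the lowest mean green value."""
    # Farthest-point initialization keeps this small example deterministic.
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()].astype(float))
    centers = np.array(centers)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)               # assign each pixel to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)  # recompute centers
    return labels

# Toy data: 50 "healthy" green pixels and 10 "diseased" brown pixels.
greens = np.tile([40.0, 160.0, 40.0], (50, 1))
browns = np.tile([150.0, 75.0, 20.0], (10, 1))
labels = kmeans_segment(np.vstack([greens, browns]))
print(labels[:50].max(), labels[50:].min())  # greens → cluster 0, browns → cluster 1
```

Reshaping an H×W×3 image to an (H·W, 3) pixel array, labeling it, and reshaping the labels back gives the segmented regions the papers describe.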
The leaf of a plant has great significance in disease identification: its shape, texture, and color
indicate the disease the plant is suffering from, and therefore serve as a basis for the
classification and recognition of crop diseases. The diseases chosen for this study are those
found in cucumber. Sparse Representation based Classification (SRC) is used to recognize the
diseases, so the test samples take the form of sparse linear combinations, and the corresponding
optimization problem is designed, solved, and computed. The K-means clustering algorithm is
then used for image segmentation, which involves collecting data, converting colors, classifying
the colors, labeling pixels, and then selecting a liaison image from the clustered image sets.
After segmentation comes disease recognition, which includes preparing the data and
performing color feature extraction, followed by shape feature extraction and feature
combination, once the optimization problem is solved. Seven different cucumber diseases are
recognized by this process. The computed results show that the proposed method provides high
accuracy and efficiency while also reducing computational cost [21].

2.6.2 Thresholding

Zhihua et al. offered a segmentation scheme built on area thresholding and color features, which
are used to distinguish disease spots; the efficiency of range thresholding is evaluated using
diverse black thresholds [22]. Badnakhe & Deshmukh compared two image processing
procedures, K-means clustering and Otsu thresholding, and found that the K-means method gives better


results than the Otsu threshold method [23]. Phadikar et al. suggested an automatic system for
rice leaf diseases built on morphological operations, applied to recognize leaf blast and brown
spot in rice plants. The radial distribution of hue from the center to the boundary of the spot
images is used as the feature to identify lesions with SVM and Bayes classifiers. Feature
extraction for the rice leaf diseases was accomplished in the following stages: first, images of
infected rice leaves are captured in the field; then, a mean filter is applied to remove noise from
the diseased leaves and improve image quality; lastly, the Otsu segmentation method is used to
extract the affected parts of the images. The classification has two steps: in the first, healthy and
diseased leaves are distinguished based on the number of peaks in the histogram; in the second,
the diseased leaves are classified using Bayes and SVM classifiers. The system gave accuracies
of 79.5% and 68.1% for the Bayes and SVM classifiers respectively [24].
A. K. Dey, M. Sharma, and M. Meshram [25] developed image processing algorithms for the
detection of leaf rot disease by identifying color features. The proposed methodology consists of
three stages. The first stage is image acquisition, in which an image of 21 x 30 sq. cm is acquired
with a digital camera. The second stage is image preprocessing, in which the test leaf image is
trimmed to a smaller size of 16 x 20 sq. cm; the cropping causes no loss to the image or the area
of interest. The third stage is segmentation, in which color features of the selected image are
used to differentiate the rotted leaf area from the healthy leaf area. The HSV color model
provides a clear perception of the rotted leaf part. Image segmentation is done by applying the
Otsu method, with the threshold value found on the H component of the HSV model. To find the
rotted leaf area, the number of white pixels is multiplied by an identified correction factor.
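The rotted-area estimate of Dey et al. (threshold on the H channel of HSV, then white-pixel count times a correction factor) can be sketched roughly as below. Here the hue threshold and calibration factor are passed in as illustrative assumptions rather than computed by Otsu's method as in the paper:

```python
import colorsys
import numpy as np

def rotted_area(rgb: np.ndarray, h_threshold: float, cm2_per_pixel: float) -> float:
    """Estimate rotted leaf area: threshold the hue channel of the HSV
    image, count the resulting foreground ('white') pixels, and scale by
    a calibration factor mapping pixels to physical area."""
    h = np.array([[colorsys.rgb_to_hsv(*(px / 255.0))[0] for px in row] for row in rgb])
    rotted = h < h_threshold          # rotted tissue has a lower (red/brown) hue
    return float(rotted.sum() * cm2_per_pixel)

img = np.array([[[40, 160, 40], [150, 75, 20]],
                [[40, 160, 40], [150, 75, 20]]], dtype=np.uint8)
# Green hue ≈ 0.33, brown hue ≈ 0.07; a cut at 0.2 separates them here.
print(rotted_area(img, h_threshold=0.2, cm2_per_pixel=0.05))  # 2 pixels × 0.05 ≈ 0.1
```

The per-pixel `colorsys` loop is only for clarity; a vectorized HSV conversion would be used on real images.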

2.7 Feature Extraction Based Techniques


Feature extraction is a kind of dimensionality reduction approach that effectively yields the most
informative components of images. There are three kinds of features, namely texture, color, and shape.
2.7.1 Detection of diseases using texture features

Sanjay and his team worked on texture features of different plant leaves, comparing the texture
features of affected leaves with those of normal leaves. There are four main steps in this
mechanism. The transformation starts with RGB image capture, the color generation process,
with HSI acting as the color descriptor. Segmentation of the images is carried out using the
green pixels under a selected threshold value. In the last step, the SGDM method is used for the extraction of the


texture features [26]. K. Lalitha suggested an image handling method using classification and
feature extraction. The leaf surface images were first computed from RGB images, then the HSI
color space region was generated with the CCM method, which uses color and texture to obtain
distinct features from the picture. All HSI images were used to create the SGDM
feature-extraction model. Once the SGDMs were formed, a total of thirty-nine texture features
was obtained from the sampled citrus leaves, and classification tests were run with four
dissimilar classification approaches [27]. S. Arivazhagan et al. delivered a system to identify
unhealthy areas of plants based on texture features. Initially, the images are transformed with an
HSI color space transformation modeled on human perception. Texture features such as energy,
homogeneity, cluster prominence, cluster shade, and contrast are extracted. Finally, the extracted
features are classified by means of an SVM classifier. SVM is a family of significant supervised
learning procedures developed for regression and classification, and detection accuracies are
improved using it. With this method, plant traits can be predicted at an initial stage, and pest
control tools can be applied to manage pest issues while decreasing risks to people and the
environment [28].
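The GLCM/SGDM-style texture descriptors cited above (contrast, homogeneity, energy) can be sketched with a small NumPy implementation. The quantization to 8 gray levels and the single displacement are illustrative simplifications:

```python
import numpy as np

def glcm(gray: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> np.ndarray:
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized into a joint probability table. `gray` must already be
    quantized to values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[gray[y, x], gray[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p: np.ndarray) -> dict:
    """Contrast, homogeneity and energy — texture descriptors of the
    kind the cited papers extract from the co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "homogeneity": float((p / (1.0 + (i - j) ** 2)).sum()),
        "energy": float((p ** 2).sum()),
    }

flat = np.zeros((8, 8), dtype=int)          # perfectly uniform texture
print(texture_features(glcm(flat)))         # contrast 0, homogeneity 1, energy 1
```

Several displacements and angles would be accumulated in practice, and the resulting feature vectors fed to the classifier.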
This technique was dedicated to identifying visual indications of diseases in cotton leaves. It
enhanced the PSO feature selection approach using a skew divergence scheme. Edge, texture,
and color features were calculated and given as input to SVM, BPNN, and GA feature
extraction, along with fuzzy logic with edge CMYK color spatial feature selection, to discover
the plant diseases. The scientists targeted cotton leaf diseases such as root rot, fusarium wilt,
verticillium wilt, micronutrient deficiency, bacterial blight, and leaf blight. However, its
complexity is higher than other methods, and the method is not appropriate for the monocot
plant family [29]. Smita described a histogram matching practice to recognize plant infections.
A layer separation approach is practiced for the preparation process, separating the image into
red, green, and blue layers, and an edge detection approach detects the edges of the layered
images. SGDM structures were operated for the CCM texture investigation [30].
Bernardes et al. offered a method for cotton leaf infections using texture analysis. The wavelet
transform method is used for feature extraction and an SVM classifier is used for classifying the
images. The diseased leaves are classified into four classes, NONE, AS, MA, and RA, with
accuracies of 71.4%, 97.1%, 80%, and 96.2% respectively for the AS, MA, RA and NONE classes [31].

2.8 Classifiers Based Techniques


2.8.1 Support Vector Machine

A lot of measures have been taken so far for the protection of plants and the identification of
their various diseases. The use of computer vision is now considerably more researched than the
old domestic ways. Current work processes images of plant diseases affecting agricultural crops;
the diseases first need to be correctly detected, identified and quantified. Fungal, bacterial, viral,
nematode, deficiency, and normal are the classes used as samples in the recognition and
classification phase, with 900 images (150 samples of each class) taken as sample data. Once
acquired, the images are preprocessed and filtered by applying shade correction, removing
artifacts, and formatting. They then undergo color and texture feature extraction using extraction
algorithms, which consequently train the ANN and SVM. SVM is chosen because of its high
efficiency and performance rate. The sample data is divided into training and testing sets for
further processing. Although SVM and ANN each have independent significance in the
identification and classification of plant diseases, the SVM classifier is still found to work better [32].
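The SVM classification stage these papers describe can be sketched with scikit-learn (assumed to be available). The two-dimensional features and class means below are invented toy values, not data from the cited studies:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D feature vectors (e.g. mean hue, lesion-area ratio)
# for two classes: 0 = healthy, 1 = diseased. Real features would come
# from the color/texture extraction stage described above.
rng = np.random.default_rng(1)
healthy = rng.normal([0.35, 0.05], 0.02, size=(40, 2))
diseased = rng.normal([0.08, 0.40], 0.02, size=(40, 2))
X = np.vstack([healthy, diseased])
y = np.array([0] * 40 + [1] * 40)

# RBF kernel, a common choice in the cited work.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

sample = np.array([[0.10, 0.38]])  # low hue, high lesion ratio → diseased
print(clf.predict(sample))
```

A held-out test split (rather than training data) would be used to report the accuracies the papers quote.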

Yuan et al. used multiple SVM classifiers to recognize diseases in wheat leaves. They selected
four main wheat plant diseases: leaf rust, Puccinia striiformis, powdery mildew, and leaf blight.
Their training set extracts leaf color, shape, and texture from the images and passes these to the
three classifiers. The framework has three major components: input of initial images/data,
feature extraction, and processing by the classifier. The MCS scheme combines a number of
different classifiers with varying classification accuracy [33]. Ashraf et al. identified oil palm
leaf diseases using a kernel-based SVM classifier. Their framework worked with three kernels:
a polynomial kernel with soft and hard margins, and a linear kernel. They achieved 95%
accuracy in identifying oil palm leaf diseases [34].
Radhiah presents a prototype model to recognize paddy diseases, including narrow brown spot,
paddy blast, and brown spot. The paddy disease images are passed through a binary
transformation process, followed by RGB calculation. The segments of the paddy images are
then converted to binary data in an Excel file format for the training and testing steps. Using a
neural network scheme, they achieved an accuracy rate of 92.5% [35].


India is one of the largest countries that economically relies on its agriculture, and paddy plants
are a very useful source in the production of food staples. The care of such plants and keeping
them disease-free is therefore very important, since infected plants would affect the growth rate
negatively. This research focuses on the identification and classification of paddy plant diseases
using image processing. Brown Spot Disease (BSD), Leaf Blast Disease (LBD), and Bacterial
Blight Disease (BBD) are the diseases that need to be detected accurately so that they can be
treated in time with appropriate fertilizers. For disease detection, 60 infected plants were taken
as training data, and the total number of images taken into account was 90, with 30 images in
each class. The diseases are detected using Haar-like features and an AdaBoost classifier, and
then recognized using the Scale Invariant Feature Transform (SIFT) feature. The features
obtained from SIFT are then used for image recognition through Support Vector Machine
(SVM) and k-Nearest Neighbor (k-NN) classifiers. The resulting accuracy of up to 91.10%
shows that such disease detection and recognition can help save plants early from their problems [36].
Guava is a healthy fruit containing many vitamins and minerals that are good for the human
body, and even guava leaves have their own medical significance. The protection of such fruits
and plants, beneficial for both the human body and the country's economy, is very important. To
detect and classify diseases in guava leaves, image processing is used. Various images were
downloaded from websites as a sample dataset of diseased leaves. After image acquisition, the
images are preprocessed and region growing segmentation is applied. The diseased part of the
color image is detected by color transformation, after which the Scale Invariant Feature
Transform (SIFT) is used for feature extraction. These extracted features are used as the basis
for classification, with SVM and k-NN classifiers applied to classify the various diseases. The
results show that diseases such as Algal leaf spot have 88%, Rust 96%, Curl 92%, Powdery
mildew 92%, Powdery mildew 92%, and Viburnum chindo 92% accuracy with k-NN
classification, and 98.2%, 98.2%, 95.45%, 95.45%, 98.2% accuracy with SVM classification.
The provided solution works alongside all classifiers and any provided dataset, and the results
show that the accuracy rate of SVM is a little higher than that of k-NN [14].
Rice is one of the most utilized and valuable crops, and the protection of the rice crop from
diseases is very important. The outdated ways of detecting diseases in crops not only required
expertise but were also time-consuming and less efficient. Along with other crops, rice crops are
also benefiting from pattern recognition, machine learning, and image processing. The convolutional neural network


has been proposed for the identification of disease in rice crops. It provides a feasible way to
detect disease using a camera and is found to be the most optimal for classification. For training
the CNNs, the gradient-descent algorithm is applied. The technique uses a dataset of 500 natural
images of diseased and healthy rice leaves and stems, and is able to identify 10 different rice
diseases. It provides higher recognition accuracy compared to the standard BP algorithm, SVM,
and particle swarm optimization models. For sub-sampling in the CNNs, a stochastic pooling
layer is used, as it provides an efficient way of disease classification and detection, and softmax
regression is useful for the multiclass classification. The models are trained with a
gradient-descent algorithm. As this classification process completes successfully, the images of
diseased crops are identified and preprocessed, and the model is applied to the disease
recognition problem. The results show that the accuracy obtained after complete
experimentation is much higher than that of other machine learning models [37].

2.8.2 Artificial Neural Network

Sannakki used the BPNN classifier to identify plant diseases in pomegranate leaves, working
with two well-known diseases, bacterial blight and wilt complex. They resized the normal and
disease-affected pomegranate images, then applied image filtering using the LAB color space
transformation method. The affected region is identified using the K-means technique. The next
step is the computation of color and texture features, whose results are used as input for the
BPNN classifier, which is capable of exactly identifying the name of the disease that occurred.
They achieved a 97.30% accuracy rate [38].
In previous years, agrarian applications utilizing image processing strategies have been
attempted by different analysts. Feature extraction, preprocessing and segmentation, feature
reduction, and classifier-based techniques are among those previously used; the following
sections examine some past work using these strategies [39].
[40] proposed a work to diagnose disease on brinjal leaves with the help of image processing
and ANN techniques. First, they captured an image and applied histogram equalization to
increase its quality. They applied the K-means algorithm in the segmentation phase to separate
the diseased part from the healthy part. The color co-occurrence method is used for texture and
color feature extraction, after which an ANN is applied to recognize the disease.


[41] developed a simple disease detection system for plant diseases. In the first step, they
capture an image from a camera and store its features in a database. In the preprocessing stage,
image segmentation is performed using the CIELAB approach. Feature extraction is done by
means of a Gabor filter. Finally, classification is done using an ANN, achieving a recognition
rate of up to 91%.

2.8.3 K-nearest Neighbor

Savita presented a review paper discussing different classification schemes for plant leaf
diseases, reviewing SVM, KNN, ANN, PNN, PCA, genetic algorithms, and fuzzy logic.
Biological research and the agriculture field make wide use of these disease classifiers [42].
Baldomero et al. used the concept of automatic white balance in images; segmentation of the
affected region is performed using the Euclidean distance method, and the K-Nearest Neighbors
classifier is used for further classification [43].
Shitala et al. worked on a color space transformation scheme for diagnosing disease inside
plants. For segmentation of the affected region, K-means clustering is helpful. In the next step,
they applied Gabor Wavelet Transform features and GLCM, followed by a K-Nearest Neighbors
classifier [44]. S. W. Zhang et al. presented a new framework for plant leaf disease detection
using HSV color transformation. For the segmentation process, the arrangement of the
components is used to identify the affected area; color, texture, and shape structures are then
applied for feature mining, and a K-Nearest Neighbor classifier classifies the images [45].
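The k-NN classification step used in these works can be sketched directly in NumPy. The toy feature vectors and class labels are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X: np.ndarray, train_y: np.ndarray, x: np.ndarray, k: int = 3) -> int:
    """Classify one feature vector by majority vote among its k nearest
    training samples (Euclidean distance), as in the K-NN pipelines above."""
    d = np.linalg.norm(train_X - x, axis=1)          # distances to all training samples
    nearest = train_y[np.argsort(d)[:k]]             # labels of the k closest
    return Counter(nearest.tolist()).most_common(1)[0][0]

# Toy color/texture feature vectors for two hypothetical disease classes.
train_X = np.array([[0.10, 0.90], [0.20, 0.80], [0.15, 0.85],
                    [0.90, 0.10], [0.80, 0.20], [0.85, 0.15]])
train_y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(train_X, train_y, np.array([0.12, 0.88])))  # → 0
```

In the cited systems, `train_X` would hold the Gabor/GLCM or color features extracted in the earlier stages.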


Chapter 3
Proposed Architecture
3.1. Introduction:
A neural network is a model that works in the same way as the human neural system. It involves
interrelated and arranged neurons that process information for computation.
A neural network is composed of artificial neurons, which simulate biological neurons in a
limited way.

Figure 9 Image of Neuron

This is a simulation of a simple biological neuron: information flows into the neuron, is
processed, and the result flows out. This gives the neuron the ability to react based on previously
learned patterns. Technology duplicates this by creating a structure that processes information
like a biological neuron does, except that the process is mathematical instead: just as in a
biological neuron, information flows in, is processed by the artificial neuron, and the result
flows out. This single process becomes a mathematical formula that can be used for a simple problem.
The power of an artificial neural network lies in connecting sets of neurons or processing
elements together in three layers: an input layer, a hidden layer (which can be more than one layer),


and an output layer. When the layers are connected, the output of one layer becomes the input of
the next, and the same steps for a single layer are simply repeated for each layer of the neural
network.
A neural network is one of the learning algorithms used in machine learning, consisting of
different layers for analyzing and learning data. The network learns and assigns weights to the
connections between the different neurons each time it processes data.
A neural network is used for classification, which is what this project uses it for in seed
classification, because neurons in a neural network work the same way biological ones do. The
neural network takes inputs with different weights and produces an output that depends on the
given inputs. Neural networks are widely used for pattern recognition because of their ability to
generalize and respond to unexpected patterns or inputs.
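The weighted-inputs-to-output behavior described above can be sketched as a single artificial neuron; the weights, bias, and sigmoid activation here are illustrative choices:

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: weighted sum of the inputs plus a bias,
    passed through a sigmoid activation to produce the output."""
    z = float(np.dot(weights, inputs) + bias)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.3])        # incoming signals
w = np.array([0.8, -0.4])       # connection weights (illustrative values)
out = neuron(x, w, bias=0.1)    # sigmoid(0.5*0.8 + 0.3*(-0.4) + 0.1) = sigmoid(0.38)
print(out)
```

Stacking many such neurons into layers, and feeding each layer's outputs to the next, gives the layered network the text goes on to describe.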
Depending on its type/architecture, a neural network is used in many fields such as pattern/image
recognition, voice recognition, medical diagnosis, credit rating, and targeted marketing.
ANNs have different types, and every type has its own functionality and purpose of use.
3.1.1. Types of Neural Network:
In machine learning there are many types of neural network, but we used a Convolutional Neural Network for our project (CNN is explained in Section 3.2).
Major types are:
1. Feedforward Neural Network
2. Radial basis function Neural Network
3. Kohonen Self-Organizing Neural Network
4. Recurrent Neural Network (RNN)
5. Convolutional Neural Network (CNN)
6. Modular Neural Network
7. Back Propagation Neural Network (BPNN)
8. Multilayer feed-forward Neural Network
All these types are inspired by the behavior of human neurons and the electrical signals they convey between input, processing, and output in the human brain.


3.2. Convolutional Neural Network (CNN):


Convolutional Neural Network, in short ConvNet or CNN, is the most representative supervised deep learning model.
A CNN is basically a deep neural network specialized for image recognition; it performs many tasks in the same way our visual cortex works when recognizing images. The basic structure of a CNN is shown in Figure 10.

Figure 10 Basic structure diagram for CNN

3.2.1. Steps of Convolutional Neural Network (CNN):


As discussed for neural networks in general, there are input, hidden, and output layers. In a CNN, the hidden part passes the input through the CNN's own sequence of steps/layers before the output flows out. The working of a CNN is shown in Figure 11.


Figure 11 The working of the CNN model

The steps of a Convolutional Neural Network are:


1. ConvNet Layer
2. ReLU Layer
3. Pooling Layer
4. Fully Connected Layer
5. SoftMax Layer
3.2.1.1. Convolutional Layer:
The first building block of a CNN is the convolutional layer. The convolutional layer generates feature maps from the input images, and its working principle differs from that of ordinary neural network layers: it does not employ connection weights and a weighted sum. Instead, it contains filters that transform the image; we call these convolutional filters, and their values are determined by the training process. The number of feature maps equals the number of convolutional filters: if there are four convolutional filters, there will be four feature maps.
The filters of the convolutional layer are two-dimensional matrices.
The convolution operation begins at the upper-left corner of the image, taking a submatrix of the input the same size as the convolution filter; the results of the element-wise multiplications are added together, and that sum replaces the entire block in the output. For example, convolving a 4×4-pixel image with a 2×2 filter yields a 3×3 feature map.


Which feature map is extracted therefore depends on the convolutional filter being used.
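The 4×4-to-3×3 example above can be sketched in Python (the project itself uses MATLAB); the filter values here are illustrative, not ones learned by training:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution: slide the filter over every position
    where it fits fully inside the image, multiply element-wise, sum."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Submatrix the same size as the filter, multiplied and summed.
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

image = np.arange(16).reshape(4, 4)   # a 4x4 "image"
kernel = np.array([[1, 0], [0, 1]])   # an illustrative 2x2 filter
result = convolve2d(image, kernel)
print(result.shape)                   # (3, 3): 4x4 input -> 3x3 feature map
```

Changing the filter values changes which feature map is extracted, which is exactly why training determines the filters.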
3.2.1.2. Rectified Layer (ReLU):
Between the convolutional filter and the feature map there is another layer, the activation function. These activation functions are the same kind we use in an ordinary neural network: the output of the convolutional filter is passed through one.
The Rectified Linear Unit, in short ReLU, is the most popular activation function; there are also others, such as the sigmoid and tanh functions. In our project we use the ReLU activation function on the input and hidden layers.

Figure 12 graph of ReLU
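As a minimal Python sketch (illustrative only; the project applies ReLU inside the MATLAB network), ReLU zeroes out negative values and passes positives through unchanged:

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negatives become 0, positives pass through.
    return np.maximum(0, x)

values = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(values))  # negatives are clipped to 0
```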

3.2.1.3. Pooling Layer:


This is another building block of a CNN. Its operation is simple and straightforward: it reduces the spatial size of the representation, which reduces the number of parameters and the computation in the network. It operates on each feature map independently.
From a mathematical perspective, calculating mean pooling or max pooling resembles a convolution operation; the differences from a convolutional layer are that the pooling "filter" is fixed rather than learned and that the pooling areas do not overlap.


There are two types of pooling:


1. Mean Pooling
2. Max Pooling

Mean Pooling:
For mean pooling, each output value is calculated by taking the mean of the corresponding pooling area, and this process is repeated over the full image matrix.
Max Pooling:
Calculating the values of the max-pooling matrix is simple and easy: take the highest value of each pooling area to form the matrix. Max pooling is the most commonly used approach in the pooling layer.
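Both variants can be sketched in Python over non-overlapping windows (the 2×2 window size here is an illustrative assumption; the project's actual pooling sizes come from the pretrained network):

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Pool non-overlapping size x size windows of a 2-D feature map."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            window = feature_map[r*size:(r+1)*size, c*size:(c+1)*size]
            # Max pooling keeps the strongest response; mean pooling averages.
            out[r, c] = window.max() if mode == "max" else window.mean()
    return out

fm = np.array([[1, 3, 2, 4],
               [5, 7, 6, 8],
               [9, 2, 1, 0],
               [3, 4, 5, 6]])
print(pool2d(fm, mode="max"))   # each 2x2 window reduced to its maximum
print(pool2d(fm, mode="mean"))  # each 2x2 window reduced to its mean
```

Note how a 4×4 feature map shrinks to 2×2, which is exactly the parameter reduction the pooling layer provides.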
Fully Connected Layer:
After pooling is done, the feature maps are flattened and passed to the fully connected layer, which maps the extracted features to the output classes.
SoftMax Layer:
In a neural network, the SoftMax layer is implemented just before the output layer. The number of nodes in the SoftMax layer must be the same as in the output layer.
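SoftMax turns the raw class scores from the last fully connected layer into probabilities that sum to 1. A numerically stable Python sketch (the class scores below are illustrative, e.g. for bad/average/good quality):

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, exponentiate, normalize.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Three illustrative class scores (e.g. bad / average / good quality).
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # probabilities that sum to 1
```

The predicted class is simply the index of the largest probability.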

3.3 Methodology:
We collected different samples of seeds. Using the algorithm and techniques described below, the system can tell the quality of a seed: bad, average, or good. For this, we first had to create a dataset of images.
3.3.1. Hardware and Software Specification:
We use the MATLAB software, a neural network, and image classification for the Seed Snap application; these are discussed in detail in the sections below.
3.3.2. Load Images:
We load the dataset using an imageDatastore to help manage the data. Because imageDatastore operates on image file locations, images are not loaded into memory until they are read, making it efficient for use with large image collections.


The command used to create the image datastore in MATLAB:


imds = imageDatastore('ImageFolder', 'LabelSource', 'foldernames', 'IncludeSubfolders', true);

Below, you can see an example picture from one of the classes in the dataset: the first instance of an image for each category.


% We use 3 categories
% First category
Bad_Quality = find(imds.Labels == 'Bad_Quality', 1);
figure
imshow(readimage(imds, Bad_Quality))

% Second category
Average_Quality = find(imds.Labels == 'Average_Quality', 1);
figure
imshow(readimage(imds, Average_Quality))

% Third category
Good_Quality = find(imds.Labels == 'Good_Quality', 1);
figure
imshow(readimage(imds, Good_Quality))

An example is shown below.


Figure 13 Seed_image Used In project

3.3.3. Category Labels with Each Image:


The imds variable now contains the images and the category labels associated with each image. The labels are assigned automatically from the folder names of the image files.
%% Display Class Names and Counts
tbl = countEachLabel(imds)
The labels and image counts are shown in Figure 14.


Figure 14 Labels of images with the number of images in a folder

A problem arises here: imds contains an unequal number of images per category. We first adjust this so that the number of images in the training set is balanced.

% Determine the smallest number of images in a category


minSetCount = min(tbl{:,2});

% Limit the number of images to reduce the time it takes


maxNumImages = 60;
minSetCount = min (maxNumImages, minSetCount);

% Use splitEachLabel method to trim the set.


imds = splitEachLabel (imds, minSetCount, 'randomize');

% Notice that each set now has the same number of images.
countEachLabel(imds)

Figure 15 A balanced number of images for better training of the system.

How our system understands seed quality and how it differentiates seeds of each quality was a great challenge for us throughout the project.


For this purpose, we apply three steps:

1. Training
2. Testing
3. Validation
3.3.4. Prepare Training and Test Image Sets:
We apply the training/testing strategy by splitting the samples: pick 30% of the images from each set for the training data and the remaining 70% for the validation data, randomizing the split to avoid biasing the results. The training and test sets will then be processed by the CNN model.

[trainingSet, testSet] = splitEachLabel (imds, 0.3, 'randomize');

These samples and the dataset are given to the neural network so that we can train the system to classify seed quality and give a validation result. This involves a loss function (quantifying what it means to have a "good" weight matrix W) and optimization (starting with a random W and finding a W that minimizes the loss). All the results are shown and discussed in the next chapter.
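The loss-plus-optimization idea above can be sketched with gradient descent on a toy one-dimensional loss (a Python illustration only; the actual network is trained by MATLAB, and the optimum of 3.0 here is an assumed value for the demonstration):

```python
def loss(w):
    # Toy loss: how far the weight is from an (assumed) optimum of 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the toy loss with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0    # start with an arbitrary weight
lr = 0.1   # learning rate
for _ in range(100):
    w -= lr * grad(w)   # step opposite the gradient to reduce the loss
print(round(w, 4))       # converges towards 3.0, where the loss is minimal
```

Training a real network does the same thing with millions of weights and a loss computed over the labeled images.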
3.3.5. Validation:

Apply the trained classifier to one test image:

testImage = readimage(testSet,1);
testLabel = testSet.Labels(1)

% image features are extracted using activations.


ds = augmentedImageDatastore (imageSize, testImage, 'ColorPreprocessing', 'gray2rgb');

% Extract image features using the CNN


imageFeatures = activations (net, ds, featureLayer, 'OutputAs', 'columns');

% Make a prediction using the classifier


predictedLabel = predict(classifier, imageFeatures, 'ObservationsIn', 'columns')

All the results are shown and discussed in the next chapter.


3.4. Pre-process Images For CNN:


Our network processes images that are 224-by-224. To avoid re-saving all the images in this format, we use an augmentedImageDatastore, which provides on-the-fly resizing and converts any grayscale images to RGB. The augmentedImageDatastore is also used for network training.

% Create augmentedImageDatastore from training and test sets to resize


% images in imds to the size required by the network.

imageSize = net.Layers(1).InputSize;


augmentedTrainingSet = augmentedImageDatastore (imageSize, trainingSet, 'ColorPreprocessing',
'gray2rgb');

augmentedTestSet = augmentedImageDatastore (imageSize, testSet, 'ColorPreprocessing', 'gray2rgb');

In the next section we present the results of training, testing, and validation of our Convolutional Neural Network.
3.5. Project Overview statement:
This study has taken up the problem of recognizing seed images. It has reviewed the history of this problem and the known approaches introduced before this one. The method this study suggests produces a standard application for seed image recognition in which differences in image brightness intensity and in occlusion are averaged out, so that the system can understand seed quality and differentiate the seeds of each quality.


Chapter 4

Results and Discussion


4.1. Introduction:
CNN has been chosen as the research method. This chapter applies the CNN methodology explained in Chapter 3, which aims to be as fair as possible by being auditable and repeatable. The purpose of the CNN is to provide a valid and accurate answer using all of its layers, whereas the traditional review in Chapter 2 summarized the results of several earlier studies.
4.2. Pretrained Network:
There are several pretrained CNNs that have gained popularity. Most of these have been trained on the ImageNet dataset, which has 1000 object categories and 1.2 million training images. ResNet-50 is one such model, and it can be loaded using the resnet50 function of the neural network toolbox.
% Load pretrained network
net = resnet50();
4.2.1. First Attempt:
We use plot to visualize the network. Because this is a large network, we adjust the display to fit the window using the appropriate command.

Figure 16 First section of ResNet-50 model


We can categorize the images using the 126 CNN layers implemented in the model.
4.3. Assumptions and Simulations parameters:
The first layer defines the input dimensions. Each CNN has different input size requirements; the one shown in Figure 17 requires image input that is 224-by-224-by-3.

Figure 17 Image to be converted in 224-224-3

4.3.1. Inspect the First Layer:


The intermediate layers make up the bulk of the CNN. These are a series of convolutional layers interspersed with rectified linear unit (ReLU) and max-pooling layers, followed by 3 fully connected layers, as shown in Figure 18.

Figure 18 A simple code snippet


4.3.2. Inspect the Last Layer:


The final layer is the classification layer, and its properties depend on the classification task. As shown in Figure 19, the loaded CNN model was trained to solve a 1000-way classification problem, so its classification layer has 1000 classes from the ImageNet dataset. We can retrain the network to classify the categories related to our project and to image classification in general.

Figure 19 Classifier functions

4.4. Results:
In this section we present all the results obtained throughout the project:

1. Training features
2. Testing features
3. Validation or output response
4.4.1. Training Results:
Each layer of a CNN produces a response, or activation, to an input image. During feature training, values are generated for each training image; the values for some images are shown in Figure 20.

Figure 20 Values of train images


A new input image does not have the same background every time, so it is not feasible to retrain the system for every new image. We overcome this problem by using a few layers within the CNN that are suitable for image feature extraction: the layers at the beginning of the network capture basic image features, such as edges. An example is shown in Figure 21.

Figure 21 Image Result of Training features using CNN


4.4.2. Testing Results:


Figure 22 shows the testing features of the trained images.

Figure 22 Testing features results

4.4.3. Confusion Matrix:


After the test image features are extracted, the system may still confuse some images, so the test features are passed to the classifier and the accuracy on the images is measured, as shown in Figure 23.

Figure 23 The mean accuracy of the images is shown
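The confusion-matrix idea can be sketched in Python (the labels and predictions below are made-up illustrations, not the project's actual test results): each row counts how a true class was predicted, and mean accuracy is the average of the per-class accuracies on the diagonal.

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, classes):
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        m[idx[t], idx[p]] += 1   # row = true class, column = predicted class
    return m

classes = ["bad", "average", "good"]
true_y = ["bad", "bad", "average", "good", "good", "average"]  # illustrative
pred_y = ["bad", "average", "average", "good", "good", "good"]  # illustrative
cm = confusion_matrix(true_y, pred_y, classes)
# Per-class accuracy = diagonal / row sums; mean accuracy = their average.
per_class = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(per_class.mean())
```

Off-diagonal counts show exactly which qualities the classifier confuses with each other.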


4.4.4. Output or prediction Results:


The last step is to classify a new input image with the trained neural network.

testImage = readimage(testSet, 3);
testLabel = testSet.Labels(3)

Figure 24 The image to be tested, from the test set


The result for the new input image is shown in Figure 25.

Figure 25 The result of an input image


Chapter 5

Conclusion and Future Work


5.1. Conclusion:
In this project, Seed Snap fulfilled the research objective by extracting four main seed features, namely 1) shape, 2) size, 3) color, and 4) texture, for the recognition of seed samples. The Seed Snap dataset contains 626 images of different seeds covering the three quality classes. We take 60 images of every category: average, bad, and good quality. 30% of the images are used for training and 70% for validation. The CNN achieved 90-95% accuracy, which is more accurate than previous techniques.
5.2. Future Work:
As mentioned in Chapter 3, a Convolutional Neural Network works with layers. If we apply fewer layers during training or testing, the mean accuracy of the system is lower; if we apply more layers, the mean accuracy is higher. At present we are using 126 CNN layers for training and testing features. In the future, if more layers and a larger dataset are used for training, both the results and the performance will improve, and the category of a new input image will be determined more quickly.


