GRD Journals | Global Research and Development Journal for Engineering | International Conference on Innovations in Engineering and Technology (ICIET) - 2016 | July 2016
e-ISSN: 2455-5703

Diagnosis of Retinal Disease using ANFIS Classifier

1Dr. A. Umarani 2V. Madhuraveena 3K. Dhivya Bharathi
1Assistant Professor 2Student
1,2,3Department of Electronics and Instrumentation
1,2,3K.L.N College of Engineering
Abstract

Corneal images can be acquired using confocal microscopes, which provide detailed views of the different layers inside a human
cornea. Some corneal problems and diseases can occur in one or more of the main corneal layers: the epithelium, stroma and
endothelium. In this work, we propose a novel framework for automatic detection of the true retinal area in scanning laser
ophthalmoscope (SLO) images. Artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFIS) have been
investigated and tested to improve the recognition accuracy of the main corneal layers and identify abnormality in these layers.
Feature selection is necessary to reduce computational time during training and classification. The results show that selecting
features based on their mutual interaction provides classification power close to that of the full feature set. Among the classifiers
considered, the testing time of ANFIS was the lowest. The performance of ANFIS achieves an accuracy of 100% for some classes
in the processed data sets. This system is able to pre-process (quality enhancement, noise removal), classify corneal images, and
identify abnormalities in the analyzed data sets.
Keywords - ANFIS, Retinal Disease
__________________________________________________________________________________________________

I. INTRODUCTION
Early detection and treatment of retinal eye diseases is critical to avoid preventable vision loss. Conventionally, retinal disease
identification techniques are based on manual observations. Optometrists and ophthalmologists often rely on image operations
such as change of contrast and zooming to interpret these images and diagnose results based on their own experience and domain
knowledge. These diagnostic techniques are time consuming. Automated analysis of retinal images has the potential to reduce the
time clinicians need to spend examining the images, so that more patients can be screened and more consistent diagnoses can be
given in a time-efficient manner. The 2-D retinal scans obtained from imaging instruments [e.g., fundus camera, scanning
laser ophthalmoscope (SLO)] may contain structures other than the retinal area, collectively regarded as artefacts. Exclusion of
artefacts is important as a pre-processing step before automated detection of features of retinal diseases. In a retinal scan, extraneous
objects such as the eyelashes, eyelids, and dust on optical surfaces may appear bright and in focus. Therefore, automatic
segmentation of these artefacts from an imaged retina is not a trivial task. The purpose of performing this study is to develop a
method that can exclude artefacts from retinal scans so as to improve automatic detection of disease features from the retinal scans.
To the best of our knowledge, there is no existing work related to differentiation between the true retinal area and the artefacts for
retinal area detection in an SLO image. The SLO manufactured by Optos [2] produces images of the retina spanning up to
200° (measured from the centre of the eye). This compares to the 45°–60° achievable in a single fundus photograph. Examples of
retinal imaging using a fundus camera and an SLO are shown in Fig. 1. Due to the wide field of view (FOV) of SLO images, structures
such as eyelashes and eyelids are also imaged along with the retina. Removing these structures not only facilitates the
effective analysis of the retinal area, but also enables registration of multi-view images into a montage, resulting in a completely visible
retina for disease diagnosis. In this study, we have constructed a novel framework for the extraction of retinal area in SLO images.
The three main steps for constructing our framework include determination of features that can be used to distinguish between the
retinal area and the artefacts, selection of the features most relevant to the classification, and construction of a classifier that can
separate the retinal area from the rest of the SLO image. For differentiating between the retinal area and the artefacts, we have determined
different image-based features which reflect grayscale, textural, and structural information at multiple resolutions. Then, we have
selected the features among the large feature set, which are relevant to the classification. The feature selection process improves
the classifier performance in terms of computational time. Finally, we have constructed the classifier for discriminating between
the retinal area and the artefacts. Our prototype has achieved average classification accuracy of 92% on the dataset having healthy
as well as diseased retinal images.

II. LITERATURE SURVEY


A. V. Mire and B. L. Dhote, Iris recognition system with accurate eyelash segmentation and improved FAR, FRR using textural
and topological features, Int. J. Comput. Appl., vol. 7, pp. 09758887, 2010. In this method, a novel iris recognition system is
suggested considering both the textural and topological features of an iris image to reduce the false rejection rates. The proposed
histogram analysis is applied on a transformed polar iris image to extract textural features and Euler numbers are used to extract
the topological features. A decision strategy is proposed to verify the authenticity of an individual. The work describes an alternative
means of identifying individuals using images of their iris with a low false acceptance rate and a low false rejection rate. The Euler
vector is used to encode the topological features, while a histogram encodes the textural features. Histograms are matched using
the Du measure, which originates in hyperspectral image analysis, while a Vector Difference Matching algorithm is developed for
matching the Euler vectors.
B. J. Kang and K. R. Park, A robust eyelash detection based on iris focus assessment, Pattern Recog. Lett., vol. 28, pp.
1630–1639, 2007. In this method, the authors propose a new approach to detecting eyelashes. This work shows three advantages
over previous works. First, because eyelash detection was
performed based on focus assessment, its performance was not affected by image blurring. Second, the new focus assessment
method is appropriate for iris images. Third, the detected eyelash regions were not used for iris code generation and therefore iris
recognition accuracy was greatly enhanced. In future work, the authors plan to conduct more field tests with their algorithm and to
enhance iris recognition accuracy further by interpolating the detected eyelash regions rather than excluding them.
H. Davis, S. Russell, E. Barriga, M. Abramoff, and P. Soliz, Vision-based, real-time retinal image quality assessment,
in Proc. 22nd IEEE Int. Symp. Comput.-Based Med. Syst., 2009, pp. 1–6.
This method shows that, using common image processing features, one can find the few that will robustly identify images
falling below a visually determined threshold for acceptable image quality. The method identifies the individual key
image parameters that aid in determining the visually classified quality characteristics of a retinal image. Their goals were to: 1)
determine the quality of an image within milliseconds (real time); 2) perform image quality assessment with high sensitivity and
very high specificity; and 3) identify sources of image quality degradation to assist the photographer in making appropriate
corrections. Future research will evaluate these techniques for finding similar thresholds for recognizable differences based on the
amount of compression for an image.
C.-W. Hsu, C.-C. Chang, and C.-J. Lin, A practical guide to support vector classification, Dept. Comput. Sci., National
Taiwan Univ., Taipei, Taiwan, 2010. The support vector machine (SVM) is a popular classification technique. However, beginners
who are not familiar with SVM often get unsatisfactory results since they miss some easy but significant steps. In this guide, they
propose a simple procedure which usually gives reasonable results.
J. A. M. P. Dias, C. M. Oliveira, and L. A. d. S. Cruz, Retinal image quality assessment using generic image quality indicators,
Inf. Fusion, vol. 13, pp. 118, 2012. The proposed retinal image quality assessment algorithm relies on measures of generic image
features, namely colour, focus, contrast and illumination computed using specific algorithms. The decision to evaluate retinal
image quality using these characteristics was informed by the ARIC study.
These quality indicators are also combined and classified to evaluate the image suitability for diagnostic purposes. The
novel algorithms proposed for assessing retinal image colour, focus, contrast and illumination are quite robust, and all four
classifiers, one for each image feature assessment algorithm, show performances close to optimal, with areas under the receiver
operating characteristic curve close to the maximum.
H. Yu, C. Agurto, S. Barriga, S. C. Nemeth, P. Soliz, and G. Zamora, Automated image quality evaluation of retinal fundus
photographs in diabetic retinopathy screening, in Proc. IEEE Southwest Symp. Image Anal. Interpretation, 2012, pp. 125–128. To
decrease the complexity of image processing tasks and provide a useful primitive image representation, pixels are grouped into
regions based on regional size and compactness, called superpixels. The system then computes image-based features reflecting
textural and structural information and classifies between the retinal area and the artefacts.
J. Paulus, J. Meier, R. Bock, J. Hornegger, and G. Michelson, Automated quality assessment of retinal fundus photos,
Int. J. Comput. Assisted Radiol. Surg., vol. 5, pp. 557–564, 2010. This work provides automated, objective and fast measurement
of the image quality of single retinal fundus photos to allow a stable and reliable medical evaluation.

III. EXISTING METHODOLOGY


The block diagram of the retina detector framework is shown in Fig. 2. The framework has been divided into three stages, namely
training stage, testing and evaluation stage, and deployment stage. The training stage is concerned with building a classification
model based on training images and the annotations reflecting the boundary around the retinal area.
In the testing and evaluation stages, the automatic annotations are performed on the test set of images and the classifier
performance is evaluated against the manual annotations for the determination of accuracy. Finally, the deployment stage performs
the automatic extraction of the retinal area. The subtasks for the training, testing, and deployment stages are briefly described as
follows:

A. Image Data Integration


It involves the integration of the image data with their manual annotations around the true retinal area.
B. Image Pre-processing
Images are then preprocessed in order to bring the intensity values of each image into a particular range.
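As a rough illustration of this step, the sketch below rescales an image's intensities into a fixed range with scikit-image; the percentile-based stretch and the [0, 1] output range are assumptions for illustration, not the paper's exact settings.

    import numpy as np
    from skimage import exposure, io

    def normalize_intensity(path):
        # Load as grayscale; skimage returns float values in [0, 1]
        img = io.imread(path, as_gray=True)
        # Clip extreme outliers and stretch the remaining range to [0, 1]
        p2, p98 = np.percentile(img, (2, 98))
        return exposure.rescale_intensity(img, in_range=(p2, p98), out_range=(0.0, 1.0))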
C. Generation of Super pixels
The training images after preprocessing are represented by small regions called super pixels. The generation of the feature vector
for each super pixel makes the process computationally efficient as compared to feature vector generation for each pixel.
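A minimal sketch of this step using the SLIC algorithm [17] as implemented in scikit-image is given below; the segment count and compactness values are illustrative, not the settings used in the paper.

    from skimage.segmentation import slic

    def generate_superpixels(image, n_segments=500, compactness=10):
        # Each pixel receives an integer superpixel label; features are then
        # computed once per label instead of once per pixel.
        # Assumes an RGB image; pass channel_axis=None for grayscale input.
        return slic(image, n_segments=n_segments, compactness=compactness, start_label=0)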

Fig. 1: Retinal imaging using a fundus camera and an SLO
D. Feature Generation
We generate image-based features which are used to distinguish between the retinal area and the artefacts. The image-based
features reflect textural, grayscale, or regional information and they were calculated for each super pixel of the image present in
the training set. In the testing stage, only the features selected by the feature selection process are generated.
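The sketch below shows one way such per-superpixel features could be computed (mean grey level, local contrast, a GLCM texture measure, and regional size); the feature choices are illustrative and do not reproduce the paper's exact feature set. It assumes a uint8 grayscale image and the label map from the superpixel step.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def superpixel_features(image, segments):
        feats = []
        for label in np.unique(segments):
            mask = segments == label
            pixels = image[mask]
            ys, xs = np.where(mask)
            patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            feats.append([
                pixels.mean(),                            # grayscale level
                pixels.std(),                             # local contrast
                graycoprops(glcm, "homogeneity")[0, 0],   # texture
                mask.sum(),                               # regional size
            ])
        return np.asarray(feats)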
E. Feature Selection
Due to the large number of features, the feature array needs to be reduced before classifier construction. This involves selecting
the features most significant for classification.
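A hedged sketch of such a selection step is shown below, using mutual information scores from scikit-learn as a stand-in for the mutual-interaction criterion mentioned in the abstract; the number of retained features is an assumption.

    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    def select_features(X_train, y_train, k=10):
        # Score each feature by its mutual information with the class labels
        # and keep the k highest-scoring ones.
        selector = SelectKBest(score_func=mutual_info_classif, k=k)
        X_selected = selector.fit_transform(X_train, y_train)
        return selector, X_selected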
F. Classifier Construction
In conjunction with manual annotations, the selected features are then used to construct the binary classifier. The result of such
a classifier is a label for each superpixel, representing either the true retinal area or the artefacts.
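As a stand-in sketch of this step (the paper's actual classifier is ANFIS, discussed in Section IV), the snippet below trains a small scikit-learn neural network on the selected superpixel features; the network size and iteration count are assumptions.

    from sklearn.neural_network import MLPClassifier

    def train_superpixel_classifier(X_selected, y_train):
        # y_train: one label per superpixel, 1 = true retinal area, 0 = artefact
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X_selected, y_train)
        return clf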
G. Image Post processing
Image post processing is performed by morphological filtering so as to determine the retinal area boundary using super pixels
classified by the classification model. The elements of our detection framework are discussed as follows.
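A sketch of how such post-processing could look, assuming the label map and per-superpixel predictions from the previous steps; the structuring-element radius and minimum object size are illustrative values only.

    import numpy as np
    from skimage.morphology import binary_closing, disk, remove_small_objects

    def retina_mask(segments, superpixel_pred):
        # Map superpixel predictions (ordered as np.unique(segments)) back to pixels
        retina_labels = np.unique(segments)[superpixel_pred == 1]
        mask = np.isin(segments, retina_labels)
        mask = binary_closing(mask, disk(5))              # smooth the boundary
        mask = remove_small_objects(mask, min_size=500)   # drop spurious blobs
        return mask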

IV. PROPOSED METHODOLOGY


We generate image-based features that are used to distinguish between the retinal area and the artefacts. Because of the large number
of features, the most significant ones must be selected before classifier construction. The classifier then labels each super-pixel
as either true retinal area or artefact. The ANFIS benefits from its adaptive neuro-fuzzy inference ability: it is a kind of neural
network that models the dynamic character of the nonlinearity.

Fig. 2: Block diagram of the retina detector framework
A. Adaptive Neuro-Fuzzy Inference System
An adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural
network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s. Since it
integrates both neural network and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework.
Its inference system corresponds to a set of fuzzy IF–THEN rules that have the learning capability to approximate nonlinear
functions. Hence, ANFIS is considered to be a universal estimator.
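For concreteness, the sketch below shows the forward pass of a first-order Takagi–Sugeno inference system, the model underlying ANFIS; all membership and consequent parameters are made-up values (in ANFIS they would be learned), so this is an illustration rather than the paper's implementation.

    import numpy as np

    def gaussian_mf(x, c, sigma):
        # Gaussian membership degree of input x in a fuzzy set (centre c, width sigma)
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def ts_inference(x, centers, sigmas, consequents):
        # Layers 1-2: rule firing strengths (one Gaussian set per rule and input)
        w = np.prod(gaussian_mf(x, centers, sigmas), axis=1)
        # Layer 3: normalise firing strengths
        w_bar = w / w.sum()
        # Layers 4-5: linear consequents f_i = p_i . [x, 1], then weighted sum
        f = consequents @ np.append(x, 1.0)
        return float(w_bar @ f)

    # Two inputs, three rules (all parameter values are illustrative)
    x = np.array([0.4, 0.7])
    centers = np.array([[0.2, 0.5], [0.5, 0.5], [0.8, 0.9]])
    sigmas = np.full((3, 2), 0.3)
    consequents = np.array([[1.0, 0.0, 0.1], [0.0, 1.0, 0.2], [0.5, 0.5, 0.0]])
    print(ts_inference(x, centers, sigmas, consequents))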
B. ANFIS Classifier
Both ANFIS (Adaptive Neuro-Fuzzy Inference System) and WT (Wavelet Transform) are widely used in the field of signal
processing.

Fig. 3

Fig. 4: ANFIS Classifier
ANFIS has the adaptive neuro-fuzzy inference ability, while WT has the attractive characteristics of time-frequency localization and
multi-resolution analysis. The two were compared when applied to filtering. In the example, the signal filtered using WT lost some
information in the high-frequency section, but its low-frequency section was smooth; the signal estimated using ANFIS kept most
of the high-frequency information, but its low-frequency section was rough. An improved method that combines the two achieves
a better de-noising effect. The improved method is useful for filtering rough measured signals whose noise has no clear frequency
character, especially heavily noise-overlapped signals.
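For illustration, the sketch below shows the wavelet-thresholding side of such a comparison using PyWavelets; the wavelet, decomposition level, and universal threshold are assumptions rather than the settings used in the paper.

    import numpy as np
    import pywt

    def wt_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Soft-threshold the detail coefficients: high-frequency content below the
        # threshold is discarded, which smooths the reconstructed signal.
        thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)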

V. PRE-PROCESSING AND RESULT ANALYSIS


A. Edge Detection Using Wiener Filter

Fig. 5
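A possible realization of this step, assuming a grayscale retinal image: Wiener filtering with SciPy to suppress noise, followed by Canny edge detection from scikit-image. The 5x5 window and the choice of Canny edges are assumptions for illustration.

    from scipy.signal import wiener
    from skimage import feature, img_as_float

    def wiener_edges(gray_image):
        smoothed = wiener(img_as_float(gray_image), mysize=(5, 5))  # adaptive noise suppression
        return feature.canny(smoothed, sigma=1.5)                   # boolean edge map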
B. Area Extraction

Fig. 6
C. Area Matching

Fig. 7

D. Comparison of Area Matching

Fig. 8
E. Conversion of Image

Fig. 9
F. Pre-Processing Result for ANFIS Classifier

Fig. 10

G. Fuzzy Process

Fig. 11
H. Misclassification Rate of ANFIS
MISCLASSIFICATION_RATE_FC = 0.1234
Fig. 12: Comparison of misclassification rate
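For reference, a misclassification rate such as the 0.1234 reported above is simply the fraction of test samples whose predicted class differs from the manual annotation, as in the sketch below (the label arrays are assumed inputs).

    import numpy as np

    def misclassification_rate(y_true, y_pred):
        # Fraction of samples where the prediction disagrees with the annotation
        return float(np.mean(np.asarray(y_pred) != np.asarray(y_true)))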

VI. CONCLUSION AND FUTURE WORK


Thus, we have proposed a unique approach to automatically extract the true retinal area from an image based on image processing
and machine learning techniques. We have grouped pixels into different regions based on regional size and compactness. The
detection of abnormal images with ANFIS was compared against a previous classifier in terms of classification accuracy; ANFIS is
a combination of ANN and FIS classifiers, and its testing time was the lowest among the classifiers compared. Finally, we compared
the performance results of the ANN and ANFIS classifiers, and the segregation of the abnormal region in the retinal image was
detected more exactly than with the conventional approach. Our future work will focus on also considering the gradient in the
spectral dimension in the model construction process in order to develop a real 3-D medical model.

REFERENCES
[1] M. S. Haleem, L. Han, J. van Hemert, and B. Li, Automatic extraction of retinal features from colour retinal images for
glaucoma diagnosis: A review, Comput. Med. Image. Graph., vol. 37, pp. 581–596, 2013.
[2] Optos. (2014). [Online]. Available: www.optos.com
[3] R. C. Gonzalez and R. E. Woods, Eds., Digital Image Processing, 3rd ed. Englewood Cliffs, NJ, USA: Prentice-Hall, 2006.
[4] M. J. Aligholizadeh, S. Javadi, R. S. Nadooshan, and K. Kangarloo, Eyelid and eyelash segmentation based on wavelet
transform for iris recognition, in Proc. 4th Int. Congr. Image Signal Process., 2011, pp. 1231–1235.
[5] D. Zhang, D. Monro, and S. Rakshit, Eyelash removal method for human iris recognition, in Proc. IEEE Int. Conf. Image
Process., 2006, pp. 285–288.
[6] A. V. Mire and B. L. Dhote, Iris recognition system with accurate eyelash segmentation and improved FAR, FRR using
textural and topological features, Int. J. Comput. Appl., vol. 7, pp. 0975–8887, 2010.

[7] Y.-H. Li, M. Savvides, and T. Chen, Investigating useful and distinguishing features around the eyelash region, in Proc. 37th
IEEE Workshop Appl. Imag. Pattern Recog., 2008, pp. 1–6.
[8] B. J. Kang and K. R. Park, A robust eyelash detection based on iris focus assessment, Pattern Recog. Lett., vol. 28, pp.
1630–1639, 2007.
[9] T. H. Min and R. H. Park, Eyelid and eyelash detection method in the normalized iris image using the parabolic Hough model
and Otsu's thresholding method, Pattern Recog. Lett., vol. 30, pp. 1138–1143, 2009.
[10] Iris database. (2005). [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase.html
[11] H. Davis, S. Russell, E. Barriga, M. Abramoff, and P. Soliz, Vision-based, real-time retinal image quality assessment, in Proc.
22nd IEEE Int. Symp. Comput.-Based Med. Syst., 2009, pp. 1–6.
[12] H. Yu, C. Agurto, S. Barriga, S. C. Nemeth, P. Soliz, and G. Zamora, Automated image quality evaluation of retinal fundus
photographs in diabetic retinopathy screening, in Proc. IEEE Southwest Symp. Image Anal. Interpretation, 2012, pp. 125–128.
[13] J. A. M. P. Dias, C. M. Oliveira, and L. A. d. S. Cruz, Retinal image quality assessment using generic image quality
indicators, Inf. Fusion, vol. 13, pp. 118, 2012.
[14] M. Barker and W. Rayens, Partial least squares for discrimination, J. Chemom., vol. 17, pp. 166–173, 2003.
[15] J. Paulus, J. Meier, R. Bock, J. Hornegger, and G. Michelson, Automated quality assessment of retinal fundus photos, Int. J.
Comput. Assisted Radiol. Surg., vol. 5, pp. 557–564, 2010.
[16] R. Pires, H. Jelinek, J. Wainer, and A. Rocha, Retinal image quality analysis for automatic diabetic retinopathy detection,
in Proc. 25th SIBGRAPI Conf. Graph., Patterns Images, 2012, pp. 229–236.
[17] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, SLIC superpixels compared to state-of-the-art superpixel
methods, IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274–2282, Nov. 2012.
[18] A. Moore, S. Prince, J. Warrell, U. Mohammed, and G. Jones, Superpixel lattices, in Proc. IEEE Conf. Comput. Vis. Pattern
Recog., 2008, pp. 1–8.
[19] Superpixels and supervoxels in an energy optimization framework, in Proc. 11th Eur. Conf. Comput. Vis., 2010, pp. 211–224.
