



Automated Diagnosis of Glaucoma Using Texture

and Higher Order Spectra Features
U. Rajendra Acharya, Sumeet Dua, Xian Du, Vinitha Sree S, and Chua Kuang Chua

Abstract: Glaucoma is the second leading cause of blindness

worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision
loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal
optic nerve fiber layer can be assessed using optical coherence
tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel
method for glaucoma detection using a combination of texture and
higher order spectra (HOS) features from digital fundus images.
Support vector machine, sequential minimal optimization, naive
Bayesian, and random-forest classifiers are used to perform supervised classification. Our results demonstrate that the texture and
HOS features after z-score normalization and feature selection,
and when combined with a random-forest classifier, perform better than the other classifiers and correctly identify the glaucoma
images with an accuracy of more than 91%. The impact of feature
ranking and normalization is also studied to improve results. Our
proposed novel features are clinically significant and can be used
to detect glaucoma accurately.
Index Terms: Classifier, glaucoma, higher order spectra (HOS).

I. INTRODUCTION

Glaucoma is caused by increased intraocular pressure (IOP) due to the malfunction of the drainage structure of the eyes [1]. It is estimated that more than four million Americans have glaucoma, and half of them are unaware that they have it. Approximately 120 000 are blind from glaucoma, thus accounting for 9%-12% of all cases of blindness in the U.S. About 2% of the population between 40 and 50 years old and 8% over 70 years old have elevated IOP [2], which increases their
risk of significant vision loss and even blindness.
Manuscript received July 9, 2010; revised January 6, 2011; accepted February 13, 2011. Date of publication February 24, 2011; date of current version May 4, 2011.
U. R. Acharya and C. K. Chua are with the Department of Electronics and Communication Engineering, Ngee Ann Polytechnic, Singapore 599489.
S. Dua and X. Du are with the Computer Science Program, Louisiana Tech University, Ruston, LA 71272 USA.
V. Sree S is with the School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore 639798.
Color versions of one or more of the figures in this paper are available online.
Digital Object Identifier 10.1109/TITB.2011.2119322

Many studies have been conducted to develop computer-based decision support systems for the early detection of glaucoma. An artificial neural network (ANN) model using multifocal visual evoked potential (M-VEP) data from the ObjectiVision perimetry system was studied in [3]. Their results showed that a neural network model with M-VEP inputs was able to detect glaucoma with a high sensitivity of 95%.
Qualitative assessment of the ability of optic nerve head
(ONH) stereo photographs (ONHSPs), confocal scanning laser
ophthalmoscopy (CSLO), scanning laser polarimetry (SLP), and
optical coherence tomography (OCT) to distinguish normal eyes
from those with early-to-moderate glaucomatous visual field
defects was performed in [4]. Receiver operating characteristic (ROC) curves were generated from discriminant analysis of CSLO, SLP, and OCT measurements and from ONHSP scores to test the performance. It was found that the quantitative methods (CSLO, SLP, and OCT) were not better than the qualitative assessment of ONHSPs conducted by observers experienced at distinguishing normal eyes from those with early-to-moderate glaucoma. They proposed that combining the imaging methods might significantly improve this capability.
The optic disc topography parameters of the Heidelberg retina tomograph, in combination with neural networks, were used to differentiate between glaucomatous and nonglaucomatous eyes [5].
ROC curves were generated for the classification of eyes by three
neural network techniques: linear and Gaussian support vector
machines (Linear SVM and Gaussian SVM, respectively), a
multilayer perceptron (MLP), and linear discriminant function.
Bowd et al. showed that these neural network analyses helped
increase the diagnostic accuracy of glaucoma tests.
The performance of an ANN trained to recognize glaucomatous visual field defects was studied and its diagnostic accuracy
was compared with that of other algorithms proposed for the
detection of visual field loss [6]. The ANN recorded a sensitivity of
93% at a specificity level of 94% and an area under the ROC
(AROC) curve of 0.984. The results of other compared methods
are as follows. The glaucoma Hemifield test had a sensitivity
of 92% at 91% specificity. The pattern standard deviation had a
sensitivity of 89% at 93% specificity. The cluster algorithm had
a sensitivity of 95% at 82% specificity.
An adaptive neuro-fuzzy inference system (ANFIS) was designed to differentiate between normal and glaucomatous eyes from the quantitative assessment of summary data reports obtained using Stratus OCT in a Taiwan Chinese population [7]. Retinal nerve fiber layer thickness and ONH topography obtained using Stratus OCT were used as inputs to the classifiers.
Two types of classifiers were studied: 1) ANFIS and 2) a backpropagation gradient descent method combined with the least
squares method. The results showed that ANFIS was better at discriminating glaucomatous and normal eyes when using if-then rules and membership functions, which enhance the readability of the output.

1089-7771/$26.00 © 2011 IEEE


Fig. 1. Proposed glaucoma detection system.

An artificial intelligence system involving ANN and the analysis of the nerve fibers of the retina from the study with SLP
(NFAII; GDx), perimetry, and clinical data was developed [8].
The groups were defined as follows. Normal eyes were considered stage 0 and ocular hypertension as stage 1. Early glaucoma was considered stage 2, and established glaucoma as stage 3. Advanced glaucoma was considered stage 4, and terminal glaucoma as stage 5. The MLP using the Levenberg-Marquardt technique was used. The 100% specificity and sensitivity obtained indicate that 100% correct classification of each eye into the corresponding stage of glaucoma was achieved.
An algorithm to detect glaucoma using morphological image processing was developed using fundus images. Nayak et al. [1] used the cup-to-disc (c/d) ratio, the ratio of the distance between the optic disc center and the ONH to the diameter of the optic disc, and the ratio of the blood vessel area on the inferior-superior side to that on the nasal-temporal side as features in a neural network. The developed neural network system
identified glaucoma automatically with a sensitivity and specificity of 100% and 80%, respectively. This result implies that
the system can detect all subjects with glaucoma accurately, but
can detect only 80% of the normal subjects as normal. A new
framework for the detection of glaucoma based on the changes
in the ONH using the method of proper orthogonal decomposition was proposed in [9]. Any glaucomatous changes in the
ONH present during a follow-up examination were estimated
by comparing the follow-up ONH topography with its baseline
topograph subspace representation that was constructed earlier.
The changes in the ONH were quantified using image correspondence measures: L1-norm and L2-norm, correlation, and image Euclidean distance (IMED). By using the L2-norm and IMED in the new framework, good areas under the ROC curve of 0.94 at a 10° field of imaging and 0.91 at a 15° field of imaging
were obtained.
Linear discriminant analysis (LDA) and an ANN were used to
improve the differentiation between glaucomatous and normal
eyes in a Taiwan Chinese population based on the retinal nerve
fiber layer thickness measurement data from SLP with variable corneal compensation [10].
The results showed that the individual parameter with the
best AROC curve for differentiating between normal and glaucomatous eyes was the nerve fiber indicator (NFI, 0.932). The AROCs for the LDA and ANN methods were 0.950 and 0.970, respectively. Hence, the NFI, LDA, and ANN methods demonstrated comparable diagnostic power in glaucoma detection. In contrast
to the method in [9], our proposed technique does not use any pixel-based comparison between a baseline image and a follow-up image to detect classes. Hence, minor topographical changes within one image will not affect the diagnosis.
Fig. 2. Typical fundus images. (a) Normal. (b) Glaucoma.

The scheme proposed in this study is shown in Fig. 1. Higher order spectra (HOS)-based and texture-based features are commonly used in many medical image-processing areas. However, such studies have not yet been done on glaucoma images. Therefore, these features were extracted in our study. After preprocessing the acquired fundus images, HOS-based and texture-based features are extracted from the preprocessed images. Subsequently, these features are fed to SVM, sequential minimal
optimization (SMO), random forest, and naive Bayesian (NB)
classifiers for classification. Feature ranking is also performed
to highlight and employ the discriminatory ability of the features
in the classification process. The layout of the paper is as follows. Section II contains an explanation of the data acquisition
process. In Section III, we address the preprocessing of the raw
normal and glaucoma images. Section IV contains an explanation of the extraction of the features using the HOS and texture
methods. A brief description of the various classifiers used is
discussed in Section V, and Section VI of the paper presents the
results of the proposed method. Finally, the paper concludes in
Section VII.
II. DATA ACQUISITION

The digital retinal images were collected from the Kasturba Medical College, Manipal, India. We used 60 fundus images: 30 normal and 30 open-angle glaucoma images from 20- to 70-year-old subjects. The doctors in the Ophthalmology Department of the hospital certified the image quality and their usability. The ethics committee, consisting of senior doctors, approved the images for this research purpose. All the images were taken at a resolution of 560 × 720 pixels and stored in JPEG format. A fundus camera, along with a microscope and light source, was used to acquire the retinal images to diagnose diabetic retinopathy, glaucoma, etc. Fig. 2(a) and (b) shows typical normal and glaucoma fundus images, respectively.
III. IMAGE PREPROCESSING

The preprocessing step consists of image contrast improvement using histogram equalization; the radon transform is then applied for HOS feature extraction. Histogram equalization increases the dynamic range of the histogram of an image [11] and assigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities. As a result, the contrast of the image is increased. The radon transform is used to detect features within an image. It transforms lines through an image to points in the radon domain, where each point in this domain corresponds to a straight line in the image [11], [12]. This radon transformation is applied before extracting the HOS parameters from the images.

Fig. 3. Nonredundant region (Ω) of computation of the bispectrum for real signals.
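The two preprocessing steps just described can be sketched in a few lines. The bin count, the toy patch, and the restriction of the radon projections to 0° and 90° (the two angles at which the line integrals reduce to plain axis sums; general angles require rotation and interpolation) are simplifying assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Histogram equalization: map each intensity through the image's own
    normalized CDF so the output histogram is approximately uniform."""
    hist, _ = np.histogram(img.flatten(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # lookup table
    return lut[img]

def radon_axis_projections(img):
    """Radon projections at 0 and 90 degrees only: for these two angles the
    line integrals are simply column sums and row sums."""
    return img.sum(axis=0), img.sum(axis=1)

# toy 4x4 patch with a narrow intensity range
patch = np.array([[100, 101, 102, 103],
                  [101, 102, 103, 104],
                  [102, 103, 104, 105],
                  [103, 104, 105, 106]], dtype=np.uint8)
eq = equalize_hist(patch)               # contrast is stretched toward [0, 255]
p0, p90 = radon_axis_projections(eq.astype(float))
```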
IV. FEATURE EXTRACTION

In this study, we extracted two types of features: 1) HOS parameters and 2) texture descriptors. Brief explanations of these features are given in the following.
A. Higher Order Spectra
HOS elicits both amplitude and phase information of a given
signal. It offers good noise immunity and yields good results,
even for weak and noisy signals. HOS consist of moment and
cumulant spectra and can be used for both deterministic signals
and random processes [13]. We derived the features from the
third-order statistics of the signal, namely, the bispectrum. The
bispectrum is given by

B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]   (1)

where X(f) is the Fourier transform of the signal x(nT), and E[·] stands for the expectation operation.
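As a concrete illustration, the bispectrum in (1) can be estimated directly from FFTs of signal segments, and the normalized bispectral entropies of the Appendix can then be computed from its magnitudes. The segment length, the synthetic phase-coupled test signal, and the function names below are assumptions of this sketch, not the paper's implementation (which applies the bispectrum to radon-transformed fundus images).

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct FFT-based estimate of B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)],
    averaging over non-overlapping segments of a 1-D signal."""
    nseg = len(x) // nfft
    f = np.arange(nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in range(nseg):
        X = np.fft.fft(x[s * nfft:(s + 1) * nfft])
        # X(f1) X(f2) times the conjugate at the (aliased) sum frequency
        B += X[:, None] * X[None, :] * np.conj(X[(f[:, None] + f[None, :]) % nfft])
    return B / nseg

def bispectral_entropies(B):
    """Normalize |B|, |B|^2, |B|^3 into probability distributions and take
    their Shannon entropies (cf. (9)-(11) in the Appendix)."""
    mag = np.abs(B).ravel()
    ents = []
    for power in (1, 2, 3):
        p = mag ** power / (mag ** power).sum()
        p = p[p > 0]                        # avoid log(0)
        ents.append(float(-(p * np.log(p)).sum()))
    return ents

# three quadratically phase-coupled cosines produce a bispectral peak at (8, 12)
n = np.arange(1024)
x = (np.cos(2 * np.pi * 8 * n / 64) + 0.5 * np.cos(2 * np.pi * 12 * n / 64)
     + 0.25 * np.cos(2 * np.pi * 20 * n / 64))
B = bispectrum(x)
ent1, ent2, ent3 = bispectral_entropies(B)
```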
Features are calculated by integrating the bispectrum along
the dashed line with slope = a. Frequencies are normalized
by the Nyquist frequency (see Fig. 3). These bispectral invariants [14] contain information about the shape of the waveform
within the window and are invariant to shift and amplification
and robust to time-scale changes. In this study, we used these bispectral invariant features for every 20°. Bispectral entropies
have been derived from bispectrum plots to find the rhythmic
nature of the heart rate variability and electroencephalogram
signals [15], [16]. The equations used to determine the various HOS features are given in the Appendix [see (7)-(11)]. The normalization in the equations ensures that entropy is calculated for a parameter that lies between 0 and 1 (as required of a probability) and, hence, the entropies (Ent1, Ent2, and Ent3) computed are also between 0 and 1.

B. Texture Features

Texture descriptors provide measures of properties, such as smoothness, coarseness, and regularity, which indicate a mutual relationship among intensity values of neighboring pixels repeated over an area larger than the size of the relationship. Such properties can be used as features for pattern recognition.
Co-Occurrence Matrix: A gray-level co-occurrence matrix
(GLCM) depicts how often different combinations of pixel
brightness values (gray levels) occur in an image. It is a second-order measure because it measures the relationship between neighboring pixels. For an image of size m × n, we performed a second-order statistical textural analysis by constructing the GLCM [17] as

Cd(i, j) = |{((p, q), (p + x, q + y)) : I(p, q) = i, I(p + x, q + y) = j}|   (2)

where (p, q), (p + x, q + y) ∈ M × N, d = (x, y), and | · | denotes the cardinality of a set. For a pixel in the image having gray level i, the probability that the pixel at a distance (x, y) away has gray level j is defined as

Pd(i, j) = Cd(i, j) / Σ_{i,j} Cd(i, j).   (3)
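Equations (2) and (3) translate directly into code. The toy image, the gray-level count, and the small feature helper below are illustrative assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix C_d(i, j) for displacement d = (dx, dy):
    counts pixel pairs with I(p, q) = i and I(p + dx, q + dy) = j, as in (2)."""
    C = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for p in range(rows):
        for q in range(cols):
            p2, q2 = p + dx, q + dy
            if 0 <= p2 < rows and 0 <= q2 < cols:
                C[img[p, q], img[p2, q2]] += 1
    return C

def glcm_features(C):
    """Energy, contrast, homogeneity, and entropy from the normalized GLCM
    P_d(i, j) = C_d(i, j) / sum C_d, cf. (3) and Appendix (12)-(15)."""
    P = C / C.sum()
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()
    contrast = ((i - j) ** 2 * P).sum()
    homogeneity = (P / (1 + (i - j) ** 2)).sum()
    nz = P[P > 0]                       # skip empty cells to avoid log(0)
    entropy = -(nz * np.log(nz)).sum()
    return energy, contrast, homogeneity, entropy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
C = glcm(img, dx=0, dy=1, levels=4)     # horizontal neighbor at distance 1
e, co, h, en = glcm_features(C)
```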


Using (2) and (3), the following features were calculated: energy, contrast, homogeneity, entropy, and moments [see formulae given in Appendix: (12)-(16)]. The difference vector, which represents the gray-level difference statistics, can be obtained from the co-occurrence matrix using the following equation [18]:

Pd(k) = Σ_{|i-j|=k} Pd(i, j)   (4)

where k = 0, 1, . . ., n - 1, and n is the number of gray-scale levels [12]. Each entry in the difference vector is in fact the sum of the probabilities that the gray-level difference is k between two points separated by d. We derived the following properties using the difference vector [19]: angular second moment, contrast, entropy, and mean [see Appendix: (17)-(20)].
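The difference vector in (4) and its derived features (17)-(20) can be sketched as follows; the small normalized co-occurrence matrix is made up for illustration.

```python
import numpy as np

def difference_vector(P):
    """P_d(k): sum of co-occurrence probabilities P_d(i, j) over all pairs
    with |i - j| = k (gray-level difference statistics, (4))."""
    n = P.shape[0]
    i, j = np.indices(P.shape)
    return np.array([P[np.abs(i - j) == k].sum() for k in range(n)])

# a small normalized co-occurrence matrix (illustrative values, sums to 1)
P = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.1],
              [0.0, 0.1, 0.1]])
Pd = difference_vector(P)
k = np.arange(len(Pd))
asm = (Pd ** 2).sum()            # angular second moment (17)
contrast = (k ** 2 * Pd).sum()   # contrast (18)
mean = (k * Pd).sum()            # mean (20)
```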
Run Length Matrix: In the run-length matrix P(i, j), each cell of the matrix holds the number of runs in which gray level i successively appears j times in the direction θ; the variable j is termed the run length. The resultant matrix characterizes the gray-level runs by the gray tone, length, and direction of the run. As a common practice, run-length matrices for θ equal to 0°, 45°, 90°, and 135° were calculated to determine the following features [20]: short-run emphasis, long-run emphasis, gray-level nonuniformity, run-length nonuniformity, and run percentage [see Appendix: (21)-(25)].
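A run-length matrix for the 0° direction, with the short- and long-run emphasis features (21) and (22), can be sketched as follows; the toy image and function names are assumptions of this sketch.

```python
import numpy as np

def run_length_matrix(img, levels):
    """Run-length matrix P(i, j) for the 0-degree direction: cell (i, j)
    counts horizontal runs of gray level i with length j (j is 1-based,
    stored at column j - 1)."""
    rows, cols = img.shape
    P = np.zeros((levels, cols), dtype=np.int64)
    for r in range(rows):
        run_val, run_len = img[r, 0], 1
        for c in range(1, cols):
            if img[r, c] == run_val:
                run_len += 1
            else:
                P[run_val, run_len - 1] += 1
                run_val, run_len = img[r, c], 1
        P[run_val, run_len - 1] += 1       # close the last run of the row
    return P

def sre_lre(P):
    """Short-run emphasis (21) and long-run emphasis (22); j is the run length."""
    j = np.arange(1, P.shape[1] + 1, dtype=float)
    total = P.sum()
    return (P / j ** 2).sum() / total, (P * j ** 2).sum() / total

img = np.array([[0, 0, 1, 1],
                [2, 2, 2, 2],
                [0, 1, 0, 1]])
P = run_length_matrix(img, levels=3)
sre, lre = sre_lre(P)
```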
Fig. 4. Data distribution of various HOS features for a variety of images.

Fig. 5. Data distribution of various texture features for a variety of images.

Our feature vector comprises heterogeneous features with densely distributed values. Figs. 4 and 5 demonstrate the distribution of feature values for the two classes of features (HOS and texture). HOS features are more widely distributed and have
very limited correlation among them. The texture features are relatively correlated, but do exhibit discriminatory character for each of the images.

V. CLASSIFIERS

The classifiers were chosen based upon their effectiveness in capturing the discriminative properties of these features, the impact of feature ranking, and the efficiency and efficacy of the classification results. Four classifiers
were employed for supervised learning and testing: SVM [21],
SMO, random forest, and NB. Hardware consistency was maintained during the evaluation of these classifiers. The algorithmic control parameters for different classifiers are provided in
Table I, and a succinct discussion on these classifiers is presented
in the following.
SVMs perform classification by constructing an N-dimensional hyperplane that optimally separates the data into
two categories. SVMs view the classification problem as a
quadratic optimization problem. We chose SVM because of
its superior generalization in high-dimensional data and fast

convergence in training [22]. In general, SVMs plot the feature vector for each sample in the training set and result in a
high-dimensional feature space. Each feature vector is labeled
with its class identifier referred to as training IDs. A hyperplane
is drawn between the training IDs that maximizes the distance
between the different classes. The shape of the hyperplane is determined by the kernel function; many experiments select the radial-basis function kernel as optimal, and it was employed in our study. SMO speeds up the training of the SVM [23], and we implemented the SMO algorithm by Platt [24] in our study.
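The maximum-margin idea behind the SVM can be illustrated with a minimal linear SVM trained by subgradient descent on the hinge loss (the Pegasos scheme). This is only a sketch on synthetic data: the paper itself uses an RBF-kernel SVM (LibSVM) trained via SMO, and the bias-free formulation below is a simplifying assumption that works for data centered on the origin.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Minimal linear SVM (no bias term) trained with the Pegasos
    subgradient method on the hinge loss; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * (X[i] @ w) < 1:          # margin violated: push
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

# two linearly separable blobs (synthetic data, centered on the origin)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w = train_linear_svm(X, y)
pred = np.sign(X @ w)
```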
The random-forest classifier discriminates classes by using a
collection of independent decision trees, instead of one tree,
where each tree is grown using a subset of the possible attributes. In random-forest classification, each tree votes for
one of the classes and, correspondingly, the most popular class
is assigned [25], [26]. NB is a statistical classifier based on
Bayes' rule. The NB classifier can predict class membership probabilities, such as the probability that a given tuple belongs to a particular class, which is then employed for classification.
Since each of the features in the feature vector does not contribute equally to the classification process, we employ feature
ranking using the methods of chi-square, gain ratio, and information gain (IG). Consequently, each of the features is weighted by its corresponding rank (normalized between 0 and 1). These
weighted features are then used for training and testing of instances. In an additional study, we investigated the impact of
normalization on the nonranked features. Lack of adequate normalization can skew the classification results, causing significant false alarms and dismissals. z-Score normalization (standard norm) and min-max normalization were used to normalize the
features. In z-score normalization, the values of feature A are normalized based on the mean and standard deviation of A. The z-score normalization maps a value v of A to v' using the formula

v' = (v - μA) / σA   (5)

where μA is the mean and σA is the standard deviation of the attribute. We also performed min-max normalization to represent the data in a new range using the formula

v' = ((v - minA)(new_maxA - new_minA)) / (maxA - minA) + new_minA   (6)

where new_minA was chosen to be 0.0 and new_maxA was chosen to be 1.0. Normalization was first employed for each feature independently and then on the aggregate of features, and classification accuracy was then investigated on the derived values of the features.
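The two normalizations, (5) and (6), can be sketched per feature as follows; the toy data are illustrative.

```python
import numpy as np

def z_score(col):
    """z-score normalization (5): v' = (v - mean) / std, per feature."""
    return (col - col.mean()) / col.std()

def min_max(col, new_min=0.0, new_max=1.0):
    """min-max normalization (6): rescale to [new_min, new_max]."""
    return (col - col.min()) * (new_max - new_min) / (col.max() - col.min()) + new_min

x = np.array([2.0, 4.0, 6.0, 8.0])
z = z_score(x)     # zero mean, unit standard deviation
m = min_max(x)     # spans exactly [0, 1]
```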
VI. RESULTS

Table II shows the HOS features and the p-values. In this study, we extracted 20 bispectrum invariants for each radon-transformed eye image. Among them, we used the P(6/20) feature, as it was clinically significant (p < 0.005) in this study.
Other entropy and bispectrum magnitudes at different angles
were also chosen as the input vector for the classifier (p <
0.005). A two-sample t-test was used to estimate whether the mean value of each HOS feature was significantly different between the two classes. We assumed a null hypothesis that there is no difference between the two means. A p-value is a measure of the probability that a difference as large as the one observed between the two means arose by chance. In general, the null hypothesis is rejected if the p-value is less than 0.05 or 0.01, corresponding to a 5% or 1% significance level, respectively. It can be seen from the table that all the features show low p-values, which indicates that there is a statistically significant difference between the means of the two classes. In addition, Table III
shows the features extracted from the texture of the fundus
image. These features also show significantly low p-values. All the texture features show higher variation for the glaucoma images as compared to the normal fundus images. However, most of the HOS parameters show lower values for the glaucoma images than for the normal images.
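The per-feature significance test described above can be sketched with a pooled two-sample t statistic; the feature values below are hypothetical numbers, not the paper's data.

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal-variance form) for testing
    whether two sample means differ."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# toy feature values for 30 normal and 30 glaucoma images (hypothetical)
rng = np.random.default_rng(0)
normal_vals = rng.normal(0.80, 0.05, 30)
glaucoma_vals = rng.normal(0.60, 0.05, 30)

t = two_sample_t(normal_vals, glaucoma_vals)
# |t| beyond the critical value (about 2.00 for 58 degrees of freedom at the
# 5% level) rejects the null hypothesis of equal means
```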
The ratio of the diameter of the optic cup to that of the optic
disc in a healthy eye is generally less than 0.5 [27]. When
the optic nerve is damaged by glaucoma, many of the individual
fibers that make up the nerve are lost and the optic nerve becomes
excavated. As glaucoma progresses, more optic nerve tissue
is lost and the optic cup grows larger. Thus, the cup-to-disc
ratio is higher for glaucoma subjects than for normal subjects,
thus leading to differences in the respective fundus images.
Moreover, in the case of optic nerve hemorrhage, another sign of
glaucoma-related damage, the blood typically collects along the
individual nerve fibers that radiate outward from the nerve. Such
physiological changes are manifested in the fundus images, and
our experiments show that the HOS and texture features are able
to quantify such differences in eye physiology.
Table IV summarizes our classification results with and without feature ranking, and with and without feature selection.
We applied fivefold cross validation for training and testing
purposes. LibSVM outperformed the other classifiers without
the ranking of features, while random forest performed better
than other methods with IG feature ranking. However, feature-ranking methods do not improve the performance of the other classifiers. Significantly, feature selection based on any of the three feature-ranking methods outperformed feature ranking alone. This result demonstrates that some of the features do not contribute to the classification, or even hamper the performance of classifiers on the diagnosis of glaucoma.
The feature-selection methods were subsequently adopted for
normalization-based studies (see Table V). Table V summarizes
the classification results obtained as a result of normalization.
The classifiers generally maintained their performance or presented improved accuracies when the normalization was performed on the features as an aggregate. The z-score-normalization-based method outperformed the min-max-normalization-based method without feature ranking and feature selection. We attribute this performance to the fact that z-score normalization reassigns values based on the variance around the mean (resulting in a mean of zero for the normalized values), while min-max normalization exploits the boundary values. Consequently, min-max normalization is sensitive to outliers and leads to increased skewness. Thus, we applied the feature-selection methods on the z-score normalization result. All four classifiers performed the same using the different feature-selection methods. Random forest obtained the highest classification accuracy at 91.7% for z-score normalized (all features) data. We also demonstrated that the combination of min-max normalization and chi-squared feature selection did not improve on the classification accuracy obtained using chi-squared feature selection on the original features.
VII. CONCLUSION

In this study, we developed an automatic glaucoma diagnosis system that combines texture and HOS features extracted from fundus images. We found that the texture- and HOS-based features were clinically significant, i.e., these features had low p-values, which means that these features take very different values for the normal and abnormal classes and, hence, better discriminate
the two classes. We, therefore, used these features for classification. The performances of four supervised classifiers were
evaluated. We found that the random-forest classifier, combined
with z-score normalization and feature-selection methods, performed the best among the four classifiers with a classification
accuracy of more than 91%. Our technique is of clinical significance, as the accuracy obtained is comparable to the accuracies achieved so far by other studies, and the equipment used is the most commonly used fundus-imaging equipment. Therefore, our proposed technique can be easily incorporated into
existing medical infrastructures, thus making it a clinically viable option. The classification accuracy can be further improved by increasing the number of diverse training images, choosing better features and classifiers, and using controlled environmental lighting conditions during image acquisition. Using
more diverse digital fundus images from normal and glaucoma
subjects can further enhance the percentage of correct diagnosis. Physicians and other clinical practitioners can employ our
proposed approach within a decision support system that offers
secondary opinion in the clinical diagnosis of glaucoma.
APPENDIX

HOS features are as follows.
Mean of magnitude:

Mave = (1/L) Σ_Ω |B(f1, f2)|   (7)

where L is the number of points within the region Ω.
Phase entropy:

Pe = -Σ_n p(ψn) log p(ψn).   (8)

Normalized bispectral entropy (BE 1):

Ent1 = -Σ_n pn log pn   (9)

where pn = |B(f1, f2)| / Σ_Ω |B(f1, f2)|, and Ω is the nonredundant region shown in Fig. 3.
Normalized bispectral squared entropy (BE 2):

Ent2 = -Σ_n qn log qn   (10)

where qn = |B(f1, f2)|² / Σ_Ω |B(f1, f2)|².
Normalized bispectral cubic entropy (BE 3):

Ent3 = -Σ_n rn log rn   (11)

where rn = |B(f1, f2)|³ / Σ_Ω |B(f1, f2)|³.
Co-occurrence-matrix-based features are as follows.
Energy:

E = Σ_i Σ_j [Pd(i, j)]².   (12)

Contrast:

Co = Σ_i Σ_j (i - j)² Pd(i, j).   (13)

Homogeneity:

H = Σ_i Σ_j Pd(i, j) / (1 + (i - j)²).   (14)

Entropy:

En = -Σ_i Σ_j Pd(i, j) ln Pd(i, j).   (15)

Moments m1, m2, m3, and m4:

mg = Σ_i Σ_j (i - j)^g Pd(i, j),  g = 1, 2, 3, 4.   (16)

Difference-vector-based features are as follows.
Angular second moment:

ASM = Σ_{k=0}^{n-1} Pd(k)².   (17)

Contrast:

Con = Σ_{k=0}^{n-1} k² Pd(k).   (18)

Entropy:

Ent = -Σ_{k=0}^{n-1} Pd(k) log Pd(k).   (19)

Mean:

Mean = Σ_{k=0}^{n-1} k Pd(k).   (20)

Run-length-matrix-based features are as follows.
Short-run emphasis:

SRE = [Σ_i Σ_j P(i, j)/j²] / [Σ_i Σ_j P(i, j)].   (21)

Long-run emphasis:

LRE = [Σ_i Σ_j j² P(i, j)] / [Σ_i Σ_j P(i, j)].   (22)

Gray-level nonuniformity:

GLN = [Σ_i (Σ_j P(i, j))²] / [Σ_i Σ_j P(i, j)].   (23)

Run-length nonuniformity:

RLN = [Σ_j (Σ_i P(i, j))²] / [Σ_i Σ_j P(i, j)].   (24)

Run percentage:

RP = [Σ_i Σ_j P(i, j)] / A   (25)

where A is the area of the image of interest.





REFERENCES

[1] J. Nayak, U. R. Acharya, P. S. Bhat, A. Shetty, and T. C. Lim, "Automated diagnosis of glaucoma using digital fundus images," J. Med. Syst., vol. 33, no. 5, pp. 337-346, Aug. 2009.
[2] Glaucoma Research Foundation. (2009). [Online]. Available: http://www.
[3] R. Nagarajan, C. Balachandran, D. Gunaratnam, A. Klistorner, and S. Graham, "Neural network model for early detection of glaucoma using multi-focal visual evoked potential (M-VEP)," Invest. Ophthalmol. Vis. Sci., vol. 42, 2002.
[4] M. J. Greany, D. C. Hoffman, D. F. Garway-Heath, M. Nakla, A. L. Coleman, and J. Caprioli, "Comparisons of optic nerve imaging methods to distinguish normal eyes from those with glaucoma," Invest. Ophthalmol. Vis. Sci., vol. 43, no. 1, pp. 140-145, Jan. 2002.
[5] C. Bowd, K. Chan, L. M. Zangwill, M. H. Goldbaum, T. W. Lee, T. J. Sejnowski, and R. N. Weinreb, "Comparing neural networks and linear discriminant functions for glaucoma detection using confocal scanning laser ophthalmoscopy of the optic disc," Invest. Ophthalmol. Vis. Sci., vol. 43, pp. 3444-3454, 2002.
[6] D. Bizios, A. Heijl, and B. Bengtsson, "Trained artificial neural network for glaucoma diagnosis using visual field data: A comparison with conventional algorithms," J. Glaucoma, vol. 16, no. 1, pp. 20-28, Jan. 2007.
[7] M. L. Huang, H. Y. Chen, and J. J. Huan, "Glaucoma detection using adaptive neuro-fuzzy inference system," Expert Syst. Appl., vol. 32, no. 2, pp. 458-468, Jan. 2007.
[8] E. H. Galilea, G. Santos-Garcia, and I. F. Suarez-Barcena, "Identification of glaucoma stages with artificial neural networks using retinal nerve fibre layer analysis and visual field parameters," Adv. Soft Comput., vol. 44, pp. 418-424, 2007.
[9] M. Balasubramanian, S. Zabic, C. Bowd, H. W. Thompson, P. Wolenski, S. S. Iyengar, B. B. Karki, and L. M. Zangwill, "A framework for detecting glaucomatous progression in the optic nerve head of an eye using proper orthogonal decomposition," IEEE Trans. Inf. Technol. Biomed., vol. 13, no. 5, pp. 781-793, Sep. 2009.
[10] M. L. Huang, H. Y. Chen, W. C. Huang, and Y. Y. Tsai, "Linear discriminant analysis and artificial neural network for glaucoma diagnosis using scanning laser polarimetry: variable cornea compensation measurements in Taiwan Chinese population," Graefes Arch. Clin. Exp. Ophthalmol., vol. 248, no. 3, pp. 435-441, Mar. 2010.
[11] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 2001.
[12] U. R. Acharya, K. C. Chua, E. Y. K. Ng, W. Wei, and C. Chee, "Application of higher order spectra for the identification of diabetes retinopathy stages," J. Med. Syst., vol. 32, no. 6, pp. 481-488, 2008.
[13] C. L. Nikias and A. P. Petropulu, Higher-Order Spectra Analysis: A Nonlinear Signal Processing Framework. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[14] V. Chandran and S. L. Elgar, "Pattern recognition using invariants defined from higher order spectra one-dimensional inputs," IEEE Trans. Signal Process., vol. 41, no. 1, pp. 205-212, Jan. 1993.
[15] K. C. Chua, V. Chandran, U. R. Acharya, and C. M. Lim, "Cardiac state diagnosis using higher order spectra of heart rate variability," J. Med. Eng. Technol., vol. 32, no. 2, pp. 145-155, Mar. 2008.
[16] K. C. Chua, V. Chandran, U. R. Acharya, and C. M. Lim, "Analysis of epileptic EEG signals using higher order spectra," J. Med. Eng. Technol., vol. 33, no. 1, pp. 42-50, Jan. 2009.
[17] J. H. Tan, E. Y. K. Ng, and U. R. Acharya, "Study of normal ocular thermogram using textural parameters," Infrared Phys. Technol., vol. 53, no. 2, pp. 120-126, Mar. 2010.
[18] F. Tomita and S. Tsuji, Computer Analysis of Visual Textures. Boston, MA: Kluwer, 1990.
[19] J. S. Weszka and A. Rosenfeld, "An application of texture analysis to material inspection," Pattern Recogn., vol. 8, no. 4, pp. 195-200, Oct.
[20] M. M. Galloway, "Texture classification using gray level run length," Comput. Graph. Image Process., vol. 4, no. 2, pp. 172-179, Jun. 1975.
[21] C.-C. Chang and C.-J. Lin. (2001). LIBSVM: A library for support vector machines. [Online]. Available:
[22] P. Chowriappa, S. Dua, J. Kanno, and H. W. Thompson, "Protein structure classification based on conserved hydrophobic residues," IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 6, no. 4, pp. 639-651, Oct./Dec. 2009.
[23] J. Platt, "Fast training of support vector machines using sequential minimal optimization," in Advances in Kernel Methods: Support Vector Learning. Cambridge, MA: MIT Press, 1999, pp. 185-208.
[24] J. Platt, "Machines using sequential minimal optimization," in Advances in Kernel Methods: Support Vector Learning, B. Schoelkopf, C. Burges, and A. Smola, Eds., 1998.
[25] Y. Qi, J. K. Seetharaman, and Z. B. Joseph, "Random forest similarity for protein-protein interaction prediction for multiple sources," Pacific Symp. Biocomput., vol. 10, pp. 531-542, 2005.
[26] C. Chen, A. Liaw, and L. Breiman, "Using random forest to learn imbalanced data," Dep. Statistics, Univ. California, Berkeley, Tech. Rep. TR-666.
[27] Glaucoma guide. (2010). [Online]. Available: http://www.medrounds.

Authors' photographs and biographies not available at the time of publication.