
Applied Soft Computing 30 (2015) 1–13


Using an unsupervised approach of Probabilistic Neural Network


(PNN) for land use classification from multitemporal satellite images
Jawad Iounousse ∗ , Salah Er-Raki, Ahmed El Motassadeq, Hassan Chehouani
LP2M2E, Faculty of Sciences and Techniques, Cadi Ayyad University, Marrakesh, Morocco

Article history:
Received 22 July 2013
Received in revised form 6 November 2014
Accepted 21 January 2015
Available online 30 January 2015

Keywords:
Unsupervised classification
Probabilistic Neural Network
Ward's method
Cluster validity index
Land use
LANDSAT and SPOT images

Abstract

The aim of this work is to develop an unsupervised approach based on the Probabilistic Neural Network (PNN) for land use classification. A time series of high spatial resolution images acquired by LANDSAT and SPOT has been used to first generate profiles of the Normalized Difference Vegetation Index (NDVI), which are then used for the classification procedure. The proposed method implements a cluster validity technique in the PNN, using Ward's method to obtain the clusters. The procedure is completely automatic, with no parameter adjustment and instantaneous training, produces good estimates of the number of clusters, and provides a new point of view on using the PNN as an unsupervised classifier. The obtained results show that this approach gives an accurate classification, with an error of about 3.44% with respect to the real land use, and performs better than the usual unsupervised classification methods (fuzzy c-means (FCM) and K-means).

© 2015 Elsevier B.V. All rights reserved.

∗ Corresponding author. Tel.: +212 524 43 34 04.
E-mail addresses: iounousse@gmail.com (J. Iounousse), s.erraki@gmail.com (S. Er-Raki), motassadeq@gmail.com (A. El Motassadeq), chehouani@fstg-marrakech.ac.ma (H. Chehouani).
http://dx.doi.org/10.1016/j.asoc.2015.01.037

1. Introduction

Classification is one of the most useful tasks of human behavior. It aims at identifying groups of similar objects in the sense of a homogeneity criterion and therefore helps to discover the distribution of patterns and interesting correlations in large data sets. Its application plays an important role in resolving many problems in pattern recognition [1], imaging, color image segmentation [2], data mining [3] and in different domains such as medicine [4], biology [5,6], marketing [7], energy [8], remote sensing (especially land use) [9–11], etc.

There are two main approaches to classification: supervised and unsupervised. In the first one, the user defines the classes, which can be conceived as a finite set. The main task is to search the patterns and then construct their corresponding mathematical models. The consistency of those models is evaluated on the actual data. The most used supervised classification methods are maximum likelihood classification (MLC) [12], the parallelepiped method (PP) [13], fuzzy sets [14–18], neural networks (NNs) [19,20], support vector machines (SVM) [21,22] and computational intelligence [23]. On the other hand, the basic task of unsupervised learning methods is to develop classification labels automatically. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming groups, labeled clusters. In remote sensing, for example, the unsupervised methods commonly used are split-and-merge [24], ISODATA [25], K-means, fuzzy c-means (FCM) [26,27], NN-based methods [28,29] and scale space techniques [30].

Zhang [31] reported that classification is the most investigated topic of NNs. Furthermore, it has been noted that NNs are a promising alternative to various conventional classification methods. The advantages of using NNs are due to the following theoretical aspects. First, NNs are self-adaptive methods as they can adjust themselves to data without any explicit specification of functional or distributional form for their underlying structure. The user can adjust the learning parameters by setting up the initial weights of the network and selecting the correct number of hidden layers and nodes at each layer. Second, NNs can approximate any function with arbitrary accuracy [32–34]. So, any classification procedure seeks a functional relationship between the group membership and the attributes of the object. In fact, if the user disposes of different networks with a variety of methods using multivariate training data formats, it can be easy to get an accurate identification of this underlying function. Finally, NNs are able to estimate the posterior probabilities using the Bayes rule. These probabilities provide the basis to establish a classification rule and perform statistical analysis [35]. For classification tasks, the Probabilistic Neural Network (PNN) is one of the most used NNs. It is

a special form of radial basis function NN (RBFNN). In addition, it
is considered as an implementation of the Bayes optimal decision
rule in the NN form based on nearest neighbor classifiers [36,37].
Several recent studies [4,8,38–46] used PNN for classification and
showed that this method provides satisfactory results if the ini-
tial target classes are defined correctly. In this way, finding the
basis function centers (classes) with their appropriate number is an
important step to achieve suitable classification. This can be proved
by several reasons as cited by Tsekouras and Tsimikas [47]. First, the
activation of each hidden node depends exclusively on the distance
between the center and the current input vector. Second, in the
neuron construction, the distribution of neuron’s receptive fields
across the feature space is strongly linked to the locations of the
respective centers. Third, the underlying data structure is revealed
by these centers. They affect directly the following neurons output.
Fourth, the estimation of the widths directly depends on the loca-
tions of the centers. The classification performance depends heavily
on selecting appropriate spread values. Too small spread values
give very spiky Probability Density Functions (PDFs) whereas too
large spread values smooth out the details. The idea of using clus-
tering algorithms in training RBFNN design has been addressed
by several authors [47–55]. Pedrycz [50] applied the conditional
fuzzy clustering (modified FCM) in the input space. This method has
embedded the output data using the clusters weights calculated as
feedback information into the input mechanism. Uykan et al. [55]
employed the K-means model and showed that the main impact of the input–output clustering is the minimization of an upper bound of the network's mean square error. Staiano et al. [54] used fuzzy clustering to generate the clusters in the input space and, for each cluster, established an input–output relationship through local linear regression models. Tsekouras and Tsimikas [47] proposed an algorithm to select the optimal values for the basis function centers of an RBFNN. This algorithm uses the output space to adjust the input partition by combining input–output fuzzy clustering and particle swarm optimization.

Based on the state of the art cited above, it seems that the major challenge in clustering is to determine the optimal number of clusters to best fit a data set. In most clustering methods, experimental evaluations of 2D/3D data sets are used in order to visually check the validity of the results (i.e. how well the clustering algorithm discovers the clusters of the data set). But in the case of large multidimensional (more than three dimensions) data sets like multidimensional remote sensing images, effective visualization of the data set would be difficult. Moreover, the perception of clusters using available visualization tools is a difficult task for humans who are not accustomed to higher dimensional spaces and complex sets of data. To overcome this problem, many techniques based on cluster analysis have been developed in order to group either the data or the variables into clusters. To do so, many criteria have been described, like partitioning methods, hierarchical clustering, etc. One of the most widespread hierarchical clustering methods is Ward's method [56–64]. According to Hands and Everitt [64], this method achieves better results than other hierarchical methods (single-link, complete linkage, median, average linkage, etc.), especially when the group proportions are approximately equal.

In this paper, we design an unsupervised approach for land classification. It is based on a different way of implementing the clustering in the PNN (RBFNN design). Ward's method [56] is used in training the input targets. A cluster validity function, generally applied in fuzzy clustering [65–67], is developed in the hidden layer output space of the PNN by varying the number of classes to find the optimal number of clusters. The proposed model is first tested on Fischer's Iris data set [75,76] and on synthetic grayscale and RGB digital images. The consistency of this approach is assessed through a comparison with FCM clustering using the same concept of cluster analysis. The approach is then applied to time series remote sensing images acquired by LANDSAT and SPOT to build a land use map. Finally, the obtained results are validated against the real land use and compared with the results of usual classification methods (FCM and K-means).

Fig. 1. Overview of the study area (false color composition).

2. Study area and data description

The region of interest is an irrigated area located in the Haouz plain in the center of the Tensift basin (Central Morocco), 40 km east of Marrakech city. The climate is of semi-arid Mediterranean type with an average annual precipitation of about 250 mm, of which 70% falls during winter and spring. The area covers about 2800 ha and is mostly flat. It has been extensively studied during the 2002–2003, 2003–2004 and 2005–2006 agricultural seasons [69–72]. The main land cover classes are cereals, mostly wheat, then barley, and a significant portion is left in fallow or not cultivated (Fig. 1). More details about the study area and the climate of the region can be found in [68–72]. The vegetation development in this area is affected by a great inter-annual and/or intra-annual heterogeneity [72]. Thus, the land cover maps require annual updates. Therefore, the effort was directed toward the development of land cover classification methods based on remote sensing data. A time series of images acquired by SPOT and LANDSAT was collected during the growing season of wheat (November 2002–June 2003) in order to extract vegetation profiles. Due to cloudiness or uncertainty in atmospheric corrections, only seven images have been used in this study. These images, with a size of 122,500 pixels arranged in 350 columns and 350 rows, were radiometrically calibrated, atmospherically corrected based on the reflectance of invariant objects, and transformed to NDVI maps [72]. The NDVI was derived from the red and near-infrared reflectance bands as follows:

NDVI = \frac{NIR - RED}{NIR + RED}    (1)

where NIR and RED are the reflectance measured in the near-infrared and red bands, respectively.
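As a minimal illustration of Eq. (1), the sketch below (Python with NumPy; function name and the small epsilon guard are our assumptions, not part of the original processing chain) computes an NDVI map from two co-registered reflectance bands:

import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index, Eq. (1): (NIR - RED) / (NIR + RED)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Usage sketch: for a 350 x 350 scene as used in this study,
# ndvi_map = ndvi(nir_band, red_band)   # nir_band, red_band: 2-D reflectance arrays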

Fig. 2. Architecture of the PNN.


3. Description of the Probabilistic Neural Network

Introduced in 1990 by Specht [36,37], Probabilistic Neural Networks (PNNs) are based on the concept of utilizing a non-parametric estimator (Parzen window) for obtaining multivariate probability density estimates. In contrast to classical RBFs, PNNs are only used for classification and they compute conditional class probabilities p(class k | x) for each of C classes. A typical PNN consists of an input layer, a pattern layer (hidden layer) and a competitive output layer. The structure of a PNN is shown in Fig. 2. Similar to RBFs, PNNs receive D-dimensional feature vectors x = (x_1, ..., x_D) as input. This input vector is applied to the input neurons x_i (1 ≤ i ≤ D) and is passed to the neurons in the hidden layer. Here, the hidden nodes are collected into groups: one group for each of the C classes. Each hidden node in the group for class k (1 ≤ k ≤ C) corresponds to a Gaussian function centered on its associated feature vector in the kth class (there is a Gaussian for each exemplar feature vector), called a Probability Density Function (PDF). The PDF for a single sample x_k is written as follows:

f_k(x) = \frac{1}{(2\pi)^{D/2}\sigma^{D}} \exp\!\left(-\frac{\|x - x_k\|^{2}}{2\sigma^{2}}\right)    (2)

where \sigma is the smoothing parameter of the Gaussians, D is the dimension of the input vector x and \|x - x_k\| = \sqrt{\sum_i (x_i - x_{k,i})^{2}} is the Euclidean distance between the vectors x and x_k. All of the Gaussians in a class group feed their functional values to the same output layer node for that class, so there are C output nodes. The kth output node sums these multivariate densities to produce a vector of probabilities representing the average of the PDFs for the C samples:

p_k(x) = \frac{1}{(2\pi)^{D/2}\sigma^{D} C} \sum_{i=1}^{C} \exp\!\left(-\frac{\|x - x_i\|^{2}}{2\sigma^{2}}\right)    (3)

Finally, a competitive transfer function gives 1 for the input class which has the maximum joint PDF and 0 for all other classes. An unknown input x belongs to class k if p_k(x) > p_{k'}(x) for all k' ≠ k. Therefore, the neuron in the decision layer determines the class belongingness of the pattern x by (4), in accordance with Bayes's decision rule under the following assumption:

c(x) = \arg\max_k \{p_k(x)\}, \quad k = 1, 2, \ldots, C    (4)

where c(x) is the estimated class of the pattern x.

The PNN is commonly used as a supervised classifier in various applications but it is less exploited in remote sensing. Foody [73] proved that the PNN was able to map land cover for an agricultural site more accurately than other networks like the Multilayer Perceptron (MLP) and the RBFNN. Furthermore, the accuracy of the PNN classification could be increased through the incorporation of prior probabilities of class membership. However, the accuracy of each classification could also be degraded by the presence of an untrained class [73]. Thus, it is essential to choose the appropriate classes.
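To make Eqs. (2)–(4) concrete, the sketch below (Python/NumPy; function names are hypothetical and this is an illustration of the equations, not the authors' implementation) evaluates the per-class Gaussian responses for one input vector and applies the competitive (argmax) layer:

import numpy as np

def pnn_class_probabilities(x, centers_per_class, sigma):
    """Averaged Gaussian PDFs per class (Eqs. (2)-(3)) for a single input vector x.

    centers_per_class: list with one (N_k, D) array of exemplar vectors per class.
    sigma: smoothing parameter of the Gaussians (Eq. (2)).
    """
    x = np.asarray(x, dtype=float)
    D = x.size
    norm = (2.0 * np.pi) ** (D / 2.0) * sigma ** D
    probs = []
    for centers in centers_per_class:
        d2 = np.sum((centers - x) ** 2, axis=1)              # squared Euclidean distances
        probs.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean() / norm)
    return np.array(probs)

def pnn_classify(x, centers_per_class, sigma):
    """Competitive output layer, Eq. (4): return the index of the class with the largest PDF."""
    return int(np.argmax(pnn_class_probabilities(x, centers_per_class, sigma)))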

Fig. 3. Flowchart of the automation procedure for PNN.
Fig. 4. Flowchart describing the functional steps of the automation procedure for PNN.

4. Automation of PNN classification

The PNN algorithm initially requires setting the modes (the centers of the Gaussian functions), which are not evident to find. The choice of the modes and their number should be made without errors. An evaluation methodology is required to determine and choose the optimal number of clusters C*. This methodology is usually called cluster validity. To make the PNN automatic, we use the summation of PDFs at the output of its hidden layer, which takes the form of a matrix of probabilities. This matrix makes it possible to calculate a validity index (V) according to the variation of the class number C in a given interval [Cmin; Cmax] in order to determine the adequate number of clusters. Cmin and Cmax are respectively the minimum and maximum numbers of possible classes, fixed beforehand by the user. The optimal number of classes is obtained when V reaches its maximum value. The flowchart in Fig. 3 illustrates the developed automation procedure for PNN, and Fig. 4 describes its functional stages, summarized in the following steps (a minimal code sketch of this loop is given after the list):

(1) Proceed by hierarchical agglomerative classification using Ward's method applied to the input data to obtain the C clusters.
(2) Apply the PNN algorithm by implementing the C clusters found in step 1 as input targets.
(3) Calculate V corresponding to the obtained classification. V requires the values of the probability matrix produced at the output of the PNN's hidden layer (see Section 4.2).
(4) Repeat step 1 for different values of C. The number of classes C can be chosen in an interval [Cmin; Cmax]. Otherwise, all possible numbers of classes are taken.
(5) Select the optimal number of clusters C* corresponding to the maximum value of V.
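The loop below is a minimal sketch of steps (1)–(5) in Python/NumPy. The helper names ward_centers and validity_index are hypothetical (minimal versions are sketched after Sections 4.1 and 4.2), and the constant factors of the Gaussian kernel in Eq. (2) are omitted for brevity, so this is an illustrative sketch rather than the authors' implementation:

import numpy as np

def hidden_layer(X, centers, spreads):
    """PNN hidden layer: one Gaussian per Ward center, evaluated on every sample.
    Returns the (C, N) probability matrix P used by the validity index V (normalizing
    constants of Eq. (2) omitted)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (N, C) squared distances
    return np.exp(-d2 / (2.0 * spreads[None, :] ** 2)).T            # (C, N)

def automatic_pnn(X, c_min, c_max, ward_centers, validity_index):
    """Steps (1)-(5): keep the class number C whose partition maximizes V."""
    best = None
    for C in range(c_min, c_max + 1):
        centers, spreads = ward_centers(X, C)      # step 1: Ward's method (Section 4.1)
        P = hidden_layer(X, centers, spreads)      # step 2: PNN hidden-layer responses
        V = validity_index(P)                      # step 3: cluster validity, Eq. (7)
        if best is None or V > best[0]:            # steps 4-5: scan C and keep the maximum
            best = (V, C, centers, spreads)
    return best   # (V*, C*, centers and spreads of the optimal partition)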
4.1. Ward's method for defining the centers of the Gaussian functions

In statistics, Ward's method [56] is a criterion applied in hierarchical agglomerative clustering. This method provides a set of partitions into less detailed classes obtained by successively combining the groups. The idea is to build a dendrogram, a tree of data that successively merges similar groups of points. This dendrogram is obtained by hierarchical ascending clustering: we first combine the two closest elements, which form a "summit". Only (n − 1) objects then remain, and we iterate the process until a single complete group is obtained. The general pseudo code of hierarchical agglomerative clustering is written as follows:

(1) Begin with N clusters, each containing one object, and number the clusters 1 through N.
(2) Compute the between-cluster distance dist(A, B) as the between-object distance of the two objects in A and B respectively, with A, B = 1, 2, ..., N. Let the square matrix D = dist(A, B). If the objects are represented by vectors, use the Euclidean distance.
(3) Find the most similar pair of clusters A and B, such that the distance dist(A, B) is minimal among all the pairwise distances.
(4) Merge A and B into a new cluster C and compute the between-cluster distance dist(C, k) for any existing cluster k ≠ A, B. Once the distances are obtained, delete the rows and columns corresponding to the old clusters A and B in the matrix D, since A and B do not exist anymore. Then add a new row and column in D corresponding to cluster C.
(5) Repeat steps 3 and 4 a total of N − 1 times, until there is only one cluster left.

Ward's method is distinct from other methods because it uses an analysis of variance approach to evaluate the distances between clusters, and it is therefore very efficient. At each stage, the Ward objective is to find the two clusters whose merger gives the minimum increase in the total within-group error sum of squares (or distances between the centroids of the merged clusters). The Ward distance between two classes is the squared distance between their centroids, weighted by the sizes of the two clusters. It is defined as follows:

dist(A, B) = \frac{p_A p_B}{p_A + p_B}\, d^{2}(g_A, g_B)    (5)

where g_A and g_B are the gravity centers of classes A and B, with weights p_A and p_B.

Because Ward's method minimizes the sum of within-group sums of squares (squared error criterion), the clusters tend to be hyperspherical, i.e. spherical in the multidimensional D-space, and to contain roughly equal numbers of objects if the observations are evenly distributed through the D-space. This criterion is the most accurate in hierarchical ascending clustering on Euclidean data, particularly when the elements are close. In this paper, we used Ward's method to obtain the centers of the Gaussian functions in the hidden layer (see the sketch after this section). In order to reduce the overlap of the centers, the widths of the radial basis functions are locally determined using a spread equal to half of the minimum distance between neighboring centers.
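A minimal sketch of how the Gaussian centers and their local spreads could be derived with Ward's method, using SciPy's standard hierarchical-clustering routines, is given below. The half-minimum-distance spread rule follows the paragraph above; the function name and the use of cluster means as centers are our assumptions:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

def ward_centers(X, C):
    """Cut a Ward dendrogram of X (N, D) into C clusters; return centers and local spreads."""
    Z = linkage(X, method="ward")                     # hierarchical agglomeration, Ward criterion
    labels = fcluster(Z, t=C, criterion="maxclust")   # cluster labels in 1..C
    centers = np.vstack([X[labels == k].mean(axis=0) for k in range(1, C + 1)])
    # Spread of each basis function: half the distance to its nearest neighboring center.
    d = cdist(centers, centers)
    np.fill_diagonal(d, np.inf)
    spreads = 0.5 * d.min(axis=1)
    return centers, spreads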

4.2. Proposed cluster validity index for the optimal number of modes

Cluster analysis aims at identifying groups of similar objects and therefore helps to discover interesting distributions of patterns and correlations in large data sets. Most clustering algorithms need to know the right number of classes C*. However, it is generally difficult to predict this number for an accurate separation of the data set. If it is too large, one or more good compact clusters may be broken. In contrast, if it is too small, more than one separate cluster may be merged. The problem of finding C* is usually called cluster validity. A large number of cluster validity indices are available in the literature [65–67,74]. In this paper, the proposed cluster validity function is inspired by Dave's Modified Partition Coefficient (MPC) used for fuzzy partitions [74]. MPC is defined as:

MPC(C, U, N) = \frac{C \sum_{j=1}^{N} \sum_{i=1}^{C} (u_{ij})^{m} - N}{N(C - 1)}    (6)

where m is the fuzzification coefficient, N the number of vectors to be classified, C the number of classes and u_{ij} is the element of the partition matrix U of size C × N representing the membership of the pattern x_j to the cluster C_i.

Before introducing the proposed cluster validity index V, we first use the summation of Gaussians produced by the computed clusters at the output of the PNN's hidden layer (see Section 3). This layer yields the probability matrix P = [p_{kj}]_{C×N}, which represents the membership of the jth data input to the kth cluster. As P takes the same form as U in Eq. (6) and the PNN's competitive function selects the maximum of these probabilities, V is given by the following equation:

V(C, P, N) = \frac{C \sum_{j=1}^{N} \max_{1 \le k \le C}(p_{kj}) - N}{N(C - 1)}    (7)

where P = [p_{kj}]_{C×N} is the membership matrix at the output of the PNN's hidden layer, whose jth column is the vector of probabilities of the jth data input, and max(P) is the maximum value of P associated with each input. In other words, max(P) identifies the closest cluster to the input. The values of V range in [0; 1]. By varying C, the maximum of the proposed index corresponds to the optimal distribution of clusters and produces the best clustering performance for the data set.
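A direct transcription of Eq. (7) is sketched below (Python/NumPy, hypothetical function name). The paper does not spell out how P is scaled before V is evaluated; here we assume each column of P is normalized to sum to one so that, like U in Eq. (6), its entries behave as memberships in [0, 1]:

import numpy as np

def validity_index(P, eps=1e-12):
    """Cluster validity index V of Eq. (7) from the (C, N) hidden-layer matrix P."""
    C, N = P.shape
    if C < 2:
        return 0.0
    P = P / (P.sum(axis=0, keepdims=True) + eps)      # assumed column normalization (see lead-in)
    return (C * P.max(axis=0).sum() - N) / (N * (C - 1))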
4.3. Tests and comparison

We performed different tests on different types of data. We started with the famous Fischer's Iris data set, then tested the method on the simple case of a synthetic grayscale image and finally on digital RGB images. All results are compared with the results of the FCM clustering algorithm using the same concept of cluster validity.

4.3.1. Test using Fischer's Iris dataset
This data set contains random samples of flowers belonging to three species of iris: setosa, versicolor and virginica [75,76]. For each of the species, fifty observations of four features (sepal length, sepal width, petal length and petal width) are recorded. We applied the proposed algorithm and FCM clustering by choosing the number of classes C in the range [Cmin = 2; Cmax = 6]. Table 1 summarizes the obtained results. Both methods give the optimal cluster number estimate C* = 3 for the Iris data set, but they differ in classification accuracy. Table 2 shows the detected samples of the three Iris species and the classification accuracy of the two algorithms, with a notable advantage for the proposed PNN classifier.

Table 1
Variability of cluster validity indexes with C for Fischer's Iris dataset.

C classes     2      3      4      5      6
V (PNN)       0.681  0.697  0.591  0.605  0.628
MPC (FCM)     0.663  0.675  0.609  0.531  0.528

Table 2
The correct samples of iris flowers detected and the accuracy.

Methods         Setosa  Versicolor  Virginica  Accuracy
Automatic PNN   50      48          36         89.33%
Automatic FCM   50      47          33         86.66%

4.3.2. Test using a synthetic grayscale image
We tested the proposed method on a synthetic image representing a gradient of eight levels of gray. In this case, we chose a number of classes C in the range [Cmin = 3; Cmax = 10] to see whether the algorithm is capable of determining the exact number of classes. Table 3 summarizes the results obtained by the unsupervised PNN and FCM. The maximum validity index (0.969) corresponds to a class number of C* = 8 for the proposed approach, while C* = 7 for FCM clustering. Fig. 5 represents the original and the classified images using the two methods. We can easily note that FCM has detected a false number of classes.

Table 3
Variability of cluster validity indexes with C for synthetic grayscale image.

C classes     3      4      5      6      7      8
V (PNN)       0.721  0.752  0.741  0.734  0.844  0.969
MPC (FCM)     0.706  0.728  0.701  0.785  0.894  0.878

Fig. 5. (a) synthetic grayscale image. Classified images: (b) using automatic FCM, (c) using automatic PNN.

4.3.3. Test using a digital RGB image
In this case, we increase the color space to three channels (Red, Green and Blue). We used an RGB image of a Moroccan tile which contains five colors, to show whether the proposed algorithm is able to give the exact number of colors and to perform a meaningful classification. The range of C chosen is [Cmin = 2; Cmax = 8]. The results are illustrated in Table 4 and represented in Fig. 6.

Table 4
Variability of cluster validity indexes with C for image of Moroccan tile.

C classes     3      4      5      6      7      8
V (PNN)       0.878  0.881  0.882  0.792  0.753  0.777
MPC (FCM)     0.810  0.844  0.829  0.812  0.799  0.791

4.3.4. Comparison between clustering using FCM and PNN
We tested the same concept of cluster validity for FCM and PNN on different types of data. The results (Figs. 5 and 6 and Tables 2–4) show that the proposed method gives the appropriate number of classes where the FCM technique fails. Regardless of the number of channels in an image, the proposed method was able to distinguish between the different classes. From these tests, we can see that the unsupervised PNN is a valid and reliable classifier.

5. Application and results

After testing and comparing the proposed approach with FCM clustering over several data sets (Fischer's Iris data, grayscale and RGB digital images), the approach is applied to a sequence of seven time series NDVI remote sensing images acquired by LANDSAT and SPOT to build a land use map. The obtained land cover results are compared with the real data collected by land sampling in the framework of the VALERI Program [72,77].

For large data sets like multi-layer remote sensing images, it is desirable to first apply a spatial classification scene by scene in order to reduce the number of colors. The results are then classified in time.

To use an image as a feature vector of the PNN input, a serialization procedure is applied to transform the matrix image into a vector (taken row by row or column by column), provided that the opposite transformation is done to restore the output classified image.
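The serialization step described above amounts to reshaping each H × W (or H × W × B) scene into a list of per-pixel feature vectors and reversing the operation on the predicted labels. A minimal NumPy sketch with hypothetical names is given below:

import numpy as np

def image_to_vectors(img):
    """Flatten an (H, W) or (H, W, B) image into an (H*W, B) matrix of pixel features."""
    img = np.asarray(img)
    if img.ndim == 2:
        img = img[:, :, None]
    h, w, b = img.shape
    return img.reshape(h * w, b), (h, w)

def labels_to_image(labels, shape):
    """Opposite transformation: restore the classified image from the label vector."""
    h, w = shape
    return np.asarray(labels).reshape(h, w)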

Fig. 6. (a) RGB image of Moroccan tile. Classified images: (b) using automatic FCM, (c) using automatic PNN.

Table 5
Variability of V with C for each NDVI scene.

Number of classes C

5 6 7 8 9 10 11 12 13 14 15

V for each scene


7 Nov 0.882 0.772 0.705 0.673 0.709 0.771 0.711 0.724 0.684 0.677 0.659
25 Dec 0.755 0.670 0.671 0.656 0.664 0.696 0.663 0.668 0.677 0.710 0.694
26 Jan 0.693 0.632 0.684 0.695 0.707 0.717 0.711 0.712 0.721 0.713 0.698
11 Feb 0.710 0.646 0.666 0.693 0.699 0.708 0.714 0.705 0.695 0.706 0.695
31 Mar 0.656 0.682 0.683 0.702 0.707 0.716 0.711 0.714 0.721 0.711 0.714
18 May 0.630 0.648 0.669 0.663 0.667 0.698 0.670 0.689 0.683 0.716 0.715
27 Jun 0.853 0.690 0.650 0.705 0.700 0.696 0.685 0.690 0.690 0.667 0.721

Table 6
The effect of classification on number of levels in NDVI values for the 7 scenes.

NDVI scenes 7 Nov 02 25 Dec 02 26 Jan 03 11 Feb 03 31 Mar 03 18 May 03 27 Jun 03

Number of levels in the original scene 73 75 77 75 82 77 87


Number of levels after classification 5 5 13 11 13 14 5

5.1. Spatial classification

We applied the proposed model to each of the seven NDVI scenes for different numbers of classes C in the range [Cmin = 5; Cmax = 15]. We chose the value 5 as the minimum number of classes according to the minimum diversity of the land in the studied area [72]: bare soil, cereals, trees, trees with herbs, fallow, etc. The maximum number of classes chosen is 15, in order to represent more levels of NDVI and to keep the majority of the information in each scene. Table 5 shows, for each scene, the optimal number of classes C* obtained by comparing the V values. Table 6 presents the number of classes obtained in each scene before and after spatial classification. The obtained results show that, after the classification, the scenes with a narrow histogram (7 Nov 2002, 25 Dec 2002 and 27 Jun 2003) took 5, the minimum number of classes, while the scenes with a large histogram (26 Jan 2003, 11 Feb 2003, 31 Mar 2003 and 18 May 2003) took a number of classes greater than 10 (Fig. 7). This is logical and reasonable because, in the wheat agricultural season, there is less vegetation density in the period from 7 November to 25 December, corresponding to the cultivation period, and after the harvest (after 27 June), while the period from 26 January to 18 May, representing the growth phase, shows more vegetation density and several types of crops (wheat, barley, fallow, etc.).

The spatial classification adopted here is a compression strategy which reduces the number of levels of NDVI values in each scene without affecting the information contained in it. Therefore, the number of NDVI temporal combinations is reduced from 121,493 to 4619, allowing a minimization of the running time of the following stage.

5.2. Temporal classification

To extract the different temporal behaviors of NDVI, we applied the proposed algorithm to the time series of the seven spatially classified scenes. The cluster validity index V obtained by varying C in the range [Cmin = 5; Cmax = 15] is reported in Table 7. As shown in this table, the maximum value of V is about 0.99, which corresponds to fifteen classes.
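For the temporal step, the seven spatially classified scenes can be stacked so that every pixel becomes a 7-dimensional NDVI profile before the same automatic procedure is applied. The sketch below uses hypothetical names and reuses the helpers sketched in Section 4; it is an illustration of the arrangement, not the authors' code:

import numpy as np

def temporal_features(classified_scenes):
    """Stack T classified NDVI scenes of identical (H, W) shape into (H*W, T) profiles."""
    stack = np.stack([np.asarray(s) for s in classified_scenes], axis=-1)   # (H, W, T)
    h, w, t = stack.shape
    return stack.reshape(h * w, t), (h, w)

# Usage sketch: X, shape = temporal_features(scenes)   # scenes: list of 7 classified NDVI maps
#               best = automatic_pnn(X, 5, 15, ward_centers, validity_index)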

Fig. 7. Histograms of the 7 scenes.

Table 7
Variability of V with C for multitemporal NDVI scenes.

Number of classes C 5 6 7 8 9 10 11 12 13 14 15

Cluster validity index V 0.893 0.889 0.930 0.951 0.971 0.962 0.969 0.977 0.983 0.986 0.990

Fig. 8. The 15 obtained NDVI profiles.

Fig. 8 illustrates the temporal evolution of the fifteen obtained NDVI profiles, which are used next to identify the main crop types.

5.3. Crop type identification using NDVI profiles

The crop identification method was designed based on field observations. These field data were made up of several thematic classes, including all the species encountered and their combinations. Based on the temporal evolution of the fifteen obtained NDVI profiles (Fig. 8), they can be merged into the six following main classes:

- The bare soil class (profile 4) is evident to find. This class has a constant NDVI value around 0.15, which corresponds to clay soil [71]. Some fluctuations of NDVI can be explained by the variation of soil moisture and by small grown herbs due to rainfall events.
- Tree classes correspond to NDVI profiles that are relatively constant over time and above 0.18, taking into account that the majority of them are evergreen trees (olive and citrus trees). Moreover, there are two tree classes. The first one is the tree on bare soil class (profile 5), which is clearly identified by NDVI values higher than 0.43 with limited variations in a range of 0.17. The other class (profiles 6 and 7) is characterized by tree profiles having high NDVI range variations (>0.17), labeled as trees with herbs (i.e. on annual understory).
- Annual crop (cereal) classes are defined by NDVI values rising above 0.18, showing significant vegetation biomass. These classes are also characterized by NDVI values below 0.18 at the beginning and at the end of the growth phase (i.e. a period of bare soil), which distinguishes them from the evergreen tree classes. Annual crops include mainly cereals like wheat and barley, which can be divided into early and late classes considering their temporal NDVI profiles [71]. Five profiles (profiles 8, 9, 10, 11 and 12) represent early (wheat/barley) cultivated before 15 December, and three others (profiles 13, 14 and 15) correspond to late (wheat/barley) cultivated after 15 January, with a narrow growth phase.
- The fallow land class can be defined as land with almost no vegetation or very poorly developed wheat with low NDVI values (i.e. rainfed wheat). This class is characterized by NDVI values less than 0.4 in the growth phase (profiles 1, 2 and 3).

Table 8 shows the land cover class assigned to each NDVI profile after identification.

Table 8
NDVI profiles merging and their interpretations.

NDVI profiles (7 Nov / 25 Dec / 26 Jan / 11 Feb / 31 Mar / 18 May / 27 Jun)      Interpretation of classes

0.13  0.24  0.24  0.26  0.28  0.17  0.08
0.18  0.26  0.34  0.39  0.39  0.26  0.27    Fallow
0.18  0.21  0.22  0.23  0.36  0.25  0.27

0.12  0.15  0.20  0.19  0.13  0.17  0.14    Bare soil

0.43  0.47  0.48  0.55  0.60  0.49  0.49    Trees on bare soil

0.37  0.42  0.39  0.39  0.53  0.44  0.47    Trees with herbs
0.28  0.40  0.43  0.50  0.62  0.36  0.27

0.14  0.17  0.45  0.60  0.78  0.26  0.18
0.14  0.17  0.38  0.51  0.58  0.24  0.17
0.13  0.15  0.26  0.41  0.79  0.27  0.16    Early (wheat/barley)
0.16  0.19  0.27  0.35  0.60  0.28  0.27
0.15  0.39  0.52  0.55  0.49  0.24  0.19

0.13  0.15  0.28  0.34  0.36  0.17  0.08
0.12  0.15  0.14  0.23  0.61  0.27  0.16    Late (wheat/barley)
0.14  0.18  0.09  0.12  0.42  0.30  0.11

Fig. 9. Land cover map obtained after classification and merging.

After merging, the obtained classes give the land cover map illustrated in Fig. 9, with the following percentages: 17.24% of bare soil, 12.14% of fallow, 39.47% of late (wheat/barley), 22.44% of early (wheat/barley), 2.57% of trees on bare soil and 6.13% of trees with herbs. The obtained results are in agreement with previous studies using the same data set but with other classification techniques [71,72]. Er-Raki et al. [71] used K-means to classify the cereals and found two main classes, early and late sowing wheat, as has been found in this work. Simonneaux et al. [72] used a supervised classification method based on simple phenological criteria for each crop. This method, called decision tree [78–80], uses the minimum, the maximum or the range of NDVI as the phenological criteria. They obtained a general land cover (annual crops, trees, annual crops + trees, bare soils). By comparison with the presented classification, they did not split the annual crops class into early and late sowing cereals and did not separate it from the fallow land class. In Spain, Julien et al. [9] used the Yearly Land Cover Dynamics (YLCD) approach based on the annual behavior of LST (Land Surface Temperature) and NDVI. A time series of LANDSAT-5 images was used to classify an agricultural area into crop types using maximum likelihood classification. They obtained the main classes: cereals, irrigated and non-irrigated crops. As in this work, wheat and barley were merged into a single class (cereals) due to their NDVI similarity, while the irrigated and non-irrigated crops were separated into different classes due to strong differences in their NDVI and LST annual behaviors.

5.4. Validation of the obtained results

In order to check the accuracy of our approach, we compared the obtained land use with the real one established in the study region. During the 2002–2003 season, data sets were collected by the VALERI Program [72,77] on a series of 450 sample plots distributed across the plain (Fig. 10 and Table 9). We merged the classes representing the same type of cover: building is added to the bare soil class, olive trees to the trees on bare soil class and barley to the cereals class, in order to make a comparison with the classification results. To visualize the performance of the proposed algorithm, a matching matrix is presented in Table 10. This matrix was obtained by comparing the proposed automatic PNN classification with the validation data mentioned above. Table 11 and Fig. 11 show the results of this comparison.

Fig. 10. Mapping of the vegetation types surveyed in the region by sampling during the 2002–2003 season.

Table 9
Land cover of the region by sampling in the 2002–2003 season.

Classes                  Number of parcels   Percentage
Cereals                  234                 52.00%
Barley                   29                  6.45%
Fallow/not cultivated    59                  13.11%
Alfalfa                  4                   0.89%
Olive trees              5                   1.11%
Building                 3                   0.67%
Bare soil – fallow       77                  17.11%
Trees on bare soil       11                  2.44%
Trees with herbs         28                  6.22%
Total                    450                 100%

Fig. 11. Comparison of land cover results using the proposed classification and by sampling.

Fig. 12. Land cover map obtained after classification and merging using FCM.

Table 10
Results of matching matrix using the proposed method.

Predicted classes

Cereals Fallow Trees on bare soil Trees with herbs Bare soil Alfalfa

Actual classes
Cereals 100% 0% 0% 0% 0% 0%
Fallow 5.91% 92.6% 0% 0% 1.49% 0%
Trees on bare soil 5.08% 0% 72.39% 22.53% 0% 0%
Trees with herbs 0.6% 0% 0.85% 98.55% 0% 0%
Bare soil 0.4% 2.58% 0% 0% 97.02% 0%
Alfalfa 0% 0% 0% 0% 0% 0%
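The matching matrix of Table 10 and the overall accuracy reported below are standard products of comparing predicted and reference labels. A brief sketch of how they can be computed (hypothetical function, rows normalized as in Table 10) follows:

import numpy as np

def matching_matrix(y_true, y_pred, classes):
    """Row-normalized confusion (matching) matrix, as in Table 10, and the overall accuracy."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    M = np.zeros((len(classes), len(classes)))
    for i, a in enumerate(classes):
        for j, b in enumerate(classes):
            M[i, j] = np.sum((y_true == a) & (y_pred == b))
    overall = np.trace(M) / max(len(y_true), 1)      # proportion of correctly classified samples
    rows = M.sum(axis=1, keepdims=True)
    M = np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)   # each row as fractions of the actual class
    return M, overall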

Fig. 13. Land cover map obtained after classification and merging using K-means.

Table 11
Comparison of land cover results.

Classes                    % by sampling   % by PNN classification   Classification precision
Cereals (wheat + barley)   58.45           61.91                     100%
Fallow/not cultivated      13.11           12.14                     92.60%
Trees on bare soil         3.55            2.57                      72.39%
Trees with herbs           6.22            6.13                      98.55%
Bare soil                  17.78           17.25                     97.02%
Alfalfa                    0.89            –                         0%
Total                      100             100                       Accuracy^a = 96.56%

^a Accuracy = Σ (precision × % by sampling).

Table 12
Performance comparison between unsupervised PNN, FCM and K-means (classification precision).

Classes                    Unsupervised PNN   FCM       K-means
Cereals (wheat + barley)   100%               70.15%    74.82%
Fallow/not cultivated      92.6%              99.16%    95.35%
Trees on bare soil         72.39%             92.12%    84.51%
Trees with herbs           98.55%             92.12%    93.09%
Bare soil                  97.02%             89.98%    95.61%
Alfalfa                    0%                 0%        0%
Overall accuracy           96.56%             79%       82.02%

Note: for FCM, the two tree classes were merged into a single cluster, so the value 92.12% applies to both rows.

The overall accuracy is computed as the proportion of true predictions (samples correctly classified) [81]. The obtained classes shown in Table 11 are recognized with an overall accuracy of 96.56%, which is higher than in other studies [9,72]. This high accuracy demonstrates that the proposed approach is globally able to retrieve automatically and accurately the existing crop types in the region. The alfalfa class is characterized by an NDVI profile with frequent variations due to several cuttings, and thus it was not recognized. More successive scenes without cloudiness could overcome this misclassification.

5.5. Performance comparison between unsupervised PNN, FCM and K-means

In order to highlight the performance of the proposed method, a comparative study with other usual classification methods (FCM, K-means) was carried out using the same sequence of seven time series NDVI images. The land cover maps obtained using FCM and K-means are shown in Figs. 12 and 13, respectively. The performance comparisons between the three methods are displayed in Table 12. As expected, the FCM method gave a lower accuracy (79%) and a poorer estimate of the number of clusters: two classes (trees with herbs and trees on bare soil) are merged due to their similar cluster distributions. Regarding the K-means method, it did a reasonable job, with 82.02% accuracy and detailed classes (good number and type). In conclusion, the proposed approach using PNN provides better results, with a higher accuracy (96.56% overall), in comparison with the other methods.

6. Conclusion

In this work, we have proposed an unsupervised approach based on the Probabilistic Neural Network with the implementation of a cluster validity technique using Ward's method. This technique was first validated through a series of tests including Fischer's Iris data set and synthetic grayscale and RGB digital images. A comparison with classical automatic clustering by FCM, using the same concept of cluster validation, showed that the proposed algorithm was more accurate. The strength of this approach is its capability to solve a classification problem with an unknown number of classes. This is the concrete case of land use classification, which deals with large multidimensional data sets like multidimensional remote sensing images, where effective visualization of the data set and prediction of the class number are difficult. In this way, the developed approach was applied to a sequence of seven time series NDVI remote sensing images acquired by LANDSAT and SPOT to build a land use map. Spatial and temporal classifications were adopted. The procedure has proven its efficiency in distinguishing between different classes and determining the land cover, especially for large surfaces where the available information on soil and crops is limited. The obtained results were compared with the real land use and showed 96.56% overall accuracy, which is higher than that of other usual methods like FCM and K-means. Thus, the implementation of the cluster validity technique in the PNN gives rise to a reliable tool for data classification, especially for massive data like multilayer images.

The principal advantages of the proposed approach are: (1) it is completely automatic, with no parameter adjustment and instantaneous training; (2) it has a high ability to produce good cluster number estimates; (3) it provides a new point of view on using the PNN as an unsupervised classifier; and (4) it is rapid and easy to implement in soft computing for classification.

Acknowledgements

The authors are grateful to the International Joint Laboratory-TREMA (http://trema.ucam.ac.ma/) and CNES for providing us the satellite data.

References

[1] L. Zheng, X. He, Classification techniques in pattern recognition, in: Proceedings of the 13th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2005), 2005, pp. 77–78.
[2] V. Mohan, A. Kannan, Color image classification and retrieval using image mining techniques, Int. J. Eng. Sci. Technol. 2 (5) (2010) 1014–1020.
[3] T.N. Phyu, Survey of classification techniques in data mining, in: Proceedings of the International MultiConference of Engineers and Computer Scientists (IMECS 2009), 2009, pp. 978–988.
[4] J.S. Wang, W.C. Chiang, Y.L. Hsu, Y.T.C. Yang, ECG arrhythmia classification using a probabilistic neural network with a feature reduction method, Neurocomputing 116 (2013) 38–45.
[5] J.I. Arribas, G.V. Sánchez-Ferrero, G. Ruiz-Ruiz, J. Gómez-Gil, Leaf classification in sunflower crops by computer vision and neural networks, Comput. Electron. Agric. 78 (1) (2011) 9–18.
[6] R. Raghuraj, S. Lakshminarayanan, Variable predictive model based classification algorithm for effective separation of protein structural classes, Comput. Biol. Chem. 32 (4) (2008) 302–306.
[7] F. Kaefer, C.M. Heilman, S.D. Ramenofsky, A neural network application to consumer classification to improve the timing of direct marketing activities, Comput. Oper. Res. 32 (10) (2005) 2595–2615.
[8] N. Huang, D. Xu, X. Liu, L. Lin, Power quality disturbances classification based on S-transform and probabilistic neural network, Neurocomputing 98 (2012) 12–23.
[9] Y. Julien, J.A. Sobrino, J.-C. Jiménez-Munoz, Land use classification from multitemporal Landsat imagery using the Yearly Land Cover Dynamics (YLCD) method, Int. J. Appl. Earth Obs. Geoinf. 13 (2011) 711–720.
[10] R. Geerken, B. Zaitchik, J.P. Evans, Classifying rangeland vegetation type and coverage from NDVI time series using Fourier Filtered Cycle Similarity, Int. J. Remote Sens. 26 (24) (2005) 5535–5554.
[11] A. Halder, A. Ghosh, S. Ghosh, Supervised and unsupervised land use map generation from remotely sensed images using ant based systems, Appl. Soft Comput. 11 (2011) 5770–5781.
[12] J. Sun, J. Yang, C. Zhang, W. Yun, J. Qu, Automatic remotely sensed image classification in a grid environment based on the maximum likelihood method, Math. Comput. Modell. 58 (3–4) (2013) 573–581.
[13] Q. Lü, M. Tang, Detection of hidden bruise on kiwi fruit using hyperspectral imaging and parallelepiped classification, Procedia Environ. Sci. 12 (B) (2012) 1172–1179.
[14] C. Chen, Fuzzy training data for fuzzy supervised classification of remotely sensed images, in: Proceedings of the 20th Asian Conference on Remote Sensing (ACRS 1999), 1999, pp. 460–465.
[15] A. Ghosh, S. Meher, B.U. Shankar, A novel fuzzy classifier based on product aggregation operator, Pattern Recognit. 41 (6) (2008) 961–971.
[16] F. Maselli, A. Rodolfi, C. Copnese, Fuzzy classification of spatially degraded thematic Mapper data for the estimation of sub-pixel components, Int. J. Remote Sens. 17 (3) (1996) 537–551.
[17] F. Melgani, B.A. Hashemy, S. Taha, An explicit fuzzy supervised classification method for multispectral remote sensing images, IEEE Trans. Geosci. Remote Sens. 38 (1) (2000) 287–295.
[18] Y. Liu, B. Zhang, L.-m. Wang, N. Wang, A self-trained semisupervised SVM approach to the remote sensing land cover classification, Comput. Geosci. 59 (2013) 98–107.
[19] D.M. Miller, E.J. Kaminsky, S. Rana, Neural network classification of remote-sensing data, Comput. Geosci. 21 (1995) 377–386.
[20] J. Zeng, H.-f. Guo, Y.-m. Hu, Artificial neural network model for identifying taxi gross emitter from remote sensing data of vehicle emission, J. Environ. Sci. 19 (2007) 427–431.
[21] M. Brown, H. Lewis, S. Gunn, Linear spectral mixture models and support vector machines for remote sensing, IEEE Trans. Geosci. Remote Sens. 38 (5) (2000) 2346–2360.
[22] C. Huang, L. Davis, J. Townshend, An assessment of support vector machines for land cover classification, Int. J. Remote Sens. 23 (2002) 725–749.
[23] D. Stathakis, A. Vasilakos, Comparison of computational intelligence based classification techniques for remotely sensed optical image classification, IEEE Trans. Geosci. Remote Sens. 44 (8) (2008) 2305–2318.
[24] R. Laprade, Split-and-merge segmentation of aerial photographs, Comput. Vis. Graphics Image Process. 48 (1) (1988) 77–86.
[25] B.J. Irvin, S.J. Ventura, B.K. Slater, Fuzzy and isodata classification of landform elements from digital terrain data in Pleasant Valley, Wisconsin, Geoderma 77 (2–4) (1997) 137–154.
[26] S. Pal, A. Ghosh, B. Uma Shankar, Segmentation of remotely sensed images with fuzzy thresholding and quantitative evaluation, Int. J. Remote Sens. 21 (11) (2000) 2269–2300.
[27] R. Cannon, R. Dave, J. Bezdek, M. Trivedi, Segmentation of a thematic Mapper image using fuzzy c-means clustering algorithm, IEEE Trans. Geosci. Remote Sens. 24 (1) (1986) 400–408.
[28] M.N. Kurnaz, Z. Dokur, T. Olmez, Segmentation of remote-sensing images by incremental neural network, Pattern Recogn. Lett. 26 (8) (2005) 1104–1316.
[29] Z. Zhou, S. Wei, X. Zhang, X. Zhao, Remote sensing image segmentation based on self-organizing map at multiple scale, in: Proceedings of SPIE Geoinformatics: Remotely Sensed Data and Information, USA, 2007, pp. 122–126.
[30] Y. Wong, E. Posner, A new clustering algorithm applicable to polarimetric and SAR images, IEEE Trans. Geosci. Remote Sens. 31 (3) (1993) 634–644.
[31] G.P. Zhang, Neural networks for classification: a survey, IEEE Trans. Syst. Man Cybernet. C: Appl. Rev. 30 (4) (November 2000).
[32] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control. Signals Syst. 2 (1989) 303–314.
[33] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks 4 (1991) 251–257.
[34] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2 (1989) 359–366.
[35] M.D. Richard, R. Lippmann, Neural network classifiers estimate Bayesian a posteriori probabilities, Neural Comput. 3 (1991) 461–483.
[36] D.F. Specht, Probabilistic neural networks for classification, mapping, or associative memory, in: IEEE International Conference on Neural Networks 1, July, 1988, pp. 525–532.
[37] D.F. Specht, Probabilistic neural networks, Neural Networks 3 (1) (1990) 109–118.
[38] T.D. Ganchev, D.K. Tasoulis, M.N. Vrahatis, N.D. Fakotakis, Generalized locally recurrent probabilistic neural networks with application to text-independent speaker verification, Neurocomputing 70 (2007) 1424–1438.
[39] X. Fu, Y. Ying, Y. Zhou, H. Xu, Application of probabilistic neural networks in qualitative analysis of near infrared spectra: determination of producing area and variety of loquats, Anal. Chim. Acta 598 (2007) 27–33.
[40] A.L.I. Oliveira, F.R.G. Costa, C.O.S. Filho, Novelty detection with constructive probabilistic neural networks, Neurocomputing 71 (2008) 1046–1053.
[41] J. Grim, J. Hora, Iterative principles of recognition in probabilistic neural networks, Neural Networks 21 (2008) 838–846.
[42] H. Adeli, A. Panakkat, A probabilistic neural network for earthquake magnitude prediction, Neural Networks 22 (2009) 1018–1024.
[43] F. Ozturk, F. Ozen, A new license plate recognition system based on probabilistic neural networks, Procedia Technol. 1 (2012) 124–128.
[44] O. Er, A.C. Tanrikulu, A. Abakay, F. Temurtas, An approach based on probabilistic neural network for diagnosis of Mesothelioma's disease, Comput. Electr. Eng. 38 (2012) 75–81.
[45] J. Jia, C. Liang, J. Cao, Z. Li, Application of probabilistic neural network in bacterial identification by biochemical profiles, J. Microbiol. Methods 94 (2013) 86–87.
[46] S. Timung, T.K. Mandal, Prediction of flow pattern of gas–liquid flow through circular microchannel using probabilistic neural network, Appl. Soft Comput. 13 (2013) 1674–1685.
[47] G.E. Tsekouras, J. Tsimikas, On training RBF neural networks using input–output fuzzy clustering and particle swarm optimization, Fuzzy Sets Syst. 221 (2013) 65–89.
[48] J. González, I. Rojas, H. Pomares, J. Ortega, A. Prieto, A new clustering technique for function approximation, IEEE Trans. Neural Networks 13 (1) (2002) 132–142.
[49] H.-S. Park, W. Pedrycz, S.-K. Oh, Granular neural networks and their development through context-based clustering and adjustable dimensionality of receptive fields, IEEE Trans. Neural Networks 20 (10) (2009) 1604–1616.
[50] W. Pedrycz, Conditional fuzzy clustering in the design of radial basis function neural networks, IEEE Trans. Neural Networks 9 (4) (1998) 601–612.
[51] W. Pedrycz, H.S. Park, S.K. Oh, A granular-oriented development of functional radial basis function neural networks, Neurocomputing 72 (2008) 420–435.
[52] H.-S. Park, Y.-D. Chung, S.-K. Oh, W. Pedrycz, H.-K. Kim, Design of information granule-oriented RBF neural networks and its application to power supply for high-field magnet, Eng. Appl. Artif. Intell. 24 (2011) 543–554.
[53] S.B. Roh, T.C. Ahn, W. Pedrycz, The design methodology of radial basis function neural networks based on fuzzy K-nearest neighbors approach, Fuzzy Sets Syst. 161 (13) (2010) 1803–1822.
[54] A. Staiano, R. Tagliaferri, W. Pedrycz, Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering, Neurocomputing 69 (2006) 1570–1581.
[55] Z. Uykan, C. Guzelis, M.E. Celebi, H.N. Koivo, Analysis of input–output clustering for determining centers of RBFNN, IEEE Trans. Neural Networks 11 (4) (2000) 851–858.
[56] J.H. Ward, Hierarchical grouping to optimize an objective function, J. Am. Stat. Assoc. 58 (1963) 236–244.
[57] A.M. Dillner, J.J. Schauer, W.F. Christensen, G.R. Cass, A quantitative method for clustering size distributions of elements, Atmos. Environ. 39 (2005) 1525–1537.
[58] H.C. Lu, C.L. Chang, J.C. Hsieh, Classification of PM10 distributions in Taiwan, Atmos. Environ. 40 (2006) 1452–1463.

[59] T. Varin, R. Bureau, C. Mueller, P. Willett, Clustering files of chemical structures using the Székely–Rizzo generalization of Ward's method, J. Mol. Graphics Modell. 28 (2009) 187–195.
[60] N. Picard, F. Mortier, V. Rossi, S. Gourlet-Fleury, Clustering species using a model of population dynamics and aggregation theory, Ecol. Modell. 221 (2010) 152–160.
[61] A. Carteron, M. Jeanmougin, F. Leprieur, S. Spatharis, Assessing the efficiency of clustering algorithms and goodness-of-fit measures using phytoplankton field data, Ecol. Inform. 9 (2012) 64–68.
[62] C.S. Malley, C.F. Braban, M.R. Heal, The application of hierarchical cluster analysis and non-negative matrix factorization to European atmospheric monitoring site classification, Atmos. Res. 138 (2014) 30–40.
[63] Y. Xiao, C. Mignolet, J.-F. Mari, M. Benoît, Modeling the spatial distribution of crop sequences at a large regional scale using land-cover survey data: a case from France, Comput. Electron. Agric. 102 (2014) 51–63.
[64] S. Hands, B. Everitt, A Monte Carlo study of the recovery of cluster structure in binary data by hierarchical clustering techniques, Multivar. Behav. Res. 22 (1987) 235–243.
[65] A. Ghosh, N.S. Mishra, S. Ghosh, Fuzzy clustering algorithms for unsupervised change detection in remote sensing images, Inform. Sci. 181 (2011) 699–715.
[66] W. Wang, Y. Zhang, On fuzzy cluster validity indices, Fuzzy Sets Syst. 158 (2007) 2095–2117.
[67] K.-L. Wu, M.-S. Yang, A cluster validity index for fuzzy clustering, Pattern Recogn. Lett. 26 (2005) 1275–1291.
[68] A. Chehbouni, R. Escadafal, G. Boulet, B. Duchemin, V. Simonneaux, G. Dedieu, B. Mougenot, S. Khabba, H. Kharrou, O. Merlin, A. Chaponnière, J. Ezzahar, S. Er-Raki, J. Hoedjes, R. Hadria, H. Abourida, A. Cheggour, F. Raibi, L. Hanich, N. Guemouria, Ah. Chehbouni, A. Lahrouni, A. Olioso, F. Jacob, J. Sobrino, The use of remotely sensed data for integrated hydrological modeling in arid and semi-arid regions: the SUDMED program, Int. J. Remote Sens. 29 (2008) 5161–5181.
[69] R. Hadria, B. Duchemin, A. Lahroun, S. Khabba, S. Er-Raki, G. Dedieu, A. Chehbouni, Monitoring of irrigated wheat in a semi-arid climate using crop modelling and remote sensing data: impact of satellite revisit time frequency, Int. J. Remote Sens. 27 (2006) 1093–1117.
[70] B. Duchemin, R. Hadria, S. Erraki, G. Boulet, P. Maisongrande, A. Chehbouni, R. Escadafal, J. Ezzahar, J.C.B. Hoedjes, M.H. Kharrou, S. Khabba, B. Mougenot, A. Olioso, J.C. Rodriguez, V. Simonneaux, Monitoring wheat phenology and irrigation in Central Morocco: on the use of relationships between evapotranspiration, crops coefficients, leaf area index and remotely sensed vegetation indices, Agric. Water Manage. 79 (2006) 1–27.
[71] S. Er-Raki, A. Chehbouni, N. Guemouria, B. Duchemin, J. Ezzahar, R. Hadria, Combining FAO-56 model and ground-based remote sensing to estimate water consumptions of wheat crops in a semi-arid region, Agric. Water Manage. 87 (2007) 41–54.
[72] V. Simonneaux, B. Duchemin, D. Helson, S. Er-Raki, A. Olioso, A.G. Chehbouni, The use of high-resolution image time series for crop classification and evapotranspiration estimate over an irrigated area in central Morocco, Int. J. Remote Sens. 29 (2008) 95–116.
[73] G.M. Foody, Thematic mapping from remotely sensed data with neural networks: MLP, RBF and PNN based approaches, J. Geogr. Syst. 3 (2001) 217–232.
[74] R.N. Dave, Validating fuzzy partition obtained through c-shells clustering, Pattern Recogn. Lett. 17 (1996) 613–623.
[75] R.A. Fisher, The use of multiple measurements in taxonomic problems, Ann. Eugenics 7 (2) (1936) 179–188.
[76] E. Anderson, The species problem in Iris, Ann. MO Bot. Gard. 23 (3) (1936) 457–509.
[77] VALERI (Validation of Land European Remote sensing Instruments) Program, http://w3.avignon.inra.fr/valeri/
[78] D. Lloyd, A phenological classification of terrestrial vegetation cover using shortwave vegetation index imagery, Int. J. Remote Sens. 11 (1990) 2269–2279.
[79] R.S. De Fries, M. Hansen, J.R.G. Townshend, R. Sohlberg, Global land cover classifications at 8 km spatial resolution: the use of training data derived from Landsat imagery in decision tree classifiers, Int. J. Remote Sens. 19 (1998) 3141–3168.
[80] S.S. Ray, V.K. Dadhwal, Estimation of crop evapotranspiration of irrigation command area using remote sensing and GIS, Agric. Water Manage. 49 (2001) 239–249.
[81] R. Congalton, K. Green, Assessing the Accuracy of Remotely Sensed Data, Lewis Publications, Boca Raton, FL, 1999.
