http://dx.doi.org/10.12785/amis/090421
Abstract: In this paper, a new algorithm for adaptive kernel principal component analysis (AKPCA) is proposed for dynamic process monitoring. The proposed AKPCA algorithm combines two existing algorithms, the recursive weighted PCA (RWPCA) and the moving window kernel PCA algorithms. For fault detection and isolation, a set of structured residuals is generated by using partial AKPCA models. Each partial AKPCA model is built on a subset of the variables. The structured residuals are used to compose an isolation scheme according to a properly designed incidence matrix. The results of applying this algorithm to the nonlinear, time-varying Tennessee Eastman process show its feasibility and advantageous performance.
Keywords: Kernel PCA, dynamic process, fault detection and isolation, residual structuring, partial PCA model.
developed, which allows extracting both linear and nonlinear correlations among process variables. An elegant and widely used nonlinear generalization of linear PCA is kernel principal component analysis (KPCA), which was proposed in 1998 by Scholkopf et al. in [3] and first employed as a monitoring tool by Lee et al. in [14]. It has the following advantages over previous versions of nonlinear PCA: (i) unlike nonlinear PCA methods based on neural networks, it does not require determining the number of nodes and layers or approximating the nonlinear function; (ii) kernel PCA does not involve a nonlinear optimization procedure. Despite recently reported KPCA-based monitoring applications, the following problems arise: the monitoring model is fixed, which may produce false alarms if the process is naturally time-varying, and the fault isolation step is a much more difficult problem in nonlinear PCA than in linear PCA [8,9]. The first problem has been addressed by a recursive KPCA formulation that overcomes the same problems as in the linear case, presented in the previous paragraph.

Kernel PCA based process monitoring has recently been shown to be very effective for online monitoring of nonlinear processes. Similar to the linear case, two methods are presented in the literature for nonlinear adaptive process monitoring, namely the moving window kernel PCA (MWKPCA) and the recursive kernel PCA (RKPCA) approaches, and little research has been reported on this issue. A variable moving window kernel PCA scheme is presented by Khediri et al. in [11]. This method is then applied in a monitoring procedure with a variable window size model that can provide a flexible control strategy. A recursive kernel PCA algorithm is presented by Liu et al. in [15]; the proposed technique incorporates an up- and down-dating procedure to adapt the data mean and covariance matrix in the feature space.

In this work, a new adaptive kernel principal component analysis (AKPCA) algorithm is introduced to monitor and diagnose nonlinear dynamic systems. The AKPCA algorithm recursively updates the kernel PCA model and the corresponding control limits of the monitoring statistics. The basic idea of the proposed algorithm refers to a paradigm where, at each time instant, a new observation becomes available and the covariance matrix in the feature space (Gram matrix) is recursively updated according to the newly available data. The adaptive KPCA algorithm updates the covariance matrix in the feature space according to the degree of change in the operating process, which depends on the magnitude of the forgetting factor.

The paper is organized as follows. In section 2, linear principal component analysis and kernel principal component analysis are presented. Section 3 gives the adaptive version of the proposed KPCA approach. Section 4 gives the residual generation based on the AKPCA for fault detection and isolation. Results of simulation studies performed on the Tennessee Eastman process are presented in section 5. Finally, conclusions are given in section 6.

2 Preliminaries

2.1 Principal Component Analysis (PCA)

PCA is a powerful dimension-reducing technique. It produces new variables that are uncorrelated with each other and are linear combinations of the original variables [6]. Let X represent an N×m matrix of data. PCA is an optimal factorization of X into a matrix T (principal components, N×l) and a matrix P (loadings, m×l), plus a matrix of residuals E (N×m):

X = T P^T + E    (1)

where l is the number of factors (l < m). The Euclidean norm of the residual matrix E must be minimized for a given number of factors. This criterion is satisfied when the columns of P are the eigenvectors corresponding to the l largest eigenvalues of the covariance matrix of X. PCA can be viewed as a linear mapping from ℜ^m to a lower dimensional space ℜ^l. The mapping has the form:

t = P^T X    (2)

When using linear PCA, the variables involved should be linearly correlated. If they are correlated nonlinearly, it is more powerful to use nonlinear principal component analysis (NLPCA) for data modeling [19].
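To make the PCA factorization in Eqs. (1)-(2) concrete, the following minimal Python/NumPy sketch builds the loading matrix P from the eigenvectors of the covariance matrix and projects the data onto the first l components; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def pca_model(X, l):
    """Fit a PCA model as in Eqs. (1)-(2): X ~ T P^T + E.

    X : (N, m) data matrix, assumed already mean-centered (and scaled).
    l : number of retained principal components (l < m).
    """
    S = np.cov(X, rowvar=False)                # covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(S)       # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # sort descending
    P = eigvecs[:, order[:l]]                  # loadings, m x l
    T = X @ P                                  # scores,   N x l (Eq. 2)
    E = X - T @ P.T                            # residual matrix (Eq. 1)
    return P, T, E

# Illustrative use on random data (500 samples, 10 variables)
X = np.random.randn(500, 10)
X = X - X.mean(axis=0)
P, T, E = pca_model(X, l=3)
spe = np.sum(E**2, axis=1)   # squared prediction error (SPE) per sample
```

The squared prediction error computed from E is the SPE detection index used throughout the rest of the paper.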
2.2 Kernel PCA (KPCA)

As a nonlinear extension of PCA, kernel PCA was proposed in [5] to generalize PCA to the nonlinear case by nonlinearly mapping the input samples to a higher, possibly infinite dimensional feature space F and performing PCA there. The feature space F is nonlinearly transformed from the input space and implicitly defined by a kernel function. However, unlike other forms of nonlinear PCA, the implementation of kernel PCA relies on linear algebra. We may therefore think of kernel PCA as a natural extension of ordinary PCA.

Let Φ(X_j) denote the image of an input vector X_j induced in a feature space defined by the nonlinear map Φ : R^{m0} → R^{m1}, where m0 is the dimensionality of the input space and m1 is the dimensionality of the feature space. Given the set of examples {X_i}_{i=1}^{N}, there is a corresponding set of feature vectors {Φ(X_i)}_{i=1}^{N}. Accordingly, we may define an m1-by-m1 correlation matrix in the feature space, denoted by R̃, as follows:

R̃ = (1/N) Σ_{i=1}^{N} Φ(X_i) Φ^T(X_i)    (3)
As with ordinary PCA, the first thing we have to do is to ensure that the set of feature vectors {Φ(X_i)}_{i=1}^{N} has zero mean:

(1/N) Σ_{i=1}^{N} Φ(X_i) = 0    (4)

Satisfying this condition in the feature space is a more difficult proposition than it is in the input space. A principal component is then computed by solving the eigenvalue problem:

R̃ q̃ = λ̃ q̃    (5)

where λ̃ is an eigenvalue of the correlation matrix R̃ and q̃ is the associated eigenvector. We note that all eigenvectors that satisfy Eq. (5) for λ̃ ≠ 0 lie in the span of the set of feature vectors {Φ(X_j)}_{j=1}^{N}:

q̃ = Σ_{j=1}^{N} α_j Φ(X_j)    (6)

Thus, substituting Eq. (3) and Eq. (6) into Eq. (5), we obtain:

Σ_{i=1}^{N} Σ_{j=1}^{N} α_j Φ(X_i) K(X_i, X_j) = N λ̃ Σ_{j=1}^{N} α_j Φ(X_j)    (7)

where K(X_i, X_j) is an inner-product kernel defined in terms of the feature vectors by:

K(X_i, X_j) = Φ^T(X_i) Φ(X_j)    (8)

We need to go one step further with Eq. (7) so that the relationship is expressed entirely in terms of the inner-product kernel. To do so, we pre-multiply both sides of Eq. (7) by the transposed vector Φ^T(X_k):

Σ_{i=1}^{N} Σ_{j=1}^{N} α_j K(X_k, X_i) K(X_i, X_j) = N λ̃ Σ_{j=1}^{N} α_j K(X_k, X_j)    (9)

Accordingly, we may recast Eq. (9) in the compact matrix form:

K² α = N λ̃ K α    (10)

All solutions of this eigenvalue problem that are of interest are equally well represented in the simpler eigenvalue problem:

K α = N λ̃ α    (11)

where the coefficient vector α plays the role of the eigenvector associated with the eigenvalue λ̃ of the kernel matrix K. For the extraction of principal components, we need to compute the projection onto the eigenvectors q̃_k in feature space, as shown by:

q̃_k^T Φ(X) = Σ_{j=1}^{N} α_{k,j} Φ^T(X_j) Φ(X) = Σ_{j=1}^{N} α_{k,j} K(X_j, X)    (12)

Kernel principal component analysis methods have recently been shown to be very effective for monitoring nonlinear processes. However, their performance largely depends on the kernel function, and currently there is no general rule for kernel selection. Existing methods simply choose the kernel function empirically or experimentally from a given set of candidates. The kernel function plays a central role in KPCA, and a poor kernel choice may lead to significantly impaired performance [20,21]. Regarding the kernel functions, they can be chosen for instance as follows:

• Polynomial kernel,
K(x_i, x_j) = (x_i · x_j + 1)^d    (13)
where d is a positive integer;

• Radial basis function (RBF),
K(x_i, x_j) = exp(−‖x_i − x_j‖² / 2δ²)    (14)
where 2δ² = w is the width of the Gaussian kernel.

The above kernel functions give similar results if appropriate parameters are chosen. The radial basis function may present advantages owing to its flexibility in choosing the associated parameter. For instance, the width of the Gaussian kernel can be very small (< 1) or quite large [7].

A major limitation of KPCA-based monitoring is that the KPCA model, once built from the data, is time-invariant, while most real industrial processes are time-varying. The time-varying characteristics of industrial processes include (i) changes in the correlation structure among variables and (ii) changes in the number of significant principal components (PCs). When a time-invariant KPCA model is used to monitor processes with the aforementioned normal changes, false alarms often result, which significantly compromises the reliability of the monitoring system.
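As an illustration of Eqs. (11)-(14), the following minimal Python/NumPy sketch builds the kernel (Gram) matrix with an RBF kernel, mean-centers it in feature space, and extracts the nonlinear scores of the training samples. It is a sketch under the assumption that the data are already scaled; the function names and the example values are illustrative, not from the paper.

```python
import numpy as np

def rbf_kernel(X, width):
    """Gram matrix K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2*delta^2)), Eq. (14)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T     # squared distances
    return np.exp(-d2 / (2.0 * width**2))

def kpca_scores(X, width, l):
    """Nonlinear principal scores of the training samples (Eqs. (11)-(12))."""
    N = X.shape[0]
    K = rbf_kernel(X, width)
    # Mean-center in feature space (Eq. (4)): Kc = K - 1N K - K 1N + 1N K 1N
    One = np.full((N, N), 1.0 / N)
    Kc = K - One @ K - K @ One + One @ K @ One
    mu, A = np.linalg.eigh(Kc)                 # mu = N * lambda~, ascending order
    idx = np.argsort(mu)[::-1][:l]
    mu, A = mu[idx], A[:, idx]
    # Projection of training sample i on component k (Eq. (12)) simplifies to
    # sqrt(mu_k) * a_{k,i} once the eigenvectors alpha_k are normalized.
    return A * np.sqrt(np.maximum(mu, 0.0))

# Illustrative use
X = np.random.randn(200, 5)
T = kpca_scores(X, width=2.0, l=4)
```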
3 Adaptive Kernel PCA (AKPCA)

When the process operating conditions change, either gradually or abruptly, the covariance matrix will not be constant and will need to be updated. In the existing recursive methods, only linear approaches were proposed [23,24,25,26]. Because the kernel function is unknown, it is difficult to describe the nonlinear dynamic data structure. Moving Window PCA (MWPCA) as in [23,24] and Exponentially Weighted PCA (EWPCA) as in [25] are two representative adaptive PCA methods. Similar to the linear case, in the moving window kernel PCA algorithm a data window of fixed length is moved in real time to update the kernel PCA model once a new normal sample is available (see Figure 1).

In [11], the study proposes a variable window real-time monitoring system based on a fast block adaptive KPCA scheme. On the other hand, Li et al. in […]
(e) Test if SPE_t < the control limit; if so, the testing sample is not an outlier, the system is operating properly, and we go to step 3. Otherwise, consider the current condition to be abnormal and go to step 2.
3. If the updating condition is satisfied, do:
(a) Calculate the adaptive Gram matrix:

K = λ_t × K̄_{t−1} + (1 − λ_t) × K̄_t    (18)

where λ_t is a flexible forgetting factor,

λ_t = λ_max − (λ_max − λ_min)[1 − exp(−ß ‖ΔR‖)]    (19)

with λ_min = 0.9 and λ_max = 0.99; ‖ΔR‖ is the Euclidean norm of the difference between two consecutive Gram matrices, and the parameter ß controls the sensitivity of the change in λ_t.
(b) Find the number of principal components (l).
(c) Update the KPCA model: calculate the new eigenvalues and eigenvectors of the new covariance matrix in the feature space (Gram matrix).
(d) Update the forgetting factor λ_t.
(e) Recalculate the monitoring statistics and the corresponding control limits.
(f) Return to step 2.
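A minimal sketch of step 3(a), Eqs. (18)-(19), is given below in Python/NumPy. The function and variable names are illustrative assumptions; λ_min = 0.9 and λ_max = 0.99 are the values stated above, and the default β = 0.05 assumes that the k = 0.05 reported later in the simulation settings denotes this sensitivity parameter.

```python
import numpy as np

LAMBDA_MIN, LAMBDA_MAX = 0.9, 0.99   # bounds of the flexible forgetting factor

def flexible_forgetting_factor(K_prev, K_new, beta=0.05):
    """Eq. (19): lambda_t moves towards LAMBDA_MIN when the Gram matrix changes
    strongly, and stays close to LAMBDA_MAX when it changes little."""
    delta = np.linalg.norm(K_new - K_prev)   # ||Delta R||, Euclidean (Frobenius) norm
    return LAMBDA_MAX - (LAMBDA_MAX - LAMBDA_MIN) * (1.0 - np.exp(-beta * delta))

def update_gram_matrix(K_prev, K_new, beta=0.05):
    """Eq. (18): weighted combination of the previous and the new Gram matrices."""
    lam = flexible_forgetting_factor(K_prev, K_new, beta)
    return lam * K_prev + (1.0 - lam) * K_new, lam
```

With a fixed λ_t = 0.9 this update reduces to Eq. (25) further below, i.e. the model takes 10% of its information from the new window and 90% from the previous one.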
4.2 Fault Diagnosis Based on the Structured Residual Approach

When a faulty condition is detected, one needs to determine the root cause of the problem. The AKPCA used in monitoring is performed on the full data set. The sum of squared residuals can be used as a metric for detecting faults; however, it gives no indication of the location of the fault [18]. A partial AKPCA is an AKPCA performed on a reduced vector, where some variables in the data are left out. When data are evaluated against a properly designed partial AKPCA subspace, the residual vector will only be sensitive to faults associated with the variables that are present in the reduced vector. Faults associated with variables eliminated from the partial AKPCA will leave the residuals within the nominal thresholds. With the selectivity of partial AKPCA to subsets of faults, it is possible to design an incidence matrix for a set of such partial AKPCAs, resulting in a structure with the same fault isolation properties as parity relations (see Figure 2).

The procedure for structuring the residuals is as follows [12]:
[Fig. 2: Structured residual generation scheme: data subsets Data 1, ..., Data q are evaluated against partial AKPCA models 1, ..., q to produce the residuals SPE 1, ..., SPE q, which feed the fault isolation logic.]

5.1 Tennessee Eastman Process (TEP) data
Table: monitored TEP variables (as recovered from the extracted table)

N° var  Variable
1       A feed
2       Reactor temperature
3       E feed
4       A and C feed
5       Recycle flow
6       Reactor feed rate
7       D feed
8       Purge rate
9       Product separator temperature

[Fig. 5: SPE PCA with fixed …. Fig. 6: SPE KPCA with fixed ….]
[Fig. 8: Evolution of the SPE AKPCA with fixed forgetting factor 0.9.]
values of the forgetting factor. Firstly, the value σ of the radial kernel function is tuned based on the method of Park and Park [31], which proposes to select σ = C ∗ Averd, where Averd is the mean distance between all observations in feature space and C is a predetermined value. In this work, the first 100 samples were utilized to build the initial model, and we choose the following parameter values: λmax = 0.99, λmin = 0.9, k = 0.05.

In the identification step of the APCA model, the number of significant PCs is selected by using the CPV method, such that the variance explained is approximately 95% of the total variance (see Fig. 13). Thus, for greater […]

[Fig. 9: Evolution of the SPE AKPCA with fixed forgetting factor 0.95.]
[Fig. 10: Evolution of the SPE AKPCA with fixed forgetting factor 0.97.]
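A possible reading of the kernel width selection described above, σ = C ∗ Averd with Averd the mean pairwise distance between observations, is sketched below in Python/NumPy. The function name and the value C = 5 are illustrative assumptions, and the distances are computed in the input space as a simplification of the feature-space distance mentioned in the text.

```python
import numpy as np

def kernel_width(X, C=5.0):
    """sigma = C * Averd, where Averd is the mean pairwise Euclidean distance
    between observations (computed here in the input space as a simplification)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))              # full N x N distance matrix
    averd = dist[np.triu_indices_from(dist, k=1)].mean()  # mean over distinct pairs
    return C * averd

# Illustrative use with the first 100 (training) samples
X_train = np.random.randn(100, 9)
sigma = kernel_width(X_train)
```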
K = λ_t × K̄_{t−1} + (1 − λ_t) × K̄_t    (24)

K = 0.9 × K̄_{t−1} + 0.1 × K̄_t    (25)

The adaptive AKPCA model thus takes 10% of its information from the new window and 90% of its information from the previous window. The robustness is related to the way the window is moved. If the moving window is updated sample-wise, the problem is less serious; but when the window is updated block-wise and the MWKPCA model undergoes abrupt changes in a more or less rapid system, it will generate a very high rate of false alarms, and sometimes even instability and divergence of the detection index (Q statistic) compared with the control threshold. Our algorithm adapts to this problem by introducing old information about the system through the forgetting factor in the moving window. This results in a better adaptation to abrupt changes of the system and hence good robustness against false alarms.

[Fig. 11: Evolution of the SPE adaptive kernel PCA.]
[Fig. 12: The flexible forgetting factor.]
[Fig. 13: Number of principal components.]
[Figure: rate of false alarms (95% threshold) versus the size of the moving window, comparing AKPCA (flexible forgetting factor), AKPCA with fixed forgetting factors 0.9, 0.95 and 0.97, and MWKPCA.]
[Fig. 16: Evolution of the SPE moving window kernel PCA.]
[Fig. 17: Evolution of the SPE AKPCA with fixed forgetting factor 0.9.]
[Fig. 19: Evolution of the SPE AKPCA with fixed forgetting factor 0.97.]
Figure 21 shows a graphical representation of the rate of successful detection for different values of the moving window size […].

[Fig. 21: rate of successful detection versus size of the moving window.]
[Figure: evolutions of the SPE for the partial AKPCA models (recovered panels: SPE2–SPE5).]
experimental signature is obtained after codifying the residuals, where exceeding the detection threshold is represented by 1 and remaining below the threshold by 0. This gives the following experimental signature: (0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0). This signature is identical to the second column of the theoretical table, which means that the suspect variable (sensor/actuator) is x2.

[Fig. 23: Evolutions of SPE corresponding to the last eight different partial AKPCA models; recovered panels: SPE9, SPE11, SPE12, SPE14, SPE15, SPE16.]
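The isolation logic described above, binary coding of the structured residuals followed by comparison against the columns of the incidence matrix, can be sketched as follows in Python/NumPy. The incidence matrix contents, thresholds and SPE values are illustrative placeholders, not the ones designed in the paper.

```python
import numpy as np

def fault_signature(spe_values, thresholds):
    """Codify the structured residuals: 1 where SPE_i exceeds its control limit,
    0 otherwise."""
    return (np.asarray(spe_values) > np.asarray(thresholds)).astype(int)

def isolate_fault(signature, incidence_matrix):
    """Return indices of the variables whose theoretical signature (a column of
    the incidence matrix) matches the experimental one."""
    matches = np.all(incidence_matrix == signature[:, None], axis=0)
    return np.flatnonzero(matches)

# Illustrative example: 4 partial models, 3 variables (placeholder values)
incidence = np.array([[0, 1, 1],
                      [1, 0, 1],
                      [1, 1, 0],
                      [1, 1, 0]])
sig = fault_signature([0.2, 1.5, 1.1, 0.9], thresholds=[1.0, 1.0, 1.0, 0.8])
print(isolate_fault(sig, incidence))   # -> [0]: the signature matches variable x1
```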
6 Conclusion

In this work, a new adaptive kernel PCA algorithm is proposed for dynamic process modeling. The proposed AKPCA model is then applied to subsets of variables to generate structured residuals for sensor and actuator fault detection and isolation. The proposed algorithm is applied to sensor and actuator fault detection and isolation on the Tennessee Eastman process.

References

[1] Kramer M. A. Nonlinear principal component analysis using auto-associative neural networks. AIChE Journal, vol. 37, N. 2, pp. 233-243, 1991.
[2] Sang Wook Choi, Changkyu Lee, Jong-Min Lee, Jin Hyun Park, In-Beum Lee. Fault identification for process monitoring using kernel principal component analysis. Chemical Engineering Science, vol. 60, pp. 279-288, 2005.
[3] Bernhard Scholkopf, Alexander Smola, and Klaus Robert Muller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Technical report N. 44, 1996.
[4] Jyh-Cheng Jeng. Adaptive process monitoring using efficient recursive PCA and moving window PCA algorithms. Journal of the Taiwan Institute of Chemical Engineers, vol. 41, pp. 475-481, 2010.
[5] Zhiqiang Ge, Chunjie Yang, Zhihuan Song. Improved kernel PCA-based monitoring approach for nonlinear processes. Chemical Engineering Science, vol. 64, pp. 2245-2255, 2009.
[6] Tian Xuemin, Deng Xiaogang. A Fault Detection Method Using Multi-Scale Kernel Principal Component Analysis. Proceedings of the 27th Chinese Control Conference, July 16-18, Kunming, Yunnan, China, 2008.
[7] Viet Ha Nguyen, Jean-Claude Golinval. Fault detection based on Kernel Principal Component Analysis. Engineering Structures, vol. 32, pp. 3683-3691, 2010.
[8] Zhenhua Mao, Yuhong Zhao, Lifang Zhou. A Flexible Principle Component Analysis Method for Process Monitoring. Fourth International Conference on Natural Computation, China, 2008.
[9] Ayech Nabil, Chakour Chouaib, Harkat M. Faouzi. New adaptive moving window PCA for process monitoring. SafeProcess, 8th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, August 29-31, Mexico, 2012.
[10] Mingxing Jia, Fei Chu, Fuli Wang, Wei Wang. On-line batch process monitoring using batch dynamic kernel principal component analysis. Chemometrics and Intelligent Laboratory Systems, vol. 101, pp. 110-122, 2010.
[11] Issam Ben Khediri, Mohamed Limam, Claus Weihs. Variable window adaptive Kernel Principal Component Analysis for nonlinear nonstationary process monitoring. Computers and Industrial Engineering, vol. 61, pp. 437-446, 2011.
[12] Yunbing Huang, Janos Gertler, Thomas J. McAvoy. Sensor and actuator fault isolation by structured partial PCA with nonlinear extension. Journal of Process Control, vol. 10, pp. 459-469, 2000.
[13] Sang Wook Choi, Elaine B. Martin, A. Julian Morris, and In-Beum Lee. Adaptive Multivariate Statistical Process Control for Monitoring Time-Varying Processes. Industrial Engineering Chemistry Research, vol. 45, pp. 3108-3118, 2006.
[14] Lee J M, Yoo C K, Choi S W, et al. Nonlinear process monitoring using kernel principal component analysis. Chemical Engineering Science, vol. 59, pp. 223-234, 2004.
[15] Xueqin Liu, Uwe Kruger, Tim Littler, Lei Xie, Shuqing Wang. Moving window kernel PCA for adaptive monitoring of nonlinear processes. Chemometrics and Intelligent Laboratory Systems, vol. 96, pp. 132-143, 2009.
[16] Yunbing Huang, Janos Gertler, Thomas J. McAvoy. Sensor and actuator fault isolation by structured partial PCA with nonlinear extension. Journal of Process Control, vol. 10, pp. 459-469, 2000.
[17] Peng Hong-xing, Wang Rui, Hai Lin-peng. Sensor Fault Detection and Identification using Kernel PCA and Its Fast Data Reconstruction. Proceedings of the Chinese Control and Decision Conference, China, 2010.
[18] Sang Wook Choi, Changkyu Lee, Jong-Min Lee, Jin Hyun Park, In-Beum Lee. Fault identification for process monitoring using kernel principal component analysis. Chemical Engineering Science, vol. 60, pp. 279-288, 2005.
[19] […] Proceedings of the IFAC Symposium on Fault Detection, Supervision and Safety for Technical Process, Washington, USA, 2003.
[20] Ji-Dong Shao, Gang Rong, Jong Min Lee. Learning a data-dependent kernel function for KPCA-based nonlinear process monitoring. Chemical Engineering Research and Design, vol. 87, pp. 1471-1480, 2009.
[21] Mingxing Jia, Hengyuan Xu, Xiaofei Liu, Ning Wang. The […] for Adaptive Process Monitoring. Industrial Engineering Chemistry Research, vol. 47, pp. 419-427, 2008.
[24] Xun Wang, Uwe Kruger, and George W. Irwin. Process Monitoring Approach Using Fast Moving Window PCA. Industrial and Engineering Chemistry Research, vol. 44, pp. 5691-5702, 2005.
[25] S. Lane, E. B. Martin, A. J. Morris and P. Gower. Application of exponentially weighted principal component analysis for the monitoring of a polymer film manufacturing process. Transactions of the Institute of Measurement and Control, vol. 25, N. 1, pp. 17-35, 2003.
[26] Chang Kyoo Yoo, Sang Wook Choi, and In-Beum Lee. Dynamic Monitoring Method for Multiscale Fault Detection and Diagnosis in MSPC. Industrial and Engineering Chemistry Research, vol. 41, N. 17, pp. 4303-4317, 2002.
[27] Li, W., Yue, H.H., Valle-Cervantes, S. and Qin, S.J. Recursive PCA for adaptive process monitoring. Journal of Process Control, vol. 10, pp. 471-486, 2000.
[28] Gallagher, N. B., Wise, B. M., Butler, S. W., White, D. D., Barna, G. G. Development and benchmarking of multivariate statistical process control tools for a semiconductor etch process: Improving robustness through model updating. ADCHEM Proceedings, Banff, Canada, June 9-11, p. 78, 1997.
[29] Xun Wang, Uwe Kruger, and George W. Irwin. Process Monitoring Approach Using Fast Moving Window PCA. Industrial and Engineering Chemistry Research, vol. 44, pp. 5691-5702, 2005.
[30] Wold, S. Exponentially Weighted Moving Principal Components Analysis and Projections to Latent Structures. Chemometrics and Intelligent Laboratory Systems, 1994.
[31] Park, C. H., and Park, H. Nonlinear discriminant analysis using Kernel functions and the generalized singular value decomposition. SIAM Journal on Matrix Analysis and Applications, vol. 27, pp. 87-102, 2005.
Chouaib CHAKOUR received his M.Sc. degree in automatic and control from Annaba University, Algeria, in 2011. He is currently pursuing the Ph.D. degree in automatic and control at the Badji Mokhtar Annaba University, Algeria. His research interests include fault diagnosis, multivariate statistical approaches, process modeling and monitoring.
Med-Faouzi HARKAT received his Eng. degree in automation from Annaba University, Algeria, in 1996, his Ph.D. degree from Institut National Polytechnique de Lorraine (INPL), France, in 2003, and his Algerian "Accreditation to supervise researches" (HDR) from Annaba University, Algeria, in 2006. He is now Professor in the Department of Electronics at Annaba University, Algeria. His research interests include fault diagnosis, process modelling and monitoring, multivariate statistical approaches and neural networks.

Messaoud DJEGHABA is Professor in the Department of Electronics at Annaba University, Algeria. His research interests include fault diagnosis, process modeling and monitoring.