
J Intell Manuf

DOI 10.1007/s10845-015-1107-8

Machine prognostics based on sparse representation model


Likun Ren · Weimin Lv · Shiwei Jiang

Received: 6 January 2015 / Accepted: 1 June 2015


© Springer Science+Business Media New York 2015

Abstract The prognostic technologies for machines refer to the estimation of a machine's remaining useful life using monitoring data from sensors. Different from traditional maintenance strategies, this maintenance strategy can reduce downtime, maintenance costs and critical risks. Given these advantages, an increasing number of prognostic models have been introduced. Data driven methods such as neural networks and Bayesian approaches are used widely in machine prognostics. However, the sequential information and inherent relationships among historical data are rarely considered in these models, so the estimations are usually not accurate enough. In our paper, we take a novel methodology to estimate the remaining useful life: first, we adopt a sparse representation model to extract the inherent relationships of training samples and measure the similarities between testing samples and training samples, and a weight is given to every training sample to note its similarity to the testing sample. When all testing samples are measured, a hierarchical Hough voting process utilizing the sequential information of monitoring data is carried out to evaluate the remaining useful life. An industry experiment has proven the effectiveness of our approach.

Keywords Prognostics · Sparse representation · Hough voting

Correspondence: Likun Ren, renlikun1988@gmail.com, Naval Aeronautical and Astronautical University, Yantai 264001, China

Introduction

Reliability is an important aspect in today's industrial production. Products with high reliability can not only reduce the operating expense so as to improve competitiveness (Saranga and Knezevic 2001) but also increase the safety factor, especially in some high-risk industries such as nuclear power plants. A survey stated that an investment of between $10,000 and $20,000 in condition monitoring may result in savings of $500,000 a year (Rao 1996). However, the complexity of mechanical products poses great difficulties in maintenance. It was reported that the operating and support cost accounts for 70 % of the total cost in a weapon's life cycle cost (Gartner and Dibbert 2001; Ld 2001). As a result, there is a pressing need to improve maintenance strategies. This is where condition based maintenance (CBM) (Jardine et al. 2006) comes from. Based on monitoring data, CBM attempts to monitor the machinery health conditions and make some maintenance suggestions.

The prognostic technology is an essential part of CBM, which refers to the use of automated methods to estimate the degradation of a physical system's performance and the remaining useful life (RUL) (Si et al. 2011; Sikorska et al. 2011). Though prognostics was introduced to CBM only recently, it has already gained prominence and spread to many domains such as the aviation industry (Wang et al. 2009), the electronics industries (Pecht 2008), nuclear systems (Zio and Di Maio 2010) and the battery industries (Saha et al. 2009). There are mainly two categories of prognostic models (Pecht and Jaai 2010): physics-based models (Stringer et al. 2012) and data-driven models (Schwabacher 2005).

Physics-based models take monitored systems as white boxes and build precise mathematical models (Chryssolouris and Toenshoff 1982) to analyze the failure modes so as to evaluate the remaining useful life through the changes of


operation parameters. The RUL evaluations can be highly accurate if the models are built comprehensively and the parameters are estimated precisely.

Data-driven models are another category of prognostic models. These models see systems as black boxes instead. They try to mine the internal relationships between the input (data from sensors) and the output (the RUL) of the historical data (also called training samples) and then build reflections between them. When the testing samples (with RUL to be evaluated) are input into these models, the models search the relationships between these testing samples and the training samples and then give the output (the evaluated RUL) according to the reflections built before. Because the models are built relatively independently of complex physical models, some data-driven models can achieve high accuracy if they can reflect the relationships between the input data and the output.

Among these data-driven models, artificial neural networks (ANNs) have been used widely in pattern recognition areas. An ANN includes three or more layers: a layer of input nodes, one or more layers of hidden nodes, and one layer of output nodes. A weight connects two nodes of two neighboring layers. A node's value in the upper layer is computed by summing the values of the nodes in the lower layer multiplied by the corresponding weights. Through a learning process, the weights are adjusted to render the outputs of the ANN consistent with the desired outputs. There are also some limitations with ANNs: a widely known limitation is the lack of explanation of their reasoning processes and foundations. Besides, there are no rules or standards on how to determine the number of layers and nodes.

Other data-driven models include regression models (Batko 1984; Benkedjouh et al. 2013), Bayesian approaches (Mosallam et al. 2014), particle filtering models (Zio and Peloni 2011), state space models (He et al. 2012) and so on.

In this paper, we propose a novel prognostics model based on sparse representation (SRP). With an over-complete dictionary D ∈ R^{n×k} that contains k basis vectors as its columns (k ≫ n), a given signal y can be represented as a sparse linear combination of these basis vectors satisfying ‖y − Dx‖₂ ≤ ε, where the optimization goal is to represent y using the fewest basis vectors in the dictionary D. A brief introduction to sparse representation can be found in Sect. 2.

In our SRP model, there are three steps: the dictionary forming step, the sparse representation step and the RUL evaluating step. In the dictionary forming step, the dictionary D is formed by the extracted feature vectors of run-to-failure data. In the sparse representation step, the sequential feature y_i extracted from sensor data is input into the sparse model and we get its representation weight x_i. In the RUL evaluating step, the RUL is evaluated in a hierarchical Hough voting process (Illingworth and Kittler 1988; Yao et al. 2010) according to the distribution of x_i under two assumptions: one is that the training samples (run-to-failure data) are sequential, so the sparse representation of a testing sample from time node i should be composed mainly of neighboring training samples; the second is that the testing samples are also sequential, so neighboring testing samples should have close RUL evaluations.

There are several advantages in our model:

- Through sparse representation and Hough voting, the internal relationships and sequential information of monitoring data can be extracted, which is key to prognostic tasks.
- Compared to ANN based models, our methodology has fewer parameters to be determined. What's more, our methodology is based on the similarity measurement between the testing samples and the training samples in the sense of sparse representation, so its mechanism is not as hard to explain as that of ANNs.

The remaining part of the paper is organized as follows. In Sect. 2, we give a brief introduction to sparse representation and show the detailed algorithmic processes of our approach. In Sect. 3, our results are compared with some state-of-the-art approaches to show the effectiveness of our methodology. In Sect. 4, the conclusion is given.

Sparse representation-based prognostics

Sparse representation theory

Recently, the sparse representation of signals has become a hot topic. Sparse representation is a signal reconstruction method (Aharon et al. 2006). With an over-complete dictionary D ∈ R^{n×k} (k ≫ n) that contains k basis vectors as its columns, a given signal y can be represented as a sparse linear combination of these basis vectors: y ≈ Dx, satisfying ‖y − Dx‖₂ ≤ ε. Because D is over-complete (n ≪ k and D is full-rank), there are infinitely many solutions satisfying ‖y − Dx‖₂ ≤ ε. Hence, constraints should be set on the solutions to get a unique one. When the solution with the fewest number of nonzero coefficients is pursued, the optimization model is called the sparse representation model:

min_x ‖x‖₀ s.t. ‖y − Dx‖₂ ≤ ε    (1)

The form ‖x‖₀ counts the number of non-zero components of x. Unfortunately, this is a non-convex optimization because ‖x‖₀ is non-convex, so the solution may not be unique. Candès et al. (2006) proved that if the representation is sparse enough, the 0-norm is equivalent to the 1-norm:

min_x ‖x‖₁ s.t. ‖y − Dx‖₂ ≤ ε    (2)
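As an illustration of the ℓ1 model in Eq. (2), the special case ε = 0 (exact reconstruction, often called basis pursuit) can be solved as a linear program by splitting x into nonnegative parts. The following sketch is our own illustrative code, not part of the original method; the toy dictionary and signal are invented for the demo:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Solve min ||x||_1 s.t. y = Dx (Eq. 2 with eps = 0).

    Writing x = u - v with u, v >= 0 gives ||x||_1 = sum(u + v),
    turning the problem into a standard linear program.
    """
    n, k = D.shape
    c = np.ones(2 * k)              # objective: sum(u) + sum(v)
    A_eq = np.hstack([D, -D])       # enforces D @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:k], res.x[k:]
    return u - v

# Toy over-complete dictionary: 5 atoms in R^3. The signal y is a
# multiple of the first atom, so the sparsest coding uses it alone.
D = np.array([[1.0, 0.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 0.0, 1.0]])
y = np.array([3.0, 0.0, 0.0])
x = basis_pursuit(D, y)
# x concentrates all weight on the first atom: approximately [3, 0, 0, 0, 0]
```

In the paper's setting, D would hold the time-ordered training features of the dictionary forming step and y a testing feature; the inequality version ‖y − Dx‖₂ ≤ ε requires a quadratic constraint or an ℓ1-penalized least squares solver instead.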


The representation of signal y is like giving y a coding on the basis of dictionary D, so sparse representation is also vividly called sparse coding.

In the pattern recognition area, sparse representation provides a novel view to measure similarity. First, sparse representation sees signals as the sum of a few basis vectors in the dictionary plus some noise, so the model doesn't need prior knowledge or distribution assumptions about the signals. Secondly, sparse representation can extract the semantic information behind signals by pursuing the fewest number of basis vectors to represent them, so the threshold is easier to determine than in distance-based similarity methods (Ren and Lv 2014). Due to those merits, sparse representation is widely used in the denoising (Elad and Aharon 2006), super-resolution (Yang et al. 2010) and face recognition (Wright et al. 2009) fields.

The main difference between pattern recognition and prognostics is that the prognostics task is closely related to time. So it is essential to put time attributes into the sparse representation model for prognostics.

Another problem in sparse representation-based prognostics is the choice of the over-complete dictionary. There are mainly two ways to build a dictionary in today's dictionary learning area. One is to choose a pre-specified set of functions such as wavelets, curvelets and contourlets as the dictionary. Those fixed dictionaries are less adaptive for different types of data. The other method is based on machine learning: the dictionaries are learned from training data, and more adaptive dictionaries can be acquired through the learning process. With the development of machine learning, various dictionary learning methods have been introduced (Engan et al. 1999; Vidal et al. 2003).

Recently, a dictionary learning method called K-SVD was proposed by Aharon et al. (2006). K-SVD iterates between a sparse coding process and a dictionary updating process to get a better dictionary. The optimization model is:

min_{D,X} ‖Y − DX‖²_F s.t. ∀i, ‖x_i‖₀ ≤ ε    (3)

However, these dictionary learning methods are not suitable for prognostics or even pattern recognition tasks, because the labels or category attributes of the training samples are destroyed in the iterative processes.

Sparse representation based prognostics

Our SRP model includes three steps: the dictionary forming step, the sparse representation step and the RUL evaluating step (Fig. 1). In the dictionary forming step, a dictionary is formed by features extracted from run-to-failure data. In the sparse representation step, the sequential features from sensor data are input into the sparse model and their representation weights x_i are computed. In the RUL evaluating step, a hierarchical Hough voting process is adopted to evaluate the RUL. Two assumptions consistent with prognostics tasks are applied in the hierarchical Hough voting process: (a) there exist sequential relationships among the training data: training samples close in the dictionary are also close in the time domain; (b) the testing data are also sequential: the evaluated RULs of adjacent testing samples should be close.

Fig. 1 The flow chart of our methodology

Dictionary forming

The first step of dictionary forming is data preprocessing to reduce noise and improve computational efficiency. Conventional data preprocessing methods include data denoising and normalization. Then, the critical step is feature extraction. In the pattern recognition area, reasonable feature extraction can reach a desirable recognition rate. The degeneration of most equipment follows a failure rate curve called the bathtub curve (see Fig. 2), and the prognostic task usually focuses on the random failure and wear-out failure parts. So features that have a consistent trend with the random failure and wear-out failure parts can reflect the real degeneration process and should be adopted. See Fig. 3 as an example: the amplitude of one feature remains low at first and increases sharply near the failure, which is consistent with the random failure and wear-out failure parts of the bathtub curve.

Once features (f₁, f₂, ..., f_n) are extracted, the next step is to form the dictionary D for the sparse representation model. As introduced in Sect. 2.1, the aim of K-SVD dictionary learning


method is to get an effective dictionary on which the representation of the signal is as sparse as possible. In the process of iteration, the internal structure of the basis is destroyed, and the time sequence property of the basis is destroyed at the same time, so the K-SVD dictionary learning method is not suitable for prognostics tasks. For this reason, we form the dictionary D in an easier but suitable way: we simply arrange the features in time order to form the dictionary, which holds the sequence property:

D = [f₁, f₂, ..., f_n]    (4)

Fig. 2 Bathtub curve

Fig. 3 An example of a good feature (the amplitude of one feature over time nodes 0–1000)

Sparse representation

In Sect. 2.2.1, the dictionary D for sparse representation has been formed. This part projects every monitored data point from the under-prognosed machine to a sparse vector. First, features should be extracted from these data in the same way as from the run-to-failure data in Sect. 2.2.1. Every feature vector can be seen as a testing sample whose RUL remains to be evaluated. Then, we project these features to a sparse matrix using the following sparse representation model:

min_{x_i} ‖x_i‖₁ s.t. ‖y_i − Dx_i‖₂ ≤ ε    (5)

where y_i is one testing feature extracted from the monitored data at time node i, D is the dictionary formed in Sect. 2.2.1 and x_i is the corresponding sparse vector of y_i at time node i. When all the run-to-now testing features are represented, we arrange the sparse vectors to form a sparse matrix as follows:

SM = [x₁, x₂, ..., x_m]    (6)

In SM, one column x_i corresponds to the sparse coding of one testing sample y_i at time node i.

Different from other data-driven models, our model seeks to mine the similarities between the training samples in dictionary D and the testing samples y_i in the sense of sparse representation. As introduced in Sect. 2.1, sparse representation's power of extracting the semantic information of signals can help us measure the internal relationships between the testing and training samples so as to evaluate the RUL.

RUL evaluation

In Sect. 2.2.2, the run-to-now monitored data are projected to a sequential sparse coding SM by the sparse representation model. In this part, we evaluate the RUL using SM. Here, we assume that there are two priors in RUL evaluation problems: one is that the training samples (run-to-failure data) are sequential, so the sparse representation of a testing sample from time node i should be composed mainly of neighboring basis vectors in dictionary D in Eq. 5; the second is that the testing samples are also sequential, and neighboring testing samples should have close RUL evaluations.

Firstly, we introduce a transform called the power transmitting operator Φ(·). The power transmitting operator of a vector v = [v₁, v₂, ..., v_n] is:

φ_i = Σ_{k=1}^{n} |v_k| / (|k − i| + 1)    (7)

s_i = φ_i / Σ_{j=1}^{n} φ_j    (8)

Φ(v) = [s₁, s₂, ..., s_n]    (9)

For a certain training sample in the coding vector v, the weight of this sample in Φ(v) depends on its own value as well as the values of its neighbors. The further one component is from component i, the smaller its contribution to it. This is consistent with our first assumption: the training samples (run-to-failure data) are sequential, and if a component and its neighbors all have high values, the weight of this component should be high.
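A minimal sketch of the power transmitting operator of Eqs. (7)–(9), using 0-based arrays in place of the paper's 1-based indices (our own illustrative code):

```python
import numpy as np

def power_transmit(v):
    """Power transmitting operator (Eqs. 7-9): spread each
    coefficient's magnitude to its neighbors with a
    1 / (distance + 1) decay, then normalize to sum to 1."""
    v = np.abs(np.asarray(v, dtype=float))
    idx = np.arange(len(v))
    decay = 1.0 / (np.abs(idx[:, None] - idx[None, :]) + 1.0)
    phi = decay @ v            # Eq. 7
    return phi / phi.sum()     # Eqs. 8-9

# A coding vector with one active training sample (the middle of 5):
s = power_transmit([0.0, 0.0, 1.0, 0.0, 0.0])
# s ≈ [0.125, 0.1875, 0.375, 0.1875, 0.125]: the weight peaks at the
# active sample and decays over its neighbors, matching the
# sequential-training-samples assumption.
```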

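The aggregation steps that follow (the per-column vote of Eq. 11, the smoothing of Eqs. 13–14, the recency weighting of Eqs. 16–19 and the final top-m vote of Eq. 21 below) can be condensed into one sketch. This is our own reading, with invented names and an edge-padding choice of our own for the filter boundaries:

```python
import numpy as np

def evaluate_rul(SM_t, m=5):
    """Condensed RUL evaluation over the transformed coding matrix.

    SM_t: (n, l) array whose l columns are the power-transmitted
    sparse codings of the testing samples (each column sums to 1).
    """
    n, l = SM_t.shape
    # Eq. 11: weighted Hough vote of each column over the n
    # time-ordered training samples.
    r = np.arange(1, n + 1) @ SM_t                   # shape (l,)
    # Eqs. 13-14: smooth with w = [1, 2, 5, 2, 1] / 11 (edge-padded).
    w = np.array([1.0, 2.0, 5.0, 2.0, 1.0]) / 11.0
    rs = np.convolve(np.pad(r, 2, mode="edge"), w, mode="valid")
    # Eqs. 16-19: discount old time nodes, d_i = max(b*ln(a*i), 0).
    i = np.arange(1, l + 1)
    a, b = 10.0 / l, 1.0 / np.log(10.0)
    d = np.maximum(b * np.log(a * i), 0.0)           # Eq. 17
    R = rs * d                                       # Eq. 20
    # Eq. 21: mean of the m largest weighted votes.
    return np.sort(R)[-m:].mean()

# Demo: every testing sample votes for training time node 4 of 8,
# so the raw vote is 4 everywhere and the discount scales it down.
SM_t = np.zeros((8, 10))
SM_t[3, :] = 1.0
rul = evaluate_rul(SM_t)   # ≈ 3.58
```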

Then, we transform the coding matrix SM with the operator Φ(·):

S̃M = [x̃₁, x̃₂, ..., x̃_m] = [Φ(x₁), Φ(x₂), ..., Φ(x_m)]    (10)

We call S̃M the dual-sequential matrix because the values of one column reflect the sequential information of the training samples, while the arrangement of the columns reflects the sequential relationship among the testing samples.

When S̃M is computed, a hierarchical Hough voting process is carried out to evaluate the RUL.

First, a weighted Hough voting is undertaken in every column of S̃M to evaluate the RUL of every testing sample y_i at time node i. Assuming x̃_i = [s₁, s₂, ..., s_n] is the ith column in S̃M corresponding to the testing sample y_i at time node i, the RUL evaluation of y_i is:

r_i = Σ_{k=1}^{n} k·s_k    (11)

Once the RUL r_i of every sequential input y_i is computed, we take another Hough voting process to determine the final RUL of the machine or equipment under monitoring.

We define a rough RUL evaluation vector to represent the evaluation of every single testing sample:

R_rough = [r₁, r₂, ..., r_l]    (12)

where l is the total number of testing samples.

Because the testing samples are sequential, there should be sequential relationships among the evaluated RULs. We adopt a weighted filtering process to smooth the evaluated RULs and give them sequential properties:

rs_i = w · [r_{i−2}, r_{i−1}, r_i, r_{i+1}, r_{i+2}]ᵀ    (13)

where w is a weighted smoothing vector; in our experiment

w = [1/11, 2/11, 5/11, 2/11, 1/11]    (14)

Then we get a smoothed vector R̃ of the evaluated RUL vector R_rough:

R̃ = [rs₁, rs₂, ..., rs_l]    (15)

What's more, because we want to evaluate the present RUL, the latest evaluated RULs should have higher confidence. To fully apply this property, we first define a diagonal matrix:

D̃ = diag(d₁, d₂, ..., d_l)    (16)

d_i = max(b·ln(a·i), 0)    (17)

where l is the number of testing samples; b and a are empirical parameters, and in our experiment the following values are adopted:

a = 10/l    (18)

b = 1/ln(10)    (19)

Then we can weight the evaluated RULs of the different time nodes up to now by multiplying by the diagonal matrix D̃ to get the final evaluated RUL vector:

R = R̃ D̃    (20)

From Eq. 17 and Fig. 4, we can see that the near time nodes have higher confidences, which accords with our assumptions.

Fig. 4 The trend of d_i with time (d_i plotted against i/N)

At last, we undertake a Hough voting process to determine the final RUL of the monitored machine:

RUL = (1/m) Σ max(R, m)    (21)

where m is the number of values involved in the Hough voting process and max(R, m) represents the maximal m values in vector R.

Example

In this section, we apply the methodology above to ball bearing remaining useful life estimation. The experimental data set was provided by the FEMTO-ST institute (Nectoux et al. 2012) and was adopted in the 2012 PHM data challenge competition. There are six training sets covering


run-to-failure data monitored during the bearings' accelerated life tests and eleven test sets with truncated experimental data. The data sets were monitored under three different loading conditions:

Condition 1: 1800 rpm and 4000 N;
Condition 2: 1650 rpm and 4200 N;
Condition 3: 1500 rpm and 5000 N.

Condition 1 and condition 2 each provide two training sets and require the RUL of five testing sets to be estimated. Condition 3 includes two training sets covering run-to-failure data and only one testing set remaining to be estimated.

The characterization of the bearings' degradation is based on two types of sensors: vibration and temperature. The vibration sensors consist of two miniature accelerometers positioned at 90° to each other; the first is placed on the vertical axis and the second on the horizontal axis. The two accelerometers are placed radially on the external race of the bearing. The temperature sensor is a resistance temperature detector (RTD) platinum PT100 (1/3 DIN class) probe, which is placed inside a hole close to the external bearing ring. The acceleration measures are sampled at 25.6 kHz and the temperature ones at 0.1 Hz.

Feature extraction and dictionary forming

Feature extraction is an essential process in our RUL evaluation methodology. Before feature extraction, median filtering is undertaken to denoise. Then, various features are extracted from the vibration signals of the training bearings' data in the time and frequency domains (Li et al. 2012). As referred to in Sect. 2.2.1, we analyze the trend of these features and choose features decreasing or increasing with time as the final features. At last, we choose 26 features (13 for horizontal and the other 13 for vertical vibration signals). Two pairs of features are from the time domain: root mean square (rms) and variance (Figs. 5, 6); the other features are from the frequency domain: one pair of features is the energy spectrum after Fourier transformation (Fig. 7) and the other ten pairs are the energies in the approximation and detail coefficients at the first five levels of wavelet decomposition (Figs. 8, 9).

Fig. 5 The rms amplitude with time

Fig. 6 The variance amplitude with time

Fig. 7 The energy spectrum with time

Once all the features are extracted, they are smoothed using a median filter. Then a normalization process is taken to bring these features to the same scale, to reduce bias due to features of large dynamic range. The normalized form of one feature ft_i ∈ [ft₁, ft₂, ..., ft_n] is:

f̃_i = (ft_i − min_j(ft_j)) / (max_j(ft_j) − min_j(ft_j))    (22)

Then, these features from the training samples are arranged in time order to form the dictionary.

Sparse representation and RUL evaluation

The features from the testing samples are input into the sparse representation model (Eq. 5), or in matrix form:

min_X ‖X‖₁ s.t. ‖Y − DX‖_F ≤ ε    (23)


where X is the required sequential sparse coding matrix and Y is the feature matrix of the testing samples arranged in time order. Once the sparse matrix is obtained, the RUL can be evaluated using the approach discussed in Sect. 2.2.3. The single estimations of the testing samples in set B1_4 can be seen in Fig. 10. From Fig. 10, we can find that because the time nodes of the latest testing samples are near the bearing failure boundary time nodes, the RUL estimations of these testing samples decrease quickly.

Fig. 8 The A1 energy with time

Fig. 9 The D1 energy with time

Fig. 10 RUL estimation of testing samples (RUL estimation of the single testing sample against the time node of the testing samples)

Result

Using our methodology, we evaluated all the testing sets provided by the FEMTO-ST institute. According to the rules of the FEMTO-ST institute (Nectoux et al. 2012), RUL results should be converted into percent errors of the predictions:

%Er_i = 100 · (ActRUL_i − R̂UL_i) / ActRUL_i    (24)

Then, the score of experiment i is calculated based on the error:

A_i = exp(−ln(0.5) · Er_i/5),  if Er_i ≤ 0
A_i = exp(+ln(0.5) · Er_i/20), if Er_i > 0    (25)

The final score of all RUL estimates is defined as the mean of all experiments' scores:

Score = (1/11) Σ_{i=1}^{11} A_i    (26)

To demonstrate the effectiveness of our methodology, we contrast our result with that of the Center for Advanced Life Cycle Engineering (CALCE), University of Maryland, who won the 2012 PHM data challenge competition (Sutrisno et al. 2012). The comparison is shown in Table 1.

Table 1 The comparison between CALCE's result and ours

Condition  Test set  Error of CALCE (%)  Our error (%)
1          B1_3      37                  4
           B1_4      80                  10
           B1_5      9                   19
           B1_6      5                   81
           B1_7      2                   60
2          B2_3      64                  5
           B2_4      10                  522
           B2_5      440                 12
           B2_6      49                  7
           B2_7      317                 17
3          B3_3      90                  16
Score                0.3066              0.3587

And to make further comparisons with neural network based (Heimes 2008) and similarity based (Wang et al. 2008) algorithms, we compare our methodology with state-of-the-art methods of these two kinds that performed well at the PHM08 data challenge competition. The result is shown in Table 2.

What's more, to verify the efficiency of sparse representation in our methodology, we give the comparison between


the result using sparse representation in the model and the result without sparse representation, using the k nearest neighbors method to measure the similarities between testing and training samples instead. The result is shown in Table 3.

Table 2 The performance comparisons

Condition  Test set  Error of Heimes (2008) (%)  Error of Wang et al. (2008) (%)  Our error (%)
1          B1_3      46                          57                               4
           B1_4      63                          59                               10
           B1_5      15                          23                               19
           B1_6      78                          17                               81
           B1_7      34                          69                               60
2          B2_3      69                          37                               5
           B2_4      20                          227                              522
           B2_5      387                         33                               12
           B2_6      87                          10                               7
           B2_7      54                          62                               17
3          B3_3      34                          27                               16
Score                0.2031                      0.2247                           0.3587

Table 3 The results of the non-sparse representation model and the sparse representation based model

Condition  Test set  Non-sparse representation (%)  SRP (%)
1          B1_3      89                             4
           B1_4      67                             10
           B1_5      43                             19
           B1_6      103                            81
           B1_7      87                             60
2          B2_3      69                             5
           B2_4      381                            522
           B2_5      92                             12
           B2_6      23                             7
           B2_7      20                             17
3          B3_3      53                             16
Score                0.1049                         0.3587

Conclusion

In this paper, we introduce a novel prognostic approach based on sparse representation. Through three steps (the dictionary forming step, the sparse representation step and the RUL evaluating step), the RUL can be estimated based on the similarities between testing samples and training samples in the sense of sparse representation. The novelty of our work lies in two aspects: first, we apply the sparse representation theory to machine prognostics to extract the inherent relationships among monitoring data, and RUL estimation is also a new application field of sparse representation; second, Hough voting is used to utilize the sequential information among monitoring data. The industry experiments have proven the efficiency of our methodology.

References

Aharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11), 4311–4322. doi:10.1109/TSP.2006.881199.
Batko, W. (1984). Prediction method in technical diagnostics (4th ed.). Sc. Dr. Thesis, Cracov Mining Academy.
Benkedjouh, T., Medjaher, K., Zerhouni, N., & Rechak, S. (2013). Health assessment and life prediction of cutting tools based on support vector regression. Journal of Intelligent Manufacturing, 26(2), 213–223.
Candès, E. J., Romberg, J., & Tao, T. (2006). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2), 489–509.
Chryssolouris, G., & Toenshoff, H. (1982). Effects of machine-tool-workpiece stiffness on the wear behaviour of superhard cutting materials. CIRP Annals-Manufacturing Technology, 31(1), 65–69.
Elad, M., & Aharon, M. (2006). Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12), 3736–3745.
Engan, K., Aase, S. O., & Hakon Husoy, J. (1999). Method of optimal directions for frame design. In Proceedings. 1999 IEEE international conference on acoustics, speech, and signal processing (Vol. 5, pp. 2443–2446). IEEE.
Gartner, D. L., & Dibbert, S. E. (2001). Application of integrated diagnostic process to non-avionics systems. In AUTOTESTCON Proceedings, 2001. IEEE Systems Readiness Technology Conference (pp. 229–238). IEEE.
He, D., Li, R., & Bechhoefer, E. (2012). Stochastic modeling of damage physics for mechanical component prognostics using condition indicators. Journal of Intelligent Manufacturing, 23(2), 221–226.
Heimes, F. O. (2008). Recurrent neural networks for remaining useful life estimation. In International Conference on Prognostics and Health Management, 2008. PHM 2008 (pp. 1–6). IEEE.
Illingworth, J., & Kittler, J. (1988). A survey of the Hough transform. Computer Vision, Graphics, and Image Processing, 44(1), 87–116.
Jardine, A. K., Lin, D., & Banjevic, D. (2006). A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mechanical Systems and Signal Processing, 20(7), 1483–1510.
Ld, A. (2001). The problem with aviation COTS. IEEE Aerospace and Electronic Systems Magazine, 16(2), 33–37.
Li, R., Sopon, P., & He, D. (2012). Fault features extraction for bearing prognostics. Journal of Intelligent Manufacturing, 23(2), 313–321.
Mosallam, A., Medjaher, K., & Zerhouni, N. (2014). Data-driven prognostic method based on Bayesian approaches for direct remaining useful life prediction. Journal of Intelligent Manufacturing, 1–12. doi:10.1007/s10845-014-0933-4.
Nectoux, P., Gouriveau, R., Medjaher, K., Ramasso, E., Morello, B., Zerhouni, N., et al. (2012). Pronostia: An experimental platform for bearings accelerated life test. In IEEE International Conference on Prognostics and Health Management. Denver, CO, USA.
Pecht, M. (2008). Prognostics and health management of electronics. New York: Wiley Online Library.


Pecht, M., & Jaai, R. (2010). A prognostics and health management roadmap for information and electronics-rich systems. Microelectronics Reliability, 50(3), 317–323.
Rao, B. (1996). Handbook of condition monitoring. Amsterdam: Elsevier.
Ren, L., & Lv, W. (2014). Fault detection via sparse representation for semiconductor manufacturing processes. IEEE Transactions on Semiconductor Manufacturing, 27(2), 252–259.
Saha, B., Goebel, K., Poll, S., & Christophersen, J. (2009). Prognostics methods for battery health monitoring using a Bayesian framework. IEEE Transactions on Instrumentation and Measurement, 58(2), 291–296.
Saranga, H., & Knezevic, J. (2001). Reliability prediction for condition-based maintained systems. Reliability Engineering & System Safety, 71(2), 219–224.
Schwabacher, M. (2005). A survey of data-driven prognostics. In Proceedings of the AIAA Infotech@Aerospace Conference (pp. 1–5).
Si, X. S., Wang, W., Hu, C. H., & Zhou, D. H. (2011). Remaining useful life estimation – A review on the statistical data driven approaches. European Journal of Operational Research, 213(1), 1–14.
Sikorska, J., Hodkiewicz, M., & Ma, L. (2011). Prognostic modelling options for remaining useful life estimation by industry. Mechanical Systems and Signal Processing, 25(5), 1803–1836.
Stringer, D. B., Sheth, P. N., & Allaire, P. E. (2012). Physics-based modeling strategies for diagnostic and prognostic application in aerospace systems. Journal of Intelligent Manufacturing, 23(2), 155–162.
Sutrisno, E., Oh, H., Vasan, A. S. S., & Pecht, M. (2012). Estimation of remaining useful life of ball bearings using data driven methodologies. In 2012 IEEE Conference on Prognostics and Health Management (PHM) (pp. 1–7). IEEE.
Vidal, R., Ma, Y., & Sastry, S. (2003). Generalized principal component analysis (GPCA). In Proceedings. 2003 IEEE computer society conference on computer vision and pattern recognition (Vol. 1, pp. I-621). IEEE.
Wang, T., Yu, J., Siegel, D., & Lee, J. (2008). A similarity-based prognostics approach for remaining useful life estimation of engineered systems. In International Conference on Prognostics and Health Management, 2008. PHM 2008 (pp. 1–6). IEEE.
Wang, X., Rabiei, M., Hurtado, J., Modarres, M., & Hoffman, P. (2009). A probabilistic-based airframe integrity management model. Reliability Engineering & System Safety, 94(5), 932–941.
Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., & Ma, Y. (2009). Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 210–227.
Yang, J., Wright, J., Huang, T. S., & Ma, Y. (2010). Image super-resolution via sparse representation. IEEE Transactions on Image Processing, 19(11), 2861–2873.
Yao, A., Gall, J., & Van Gool, L. (2010). A Hough transform-based voting framework for action recognition. In 2010 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 2061–2068). IEEE.
Zio, E., & Di Maio, F. (2010). A data-driven fuzzy approach for predicting the remaining useful life in dynamic failure scenarios of a nuclear system. Reliability Engineering & System Safety, 95(1), 49–57.
Zio, E., & Peloni, G. (2011). Particle filtering prognostic estimation of the remaining useful life of nonlinear components. Reliability Engineering & System Safety, 96(3), 403–409.

