
PLANT WIDE SENSOR CALIBRATION MONITORING

J. Wesley Hines, Darryl J. Wrest, Robert E. Uhrig
Nuclear Engineering Department, The University of Tennessee
Knoxville, Tennessee 37996
hines@utkux.utk.edu
Abstract

This paper describes an Autoassociative Neural Network (AANN) based plant wide sensor calibration monitoring system that detects sensor drifts and replaces the faulty sensor output with a best estimate of the fault free value. A network tuning paradigm has also been implemented that allows the network to function when new plant conditions are encountered. The system was tested using data from the High Flux Isotope Reactor operated at Oak Ridge National Laboratory.

1. Introduction

Traditional approaches to sensor validation involve periodic instrument calibration. These calibrations are expensive in both labor and process down time. Many of them require that the instrument be taken out of service and falsely loaded to simulate actual in-service stimuli. This can lead to damaged equipment and incorrect calibrations due to adjustments made under non-service conditions. While proper adjustment is vital to maintaining effective plant operation, a less invasive technique is desirable.

As increased economic competitiveness necessitates streamlining plant operations, many maintenance groups are striving toward condition based maintenance rather than periodic, or worse yet, corrective maintenance. Making calibration strategies condition based requires that instruments be physically recalibrated only when their performance has degraded. Continuous monitoring of an instrument's calibration performance will allow plants to reduce the effort necessary to assure the instrument is in calibration. The benefits of continuous sensor calibration monitoring include the reduction of unnecessary maintenance and greater confidence in sensed parameter values. Reduced maintenance means cost savings and shorter outage times, while better knowledge of the actual state of the process can yield increased product quality and reduced equipment damage.
The use of Autoassociative Neural Networks (AANNs) for plant wide monitoring was developed by the University of Tennessee (UT) and reported in Nuclear Technology [1]. More recently, researchers at UT developed a sensor monitoring system for Florida Power Corporation's Crystal River #3 nuclear power plant [2,3]. Related nuclear work includes the monitoring of the Borssele Nuclear Power Plant using AANN techniques [4]. Similar work applying AANNs to chemical process systems has also been reported [5,6,7]. The work presented in this paper further advances the AANN methodology by introducing a faulty sensor replacement algorithm and a model tuning procedure.

2. Sensor Calibration Monitoring System

The sensor calibration monitoring system is composed of four major components: an autoassociative neural network (AANN), a statistical decision logic module (SPRT), a faulty sensor correction module, and a network tuning module. These modules work together to monitor the plant sensors for drift and gross failures.

Autoassociative Neural Network

An autoassociative neural network is a network whose outputs are trained to emulate its inputs over an appropriate dynamic range. Many plant variables that have some degree of coherence with each other constitute the inputs. During training, the interrelationships between the variables are embedded in the neural network connection weights. A robust training procedure forces the network to rely on the information inherent in the signals correlated with a specific sensor to estimate that sensor's value. As a result, any specific network output shows virtually no change when the corresponding input has been distorted by noise, faulty data, or missing data. This characteristic allows the AANN to detect sensor drift or failure by comparing the sensor output, which is the network input, with the corresponding network estimate of the sensor value.
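As a concrete sketch of the identity-mapping idea, an AANN of the bottlenecked kind used in this work can be written as a plain feedforward pass. This is an illustrative sketch, not the authors' implementation; the tanh activations, the layer sizes, and the function names are assumptions, while the mapping/bottleneck/demapping structure follows the architecture described later in the paper.

```python
import numpy as np

def aann_forward(x, weights, biases):
    """Forward pass of a bottlenecked autoassociative network.

    x       : (n_sensors,) input vector of measured sensor values
    weights : list of 4 weight matrices for the mapping, bottleneck,
              demapping, and linear output layers (assumed layout)
    biases  : matching list of bias vectors

    The output has the same dimension as the input; after training it
    is the network's estimate of the sensor values.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(W @ h + b)           # nonlinear hidden layers
    return weights[-1] @ h + biases[-1]  # linear output layer
```

With a trained network, the residuals are simply the inputs minus this output, one per sensor.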
Figure 1. Sensor Monitoring Module [diagram: sensor inputs s1-s4 feed the AANN model, which produces estimates s1'-s4'; the residuals r1-r4 feed statistical decision logic that outputs a fault hypothesis]

Statistical Decision Logic

The decision logic module implements the sequential probability ratio test (SPRT) initially developed by Wald [8] and later used by Upadhyaya [9]. This module takes the residual as its input and outputs the condition of the sensor. Rather than computing a new mean and variance at each sample time, the SPRT continuously monitors the sensor's performance by processing the residuals. The SPRT based method is optimal in the sense that a minimum number of samples is required to detect a fault existing in the signal. When a sensor is operating correctly, the residual should have a mean of zero and a variance comparable to that of the sensor (due to the filtering characteristics of an AANN). If there is sensor drift, the residual mean shifts and the likelihood ratio increases. This ratio is a measure of how far the residual has moved from zero. If the likelihood ratio rises above a predefined boundary (specified by the user through false and missed alarm probabilities), the residuals are more likely to be from the faulted distribution than from the unfaulted distribution, and the sensor is classified as faulted. When the likelihood ratio is below the boundary, the sensor is said to be good. If a sensor is determined to be faulty, the likelihood ratio is reset to zero and the calculation to determine the status of the sensor begins again.

Faulty Sensor Correction

The statistical decision module continues to monitor a sensor output even after it has been determined to be faulty. While the sensor is faulted, the best estimate of the sensor value (the neural network output) can be used for input into control systems, for display to plant operators, or for other sensitive tasks. The best estimate also replaces the faulty sensor as input into the AANN so that the monitoring of other sensors is not degraded. The actual sensor output is substituted back into the network when the fault has cleared. This method always gives the operator access to the best estimate of the parameter, whether that is the unfaulted measured value or the estimated value.

Network Tuning

The AANN architecture used is the three hidden layer feedforward network proposed by Kramer [5]. It consists of an input layer, three hidden layers, and an output layer. The first hidden layer is the mapping layer, with dimension greater than the number of inputs/outputs. The second hidden layer is the bottleneck layer, whose dimension is required to be the smallest in the network. The third hidden layer is the demapping layer, which has the same dimension as the mapping layer. Kramer points out that five layers are necessary for such networks to model non-linear processes. The decision to use this architecture and its implementation are discussed in reference [3]. The three hidden layers form a "feature detection" architecture in which the bottleneck layer plays the key role in the identity mapping. The mapping layer maps from the input data space to the non-linear principal component space (the bottleneck layer), and the demapping layer maps from the non-linear principal component space back to the data space (the network output), corrected by the non-linear principal components.

This network learns the interrelationships between the variables during training. Although the training set should include samples from all plant operating regions, the operating state may sometimes change to one that was not included in the training set. This can be caused by component wear, cyclical changes, or changes in the plant configuration, among others. Such changes would be detected by several residuals deviating significantly from their normal mean of zero. When this happens, the output of the AANN is not reliable and the network must be retrained to operate under the new conditions. If only one residual changes, a sensor fault is hypothesized instead.
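The SPRT logic in the Statistical Decision Logic section above can be sketched as follows. This is a minimal one-sided sketch assuming Gaussian residuals; the faulted-mean magnitude `m1`, the variance `var`, and the reset-on-decision behavior follow the description in the text, while the function and variable names are illustrative.

```python
import math

def sprt(residuals, m1, var, alpha=0.01, beta=0.01):
    """One-sided SPRT for a mean shift of m1 in a zero-mean Gaussian
    residual sequence with variance var.

    alpha, beta : user-specified false and missed alarm probabilities,
                  which set the decision boundaries.
    Returns a 0/1 fault-hypothesis flag per sample.
    """
    upper = math.log((1.0 - beta) / alpha)  # boundary: accept faulted
    lower = math.log(beta / (1.0 - alpha))  # boundary: accept unfaulted
    llr = 0.0
    flags = []
    for r in residuals:
        # log-likelihood ratio increment for N(m1, var) versus N(0, var)
        llr += (m1 / var) * (r - m1 / 2.0)
        if llr >= upper:
            flags.append(1)  # residuals favor the faulted distribution
            llr = 0.0        # reset; the sequential test begins again
        else:
            if llr <= lower:
                llr = 0.0    # strong evidence the sensor is good; restart
            flags.append(0)
    return flags
```

A persistent residual shift drives the ratio to the upper boundary in few samples, which is the minimum-sample optimality the text refers to.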
Two methods of retraining have been investigated: history stack adaptation [10] and retraining only the linear output layer weights [11]. History stack adaptation is an adaptive training paradigm that simply adds new patterns to the training set and retrains the entire network. Several schemes give greater weight to newer patterns and drop old patterns out of the stack. Many researchers believe that the hidden layers of a neural network act as feature extractors and that the linear output layer combines these features to provide the desired mapping. If the features do not change when a plant operating condition changes, then only the output layer weights need to be altered to perform the desired mapping, without retraining the entire network. This assumption appears to hold for small changes in operating conditions, so retraining only the output weights should always be attempted first. Retraining the entire network may be necessary for major changes in plant operating conditions, when retraining the output weights does not give satisfactory performance.

Retraining only the linear output layer is very fast. In fact, the weights are not retrained at all; a least squares procedure solves for them directly. Several methods exist for solving for the linear output weights. Pseudoinverse methods can cause numerical inversion problems; LU or QR decompositions are better. The best method uses the singular value decomposition (SVD) [12], which uses the most relevant information to compute the weight matrix and discards unimportant information that may be due to noise. The SVD method is also used during the original network training and resulted in a 40x reduction in training time [13] over backpropagation with an adaptive learning rate and momentum.

System Integration and Implementation

Figure 2 presents a SIMULINK [14] block diagram showing the interrelations between the modules.
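The least squares solve for the output layer weights can be sketched as follows. This is an illustration, not the authors' code: `H` stands for the (assumed) matrix of demapping-layer activations over the training patterns and `T` for the target outputs, and NumPy's `lstsq`, which is SVD based, discards singular values below `rcond` times the largest, mirroring the noise-rejection property described above.

```python
import numpy as np

def solve_output_weights(H, T, rcond=1e-6):
    """Solve H @ W = T for the linear output weights W in the least
    squares sense using an SVD-based routine.

    H : (n_patterns, n_hidden) hidden (demapping) layer activations
    T : (n_patterns, n_outputs) target outputs
    Singular values smaller than rcond times the largest are treated
    as noise and discarded by the solver.
    """
    W, *_ = np.linalg.lstsq(H, T, rcond=rcond)
    return W
```

Retuning then amounts to appending the new patterns' activations and targets to `H` and `T` and re-solving, which is why the procedure is so fast compared with full retraining.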
Measured sensor data enters the system sequentially and is first processed by the correction module. If a sensor has not been determined to be faulty, its measured value is used as input to the AANN. In this figure, 13 sensors are being monitored. The network produces an estimate of each individual sensor value. These estimates are compared to the actual values to form residuals. The residuals are fed to the SPRT based decision logic module, which decides the status of each sensor. If a sensor is determined to be faulty, the correction module substitutes the estimated value for the sensor output and uses it as the input to the AANN. If the sensors remain fault free, the actual sensor outputs are used as inputs to the AANN.
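The substitution flow just described can be sketched per sample as follows. Every interface here is an assumption made for illustration; `aann` and `sprt_decide` are stand-ins for the network and decision modules, not the SIMULINK implementation.

```python
def monitor_step(measured, faulted, prev_estimates, aann, sprt_decide):
    """One pass of the monitoring loop over a list of sensors.

    measured       : current sensor readings
    faulted        : current per-sensor fault status (bools)
    prev_estimates : AANN estimates from the previous sample
    aann           : callable returning estimates for an input vector
    sprt_decide    : callable mapping residuals to fault decisions

    Known-faulty channels are replaced by their estimates before the
    AANN sees them, so one bad sensor does not degrade the rest.
    """
    net_input = [p if bad else m
                 for m, p, bad in zip(measured, prev_estimates, faulted)]
    estimates = aann(net_input)
    residuals = [m - e for m, e in zip(measured, estimates)]
    faulted = sprt_decide(residuals)      # updated fault hypotheses
    # the operator always sees the best available value:
    # the measurement if healthy, the estimate if faulted
    display = [e if bad else m
               for m, e, bad in zip(measured, estimates, faulted)]
    return display, estimates, faulted
```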

The reset buttons on the left side of the figure allow the user to acknowledge sensor faults and to try to reset the AANN input to the actual sensor output. This is useful for clearing spurious alarms. An automatic mode substitutes the sensor output back into the network whenever the fault clears.

3. HFIR Example

Data from the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) was used to test the sensor calibration monitoring system methodology. Fifty-six sensors sampled at two second intervals were divided into four groups based on linear correlation coefficients and genetic algorithm selection. The secondary sensor group, consisting of 13 sensors, is used in this example. The linear correlations between these sensors ranged from 0.1 to 0.9. An AANN was trained using 215 patterns from the first two days of operation. Decision logic alarm levels were set to give zero false alarms. This resulted in detection levels between 0.5% and 3% of the sensors' full range.

Simulated Drift Detection

A simulated drift of the reactor inlet temperature (sensor #1) was inserted at the rate of 0.5% per day. The drift was detected in about 9 hours, which corresponds to a 0.2% detection threshold. Figure 3 shows the results of this test. The upper graph shows the simulated faulty signal and the AANN estimate of the signal. The faulty sensor signal is declining while the AANN estimate is steady. Also note that the estimate has much less noise than the measured signal. This is because the other 12 signals are primarily being used to estimate the signal and their noise tends to cancel. The middle graph shows the residual, while the lower graph is the output of the SPRT based decision module. An SPRT output of zero corresponds to a fault free sensor and an output of one means the sensor is faulty. The output is reset to zero after a decision is made that the sensor is faulty. When the residual is so large that the decision can be made in one sample interval, the output locks high. This example shows that the AANN based system can detect and identify very small sensor drifts. Previous work using Florida Power Corporation's Crystal River Nuclear Power Plant data gave similar results [2,3].

Network Tuning

Next, the system, which had been trained only with the first two days of sensor data, was tested using data from late in the fuel cycle. Since the network had never been trained on data from this operating condition, we would expect the poor performance that Figure 4 shows. The system was tuned to the new operating condition by adjusting only the output layer weights. To do this, a new training set was made by appending patterns from the first hour of new operation to the old training set and applying the SVD methodology discussed earlier. Figure 5 shows the network performance after retuning. The test data used for this validation was collected from a period subsequent to the retuning data. This simple and fast retuning enabled the system to correctly estimate the sensor outputs. Drift detection tests gave results similar to those discussed above.

4. Conclusions

The results of this study have shown that a plant wide sensor calibration monitoring system using autoassociative neural networks is not only feasible but practical. The system is composed of an AANN sensor redundancy module and a SPRT based fault detection module. A faulty sensor replacement module and a model tuning module have also been constructed. The complete sensor monitoring system has been integrated using The MathWorks' SIMULINK software and applied to the High Flux Isotope Reactor. The results show that sensor degradation can be detected at levels between 0.5% and 3% of full range. Not only is the fault detected, but the sensor signal can be replaced with a fault free signal so that plant operations can continue. Lastly, the output layer tuning method has proven to be an efficient means of correcting for changes in plant operating conditions.

We would like to acknowledge and thank the engineers at HFIR for providing us with this data.

5. References

[1] B. R. Upadhyaya and E. Eryurek, "Application of Neural Networks for Sensor Validation and Plant Monitoring," Nuclear Technology, vol. 97, pp. 170-176, February 1992.
[2] D. J. Wrest, J. W. Hines, and R. E. Uhrig, "Instrument Surveillance and Calibration Verification Through Plant Wide Monitoring Using Autoassociative Neural Networks," in Proceedings of the 1996 American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation, Control and Human Machine Interface Technologies, University Park, PA, 1996.
[3] D. J. Wrest, J. W. Hines, and R. E. Uhrig, "Instrument Surveillance and Calibration Verification to Improve Nuclear Power Plant Reliability and Safety Using Autoassociative Neural Networks," in Proceedings of the International Atomic Energy Agency Specialist Meeting on Monitoring and Diagnosis Systems to Improve Nuclear Power Plant Reliability and Safety, Barnwood, Gloucester, United Kingdom, May 14-17, 1996.
[4] K. Nabeshima, K. Susuki, and T. Turkan, "Real-Time Nuclear Power Plant Monitoring With Hybrid Artificial Intelligence Systems," in Proceedings of the 9th Power Plant Dynamics, Control & Testing Symposium, vol. 2, pp. 51.01, University of Tennessee, Knoxville, May 24-26, 1995.
[5] M. A. Kramer, "Nonlinear Principal Component Analysis Using Autoassociative Neural Networks," AIChE Journal, vol. 37, no. 2, pp. 233-243, February 1991.
[6] M. A. Kramer, "Autoassociative Neural Networks," Computers in Chemical Engineering, vol. 16, no. 4, pp. 313-328, 1992.
[7] D. Dong and T. J. McAvoy, "Sensor Data Analysis Using Autoassociative Neural Nets," in Proceedings of the World Congress on Neural Networks, vol. 1, pp. 161-166, San Diego, June 5-9, 1994.
[8] A. Wald, "Sequential Tests of Statistical Hypotheses," Annals of Mathematical Statistics, vol. 16, pp. 117-186, 1945.
[9] B. R. Upadhyaya, F. P. Wolvaardt, and O. Glockler, "An Integrated Approach for Sensor Failure Detection in Dynamic Systems," research report prepared for the Measurement & Control Engineering Center, Report No. NE-MCEC-BRU-87-01, 1987.
[10] P. Mills, A. Y. Zomaya, and M. Tade, Neuro-Adaptive Process Control: A Practical Approach, John Wiley and Sons, Chichester, England, 1996.
[11] J. T.-H. Lo, "Adaptive System Identification by Non-Adaptively Trained Neural Networks," in Proceedings of the 1996 International Conference on Neural Networks, Washington, DC, pp. 2066-2071, June 3-6, 1996.
[12] T. Masters, Practical Neural Network Recipes in C++, pp. 180-185, Academic Press, San Diego, 1993.
[13] R. E. Uhrig, J. W. Hines, C. Black, D. Wrest, and X. Xu, "Instrument Surveillance and Calibration Verification System," Sandia National Laboratory contract AQ-6982 Final Report by The University of Tennessee, March 1996.
[14] SIMULINK Dynamic System Simulation Software, The MathWorks, Natick, Massachusetts, 1993.

Figure 2. Sensor Calibration Monitoring System [SIMULINK diagram: HFIR data loaded from hfir_13.mat feeds the correction module and the 13-input AANN; residuals formed from the actual and estimated values pass through 13 filters and 13 SPRTs to produce the fault hypothesis, which is fed back to the correction module; per-sensor reset buttons, a reset-all button, and an output to the workspace are included]

Figure 3. 0.5% Per Day Simulated Drift in Sensor #1 [three panels versus time in 2 minute intervals: the #3 reactor inlet temperature (Comp. Pt. 34, degrees F) for the sensor and the AANN estimate; the residual (difference between sensor signal and AANN estimate, degrees F); and the SPRT fault hypothesis index]

Figure 4. Testing of System With Data From a New Operating Condition [three panels versus time in 2 minute intervals: the #3 reactor inlet temperature (Comp. Pt. 34, degrees F) for the sensor and the AANN estimate; the residual (difference between sensor signal and AANN estimate, degrees F); and the SPRT fault hypothesis index]

Figure 5. Testing of System After Tuning [three panels versus time in 2 minute intervals: the #3 reactor inlet temperature (Comp. Pt. 34, degrees F) for the sensor and the AANN estimate; the residual (difference between sensor signal and AANN estimate, degrees F); and the SPRT fault hypothesis index]
