
ISA TRANSACTIONS

ISA Transactions 44 (2005) 23-34

Linear mass balance equilibration: A new approach for an old problem
J. Ragot, D. Maquin, M. Alhaj-Dibo
Centre de Recherche en Automatique de Nancy, CNRS UMR 7039, 2, Avenue de la Forêt de Haye, 54516 Vandoeuvre Cedex, France

Received 6 November 2003; accepted 3 June 2004

Abstract
Interval analyses are well known in the mathematics literature but have found few applications in control engineering. Based on the interval concept, we present here a methodology for data reconciliation and mass balance equilibration, a very classical problem in mineral and chemical engineering. This problem is solved in the light of inequality constraints, which allows us to represent measurements by intervals without particular knowledge or hypotheses about the probability density function of the measurement errors. As a main result, the paper gives a set of solutions for the reconciled data under an interval form, and not only one solution as is the case with classical approaches. © 2005 ISA - The Instrumentation, Systems, and Automation Society.
Keywords: Interval; Uncertainties; Measurement errors; Data reconciliation; Mass balance equilibration; Estimation; Bounded approach

1. Introduction
For plant control improvement, the coherency of the information supplied by instrument lines and sensors must be ensured. The problem of data reconciliation may be formulated in simple terms. Since measurements of process variables are subject to errors, they generally fail to exactly verify the process functioning equations. How may we adjust, or reconcile, these measurements to force them to verify the set of equations assumed to be exact? Thus formulated, data reconciliation is a procedure of adjusting measured data so that they obey the constraint equations of the process, such as conservation laws [1-4]. With this view, data reconciliation can be transformed into a problem of optimum computation for an objective function subject to specified constraints [5]. The formulation of the problem is not, however, so easy in practice. Indeed, a number of delicate decisions must be made about the nature of the statistical distribution of the measurement errors, about the system dimensions (i.e., the number of variables and constraints), and about the nature of the constraints (choice between static and nonstatic constraints, between linear and nonlinear constraints), sometimes with incomplete or imperfect knowledge about the structure or parameters of the model involved.
This last problem presently receives increasing attention and is known as the robustness problem [6,7]. The reader will also find works reporting results about the problem of diagnosis and, in particular, methods to detect, localize, and identify gross measurement errors [8,9,5]. For non-steady-state processes we refer the reader to Refs. [10] and [2]. These authors handle the data reconciliation problem using Kalman filtering and the state-space formalism. Reconciliation techniques have been applied in various fields, very often to chemical processes [11,12,3], to power plant systems [4,13], and to paper processing [14]. There are also many works in the field of mineral processing [15-17], and in Ref. [13] a specific approach dealing with entropy evaluation has been developed. A notable feature of mineral processing data is that there are specific constraints for assays, particle sizes, and volumetric and mass flow rates, giving rise to bilinear, trilinear, and even quadrilinear constraints; this particularity has been turned to account to develop specific reconciliation procedures [18,19]. In Ref. [24], a survey of the problem of data reconciliation has been presented, with specific considerations in the field of mineral processing. Several recent books present the basic statement of data reconciliation and specific developments about gross error detection and localization, observability analysis, classification of variables, reliability, and sensor positioning [20-22].

0019-0578/2005/$ - see front matter © 2005 ISA - The Instrumentation, Systems, and Automation Society.
The general assumption of data reconciliation concerns the measurement errors. In almost all the works on the subject, the authors presuppose the measurement errors to be independent and Gaussian, with zero means and known variances. For this last point, however, some developments deal with the problem of measurement variance estimation [23]. Summarizing, the various data reconciliation algorithms may be expressed in the following way. Let x̄ represent the measured values and x the true unknown values. Assume that the measurements x̄ contain errors:

x̄ = x + e,    (1)

where e is supposed to be the realization of a random variable, normally distributed with zero mean and known positive covariance matrix V. From the normal distribution hypothesis, the objective function for the data reconciliation problem may be deduced:

(1/2) (x̂ - x̄)^T V^{-1} (x̂ - x̄).    (2)

The model of the process is written

f(x̂) = 0,    (3)

where f(x̂) generally represents the set of equations involving the mass and energy balances. Then, the objective function can be minimized subject to the constraints via classical approaches, and the obtained solution x̂ is the so-called reconciled data. The optimality/validity of this approach is based on the main assumption that the errors follow a normal (Gaussian) distribution; when this assumption is satisfied, conventional approaches [1,8,24] provide an unbiased estimate of the process state variables. However, normally distributed errors are rarely encountered in practice.
In the following, we propose to relax this hypothesis of normal error distribution and to use only bounds (upper and lower) on the process variables, chosen by the user according to the measurement values and their precisions. It is important to note that the assumption that the measurement variances are perfectly known is very common in the context of the data reconciliation method. In fact, it is often difficult to have access to that type of information, as the precision of industrial measurements results not only from the sensor precision but also from the operating conditions. Moreover, when taking into account the sensor precision only, most practitioners use an empirical correspondence between the magnitude of the relative error of a sensor and the standard deviation of a normal distribution. So, the use of bounded errors for characterizing the measurement errors is as justified as the use of a statistical distribution. We even argue that this type of model is much closer to the physical phenomena, as real errors are always bounded; that is not the case for a normal distribution, defined on an infinite support.
Moreover, the estimates obtained by means of data reconciliation are only sensitive to the relative weights assigned to the different measurements in the optimized criterion, and not to the concrete values of these weights. Only the fault detection methods really use these weights for detecting and isolating faults, but all practitioners know that these data must be adjusted on line when exploiting the method on the real process. So, the adjustment of the interval radii, in the context of bounded errors, is as easy as that of the standard deviations in the stochastic context. Moreover, with this new approach, we show that the reconciliation problem may admit several solutions; for that purpose, we present a systematic way to construct a simple geometrical domain containing a set of solutions. In Section 2, two academic examples introduce the interval approach; Section 3 gathers some rules for interval computations. The interval reconciliation approach is developed in Section 4 and is illustrated by an example in Section 5.
2. Two academic examples
Two academic examples introduce the problem
of data reconciliation. The first one formulates the


estimation with inequality constraints that may be


solved without computational effort, while the
second explains the basic idea for computing the
whole set of solutions.
2.1. First example

The first example concerns a process described by three flows, each being characterized by its total mass flow rate (denoted x_i). Eq. (4a) has been established from the flow rate conservation, under the hypothesis that the process is in steady-state conditions. Eq. (4b) gathers the constraints derived from the measurements collected on the process, which are themselves expressed under interval form; it should be noted that the length of these intervals depends on the precision of the measurement. The reconciliation consists of finding estimations x̂_i of all the flow rates x_i, in agreement with the model of the process (5a) and the measurements (5b):

x_1 - x_2 - x_3 = 0,    (4a)

14 ≤ x_1 ≤ 18,
12 ≤ x_2 ≤ 14,    (4b)
3 ≤ x_3 ≤ 6.

Indeed, the estimations x̂_i are solutions of the system:

x̂_1 - x̂_2 - x̂_3 = 0,    (5a)

14 ≤ x̂_1 ≤ 18,
12 ≤ x̂_2 ≤ 14,    (5b)
3 ≤ x̂_3 ≤ 6.

Analyzing system (4a) and (4b), it may be possible to obtain several estimations of the flow rates, all being coherent with the model and the measurements. For this example, the reader can verify that the set x̂_1 = 17, x̂_2 = 12, x̂_3 = 5 perfectly agrees with the model and the measurements. The set x̂_1 = 17, x̂_2 = 12.5, x̂_3 = 4.5 also verifies model (5a).

Fig. 1. Exact solution domain (coordinates x̂_1, x̂_2).

In fact, there is an infinity of solutions, and it would be interesting to find a way to characterize the whole set of solutions. As an example, all the solutions belonging to the domain 12 ≤ x̂_2 ≤ 13, 4 ≤ x̂_3 ≤ 5, x̂_1 = x̂_2 + x̂_3 verify system (4a) and (4b), and all the solutions belonging to the domain 12.5 ≤ x̂_2 ≤ 13.5, 4 ≤ x̂_3 ≤ 4.5, x̂_1 = x̂_2 + x̂_3 are also solutions. Among all these domains, we may seek to define those having the greatest area. To explain this point, we have drawn in Fig. 1 the domain in the plane (x̂_1, x̂_2) containing all the solutions of system (5a) and (5b). It is easy to see that this domain is generated by the intersection of the three strips defined by the inequalities:

14 ≤ x̂_1 ≤ 18,    (6a)

12 ≤ x̂_2 ≤ 14,    (6b)

3 ≤ x̂_1 - x̂_2 ≤ 6.    (6c)

However, such domains are not easy to handle, because the inequalities defining them are coupled by some variables. In the given example, inequalities (6a) and (6b) describe a box cut by the strip (6c), which is not parallel to the axes; this renders the description of the domain depicted in Fig. 1 complicated. Fig. 2 shows some reduced but admissible domains included in the true one and represented by boxes, the remaining problem being the definition and the selection of the best box. Note, however, that the domains depicted in Fig. 2 are described by noncoupled inequalities: 16 ≤ x̂_1 ≤ 17.5, 12 ≤ x̂_2 ≤ 13 and 16.5 ≤ x̂_1 ≤ 18, 12 ≤ x̂_2 ≤ 13.5, which finally describe an admissible solution very simply.
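Because the constraints are linear, a box lies inside the solution domain if and only if its four corners satisfy the three strips (6a)-(6c); a small Python check (ours, illustrative only):

```python
from itertools import product

def box_admissible(x1_lo, x1_hi, x2_lo, x2_hi):
    """Check every corner of the box against the strips (6a)-(6c)."""
    for x1, x2 in product((x1_lo, x1_hi), (x2_lo, x2_hi)):
        if not (14 <= x1 <= 18 and 12 <= x2 <= 14 and 3 <= x1 - x2 <= 6):
            return False
    return True

print(box_admissible(16.0, 17.5, 12.0, 13.0))  # True
print(box_admissible(16.5, 18.0, 12.0, 13.5))  # True
```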
Fig. 2. Some admissible reduced domains (coordinates x̂_1, x̂_2).

Remark: The existence of a solution for a system like Eqs. (4a) and (4b) depends on the measurement values. For example, considering model (4a) with the constraints 14 ≤ x_1 ≤ 18, 12 ≤ x_2 ≤ 14, 7 ≤ x_3 ≤ 8 does not produce a solution. Clearly, the presence of gross errors or outliers leads to such a situation. In fact, it is easy to detect these gross errors by analyzing the balance residual; in the proposed example, the reader can verify that the residual is the interval [-8, -1], which does not contain the zero value. Thus, taking into account the bounded errors affecting the measurements does not lead to a residual which may take the value zero; that reveals the presence of a gross error in the data. With a more formal description, for the same model (4a) associated with the constraints x_inf,i ≤ x_i ≤ x_sup,i, the existence condition of a solution may be expressed as x_sup,2 + x_sup,3 ≥ x_inf,1 and x_inf,2 + x_inf,3 ≤ x_sup,1. This consistency analysis of residual intervals may be generalized to any system. Other approaches are robust with respect to the presence of gross errors and allow us to reconcile data including these gross errors [25].
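The residual test of the remark can be reproduced with elementary interval arithmetic (a sketch; the subtraction rule is the one recalled in Table 2):

```python
def isub(a, b):
    """Interval subtraction: [a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]."""
    return (a[0] - b[1], a[1] - b[0])

x1, x2, x3 = (14, 18), (12, 14), (7, 8)
r = isub(isub(x1, x2), x3)   # balance residual x1 - x2 - x3
print(r)                     # (-8, -1)
print(r[0] <= 0 <= r[1])     # False: zero excluded, gross error present
```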
2.2. Second example
The second example allows us to formulate data reconciliation as an optimization problem. The concerned system is characterized by five variables linked by three equations; all the variables are measured, the measurements being expressed by intervals. Following the previous example, let us directly formulate the reconciliation problem with inequalities. System (7) describes the model constraints, while system (8) expresses the estimation constraints, the bounds x_inf,i and x_sup,i of which are fixed by the measurement precision:

x_3 - x_1 - x_2 = 0,
x_4 - x_1 - 2 x_2 = 0,    (7)
x_5 + x_1 - x_2 = 0,

x_inf,i ≤ x̂_i ≤ x_sup,i ,  i = 1, ..., 5.    (8)

As previously explained, it is tempting to express the set of solutions as a box defined by

x̂_i = x_c,i + ρ_i ε_i ,  |ε_i| ≤ 1,    (9)

where x_c,i and 2 ρ_i are, respectively, the center and the diameter of the box, and where ε_i takes its values in a normalized interval. As claimed before, the main advantage of such a representation lies in the decoupling of the inequalities characterizing the domain. The characterization of that box consists in determining the two parameters x_c,i and ρ_i. For that, let us observe that the model (7) allows us to classify the variables into an independent set and a dependent one. In the following, we arbitrarily choose x_1 and x_2 as independent variables, and thus system (8) may be expanded into system (10):

x_inf,1 ≤ x_c,1 + ρ_1 ε_1 ≤ x_sup,1 ,
x_inf,2 ≤ x_c,2 + ρ_2 ε_2 ≤ x_sup,2 ,
x_inf,3 ≤ x_c,1 + x_c,2 + ρ_1 ε_1 + ρ_2 ε_2 ≤ x_sup,3 ,
x_inf,4 ≤ x_c,1 + 2 x_c,2 + ρ_1 ε_1 + 2 ρ_2 ε_2 ≤ x_sup,4 ,    (10)
x_inf,5 ≤ x_c,2 - x_c,1 + ρ_2 ε_2 - ρ_1 ε_1 ≤ x_sup,5 ,
|ε_1| ≤ 1,  |ε_2| ≤ 1.

As we aim to obtain the greatest domain expressing the solutions of the reconciliation problem, we have to choose the box with maximal area. Consequently, the product ρ_1 ρ_2 has to be maximized, taking into account the set of inequalities (10). Thus, using the extreme values of ε_1 and ε_2, the quantities ρ_1, ρ_2, x_c,1, and x_c,2 are solutions of the problem:


Table 1
Interval measurements.

Var.   x_inf   x_sup
1      2.25    3.00
2      3.50    5.25
3      5.50    8.50
4      10.50   14.50
5      1.00    2.50
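The box-area maximization stated above (maximize ρ_1 ρ_2 under the interval constraints, taken at the extreme values of ε_1 and ε_2) can be handed to a generic constrained solver. The sketch below is ours and uses SciPy's SLSQP method with the data of Table 1 (the paper itself mentions MATLAB's LMI solver); the rows of the dependency matrix follow Eqs. (7). Exact optimum values are not asserted, only feasibility of the returned box:

```python
import numpy as np
from scipy.optimize import minimize

# Measurement bounds of Table 1 for x1..x5
lo = np.array([2.25, 3.50, 5.50, 10.50, 1.00])
hi = np.array([3.00, 5.25, 8.50, 14.50, 2.50])
# x = A @ (x1, x2): rows for x1, x2, x3 = x1+x2, x4 = x1+2*x2, x5 = x2-x1
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 2.0], [-1.0, 1.0]])

def neg_area(p):                  # p = (xc1, xc2, rho1, rho2)
    return -p[2] * p[3]           # maximize the box area rho1 * rho2

def feasibility(p):
    centers = A @ p[:2]
    radii = np.abs(A) @ p[2:]     # worst case over |eps_i| <= 1
    return np.concatenate([centers - radii - lo, hi - centers - radii])

res = minimize(neg_area, x0=[2.7, 4.4, 0.1, 0.1], method="SLSQP",
               bounds=[(None, None)] * 2 + [(0.0, None)] * 2,
               constraints={"type": "ineq", "fun": feasibility})
print(res.x.round(3))             # (xc1, xc2, rho1, rho2)
```

With these data, the solver should land near the values reported in the text (ρ_1 ≈ 0.29, ρ_2 ≈ 0.44), but only feasibility is guaranteed by construction.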

max ρ_1 ρ_2
subject to
ρ_1 ≥ 0,  ρ_2 ≥ 0,
x_inf,1 - x_c,1 + ρ_1 ≤ 0,   x_c,1 + ρ_1 - x_sup,1 ≤ 0,
x_inf,2 - x_c,2 + ρ_2 ≤ 0,   x_c,2 + ρ_2 - x_sup,2 ≤ 0,
x_inf,3 - x_c,1 - x_c,2 + ρ_1 + ρ_2 ≤ 0,   x_c,1 + x_c,2 + ρ_1 + ρ_2 - x_sup,3 ≤ 0,    (11)
x_inf,4 - x_c,1 - 2 x_c,2 + ρ_1 + 2 ρ_2 ≤ 0,   x_c,1 + 2 x_c,2 + ρ_1 + 2 ρ_2 - x_sup,4 ≤ 0,
x_inf,5 - x_c,2 + x_c,1 + ρ_1 + ρ_2 ≤ 0,   x_c,2 - x_c,1 + ρ_1 + ρ_2 - x_sup,5 ≤ 0.

Such a problem may be solved by using classical algorithms in the field of optimization (see, for example, the linear matrix inequality solver of MATLAB). With the data gathered in Table 1, one obtains ρ_1 = 0.292, ρ_2 = 0.437, x_c,1 = 2.708, and x_c,2 = 4.479. Thus the final description of the reconciled data is defined by the box represented by Eq. (12). It is important to point out that the result is expressed as a set of admissible solutions, contrary to more classical estimation methods that produce a unique solution (the most probable one, in stochastic terms):

x̂_1 = [2.416, 2.999],
x̂_2 = [4.042, 4.916],
x̂_3 = x̂_1 + x̂_2 ,    (12)
x̂_4 = x̂_1 + 2 x̂_2 ,
x̂_5 = x̂_2 - x̂_1 .

3. Review of interval arithmetics
As previously mentioned in Section 2, the proposed reconciliation technique intensively uses


variables described under an interval form and operations between these intervals. This section reviews the basic interval arithmetic operations (see Table 2) and the algorithms for interval computations used in the rest of the paper. Only real intervals are considered; by definition, a real interval is a segment of the real number axis, and an interval vector of R^v is a v-dimensional rectangle, or box, which is the Cartesian product of v intervals. It should be noted that all continuous basic functions such as sin, cos, etc., extend easily to intervals. In the following, all intervals are typeset in bold characters.
Interval arithmetic takes into consideration the uncertainty of all of the parameters of a system and is able to provide strict bounds on the variables that are to be estimated. Initially, interval computation was developed to quantify the uncertainty of results calculated with a computer using a floating-point number representation [26]. A considerable body of literature has been published on the use of intervals in various fields such as identification, control, and signal and image processing.

Table 2
Interval arithmetic operations.

Interval number:           x = [x⁻, x⁺], with x⁻ the lower bound and x⁺ the upper bound
Center, radius:            x_c = (x⁻ + x⁺)/2,  x_r = (x⁺ - x⁻)/2, so that x = x_c + x_r [-1, 1]
Interval addition:         z = x + y = [x⁻ + y⁻, x⁺ + y⁺]
Interval subtraction:      z = x - y = [x⁻ - y⁺, x⁺ - y⁻]
Interval multiplication:   z = x y = [min(x⁻y⁻, x⁻y⁺, x⁺y⁻, x⁺y⁺), max(x⁻y⁻, x⁻y⁺, x⁺y⁻, x⁺y⁺)]
Scalar multiplication:     if a ≥ 0, z = a x = [a x⁻, a x⁺];  if a < 0, z = a x = [a x⁺, a x⁻]
Interval division:         z = x / y = x [1/y⁺, 1/y⁻], unless 0 ∈ [y⁻, y⁺], in which case the result of the division is undefined

4. Interval data reconciliation: General case

The structural information in a plant can be conveniently represented by a directed graph, the nodes of which represent the process units (reactors, tanks, distillation columns), while the arcs represent the streams of circulating matter. The mathematical model, originating from the mass conservation laws and assuming linear relationships, is written under the exact structural form

M x = 0,    (13)

where M ∈ R^{n×v} is the incidence matrix of the process graph, with n the number of nodes and v the number of arcs, and x ∈ R^v is the vector of true values, unreachable by measurement.
As claimed before, the assumption of an underlying normal distribution is not very realistic and can be criticized in regard to the fact that it extends to infinity. In common practice, the probability of a variation of more than five standard deviations is small enough to be considered negligible. Moreover, for a value around zero, a significant probability of a negative value could occur against all physical sense. Thus the use of a log-normal distribution or another adapted distribution could avoid this kind of difficulty. That may justify our approach for data reconciliation, which does not assume the hypothesis of a normal distribution. There are few works published in this area. In Ref. [27], the use of bounds for estimation with an interval formulation is called to mind and developed on an example. More recently, in Refs. [28] and [29], the use of the LMI (linear matrix inequality) approach allows the bounded estimation problem to be formulated more generally, and an admissible solution is proposed.
According to the precision of the measurement devices, the available measurement x̄ is expressed as an interval:

x̄ = [x̄⁻, x̄⁺],    (14)

where x̄⁺ and x̄⁻ are, respectively, the upper and lower bounds of the process variables. Indeed, a measurement x̄ is represented by an interval whose length is directly related to the measurement precision.
Considering now the model (13) and the measurements (14), it is desired to give an estimation x̂ of the flow rates. The proposed strategy consists of two steps. First, taking into account the model constraint (13), we reduce the number of variables to be estimated. Second, we choose to express the estimations of the true values with an interval representation.
First step: Data reduction
As the different equations of the model (represented by the rows of the matrix M) are independent, the matrix M has full row rank. Therefore the following partition always holds:

M = ( M_b  M_h ),    (15)

with M_b an n×n regular matrix. That allows us to express the model (13) as

x = A x_b ,  x_b ∈ R^{v-n},    (16a)

A = [ H ; I ] (H stacked above the (v-n)×(v-n) identity matrix I),    (16b)

H = -M_b^{-1} M_h .    (16c)

Summarizing, the process was initially described by v variables; however, the redundancy expressed by the model of the process allows us to reduce the number of variables to v - n. Therefore the reconciliation will be performed using only a subset of the variables.
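The data reduction step can be sketched numerically; here on the single-node network of Section 2.1, with NumPy standing in for the matrix algebra of Eqs. (15)-(16):

```python
import numpy as np

# Toy incidence matrix for the single-node network x1 - x2 - x3 = 0
M = np.array([[1.0, -1.0, -1.0]])
n, v = M.shape
Mb, Mh = M[:, :n], M[:, n:]        # partition (15); Mb must be regular
H = -np.linalg.solve(Mb, Mh)       # H = -Mb^{-1} Mh, Eq. (16c)
A = np.vstack([H, np.eye(v - n)])  # A = [H; I], Eq. (16b)
print(np.allclose(M @ A, 0))       # True: x = A x_b satisfies M x = 0 for any x_b
```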
Second step: Interval estimation
According to the previous decomposition, the state x has to be estimated taking into account the measurement intervals (14). As explained with the example of Section 2, the estimation is constrained to be inside a box described by an interval form:

x̂_b = x_c + x_r ∘ ε ,  x_c ∈ R^{v-n}, x_r ∈ R^{v-n}, ε ∈ R^{v-n},    (17a)

‖ε‖_∞ ≤ 1,    (17b)

where the operator ∘ performs an element-by-element product of two vectors. Indeed, the estimation x̂_b is chosen as a box whose center is x_c and whose width is 2 x_r. In definition (17), the variable ε allows us to consider all the values inside the box and, consequently, for the state estimation problem, to consider a set of admissible solutions. Gathering Eqs. (14), (16b), and (17) gives

x̄⁻ ≤ A ( x_c + x_r ∘ ε ) ≤ x̄⁺ ,    (18a)

‖ε‖_∞ ≤ 1.    (18b)

Then Eq. (18) has to be solved with respect to the center x_c and the radius x_r; moreover, we suggest maximizing the volume of the box characterized by the components x_r,i of x_r. Thus, using the partitioning of Eq. (16b), we have to solve the optimization problem:

max ∏_{i=1}^{v-n} x_r,i
subject to
A x_c - |A| x_r - x̄⁻ ≥ 0 ,    (19)
-A x_c - |A| x_r + x̄⁺ ≥ 0 ,

where |A| denotes the matrix of the absolute values of the entries of A. This classical problem may be solved by using constrained optimization techniques; see, for example, Ref. [30]. A simpler formulation may be adopted when the box is reduced to a cube of dimension x_r,i = x_r,0, i = 1, ..., v-n; the function to maximize in Eq. (19) is then equal to x_r,0, which leads to a convex problem. The results produced by that optimization are the estimates x_c of the center and x_r of the radius of the admissible set of solutions x̂_b = x_c + x_r ∘ ε, Eq. (17a). Thus, from definition (16a), the whole set of variables x̂ may be deduced.

5. Example
Fig. 3 shows the process graph of the system that has been simulated; it may easily represent a mass flow network in mineral flotation or a steam-metering system such as those used in Ref. [22], or, more generally, any system involving material transportation. This type of directed graph is very classically used for describing the interdependence of the flows and inventory data of a process expressed in terms of the material balances. The directions of its arcs are the same as those of the streams in the process flow sheet. The nodes in the process graph correspond to the units, tanks, and junctions in the process flow sheet [1]. There is a total of 15 arcs with eight unit nodes; all the flow rates are measured (otherwise, when measurements are missing, a preliminary decomposition and variable classification have to be performed). The method described in Section 4 is applied to this process graph with data obtained by simulation. First of all, we have to express the model of the process and its partitioned form (16). According to the process graph of Fig. 3, we deduce the incidence matrix M:

Fig. 3. Process graph.

In that matrix, each column is associated with a flow and each row with a node. Then, we separate the variables of the process according to the partitioning of Eq. (15):

x = ( x_h^T , x_b^T )^T ,
x_h = ( x_1, x_2, x_3, x_5, x_7, x_12, x_13, x_15 ),    (20)
x_b = ( x_4, x_6, x_8, x_9, x_10, x_11, x_14 ).

It is useful to note that, according to this reorganization of the x vector, we now have an equivalent incidence matrix, the columns of which correspond to the flows reordered as

x = ( x_1, x_2, x_3, x_5, x_7, x_12, x_13, x_15, x_4, x_6, x_8, x_9, x_10, x_11, x_14 ).    (21)

Thus the reader can verify the expression of A, Eq. (16c). For example, the first mass balance equation is now expressed as x_1 - x_8 - x_9 - x_10 = 0, which can be seen in Fig. 3 as the result of the aggregation of nodes 1, 2, 3, and 6. In the rest of the paper, all references to the incidence matrix M and the flow vector x agree with the new representations (20) and (21).


In Table 3, columns 2 and 3 contain the bounds x_inf and x_sup of the measurements, from which one can also evaluate the center x_c = (x_inf + x_sup)/2 and the radius x_r = (x_sup - x_inf)/2. Before data reconciliation, a preliminary test may be performed in order to appreciate the coherence of the measurements. For that, the mass balance residuals are computed according to the formula:

r = M x̄ .    (22)

As the measurements are expressed as intervals, the residuals are themselves intervals. Thus we have to compute, according to the interval arithmetic rules of Table 2, the lower and upper bounds of these residuals. The reader can verify the expressions of the interval residuals respectively computed from the measurements and from the estimations:

r = [ M x_c - |M| x_r , M x_c + |M| x_r ] ,    (23)

r̂ = [ M x̂_c - |M| x̂_r , M x̂_c + |M| x̂_r ] .    (24)

Based on definitions (23) and (24), Table 4 shows the bounds of the residuals computed both for the raw measurements (columns 2 and 3) and for the reconciled data (columns 4 and 5). The reader should note that all the raw interval residuals contain the zero value; that means that, according to the uncertainties affecting the measurements, the residuals may be zero and thus the data are consistent (no gross errors affect them). The reconciled data are presented in columns 4 and 5 of Table 3 (lower and upper bounds x̂_inf and x̂_sup, from which may be deduced the center x̂_c and the radius x̂_r), and thus a set of admissible solutions is given to the user. It is clear that the reconciled data are more coherent than the raw data, since the interval residuals have a significantly smaller radius. The last column of Table 3 indicates the estimates x̂ obtained with the classical least-squares approach. For that purpose, the measurements have been taken as the centers of the measurement intervals and the standard deviations as the radii of the intervals. As explained before, the comparison between the two estimates is somewhat hazardous, the least-squares approach giving one solution whereas the interval approach gives a set of admissible solutions; however, we can appreciate the proximity of the least-squares solution with respect to the bounds of the interval solution.

Table 3
Measurements and reconciled data (last column: least-squares estimates).

Flow   x_inf   x_sup   x̂_inf   x̂_sup   x̂ (LS)
1      42.8    50.8    44.67   45.67   46.83
2      6.2     12.2    8.2     9.2     9.25
3      23.8    29.8    25.27   26.27   26.81
4      40      48      42.73   43.73   43.93
5      28.2    38.2    32.04   33.04   33.16
6      4.3     8.3     6.26    7.26    6.34
7      8.7     12.7    10.2    11.2    10.76
8      16.2    24.2    18.9    19.9    20.01
9      8.1     12.1    9.51    10.51   10.10
10     14.8    18.8    15.26   16.26   16.70
11     4.0     8.0     4.74    5.74    6.09
12     8.5     12.5    10.02   11.02   10.61
13     11.0    15.0    12.0    13.0    13.01
14     1.7     4.7     2.25    3.25    3.18
15     18.1    28.1    21.65   22.65   23.20

Table 4
Residual bounds computed from the measured data (columns 2 and 3) and from the reconciled data (columns 4 and 5).

Unit   r_inf   r_sup   r̂_inf   r̂_sup
1      -12     12      -2.0    2.0
2      -17     17      -3.0    3.0
3      -7      7       -1.5    1.5
4      -11     11      -2.0    2.0
5      -12     12      -2.5    2.5
6      -6.0    6.0     -1.5    1.5
7      -7.5    7.5     -2.0    2.0
8      -10.5   10.5    -1.5    1.5
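For reference, the classical least-squares reconciliation evoked here is the projection of the measurements onto the balance space; a sketch (ours) on the toy network of Section 2.1, taking the interval centers as measurements and the radii as standard deviations, as the text suggests:

```python
import numpy as np

def ls_reconcile(M, x_meas, V):
    """Least-squares reconciliation: project x_meas onto M x = 0,
    weighting with the covariance matrix V."""
    G = V @ M.T @ np.linalg.inv(M @ V @ M.T)
    return x_meas - G @ (M @ x_meas)

M = np.array([[1.0, -1.0, -1.0]])     # toy balance x1 - x2 - x3 = 0
x = np.array([16.0, 13.0, 7.5])       # interval centers as measurements
V = np.diag([2.0, 1.0, 0.5]) ** 2     # interval radii as standard deviations
xh = ls_reconcile(M, x, V)
print(np.isclose(M @ xh, 0).all())    # True: the balance is exactly satisfied
```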
Remark: Let us comment on the problem of gross errors. As mentioned in the example of Section 2, they can be detected a priori through a residual analysis, as explained in previous works [9,22,20]. To illustrate that point, the same process has been used with the set of measurements given in Table 5, from which the interval residuals have been computed. As indicated in Table 6, one residual, the second, indicates the presence of abnormal measurements. Applying classical signature analysis [31,24,22], generalized to intervals [29], allows us to detect that measurement number 4 contains a gross error. Consequently, data reconciliation has to be performed after removing the effect of the

Table 5
Measurements and reconciled data.

Flow   x_inf   x_sup   x̂_inf   x̂_sup
1      42.9    50.9    46.3    46.7
2      9.1     11.1    9.9     10.3
3      24.9    28.9    25.5    25.9
4      54      56      43      43.4
5      31.1    35.1    32.3    32.7
6      5.2     7.2     6.6     7
7      9.9     11.9    10.5    10.9
8      18      22      20.6    21
9      8.2     12.2    9.7     10.1
10     14.7    18.7    15.6    16
11     4       6.0     4.9     5.3
12     8.7     12.7    10.5    10.9
13     11.2    15.2    11.6    12
14     3.0     5.0     3       4.4
15     21      25      23.8    24.2

fourth flow measurement. There are two ways to do that: either by reducing the model equations according to an observability analysis, considering that this flow is not measured [22], or by increasing the measurement interval of flow number 4 in order to suppress its influence. The second way is easy to implement; for the proposed example, the interval measurement was fixed to [0, 100]. The reconciliation procedure was applied again, and the estimates are gathered in columns 4 and 5 of Table 5.
Table 6
Residual bounds computed from the measurements of Table 5.

Unit   r_inf   r_sup
1      -10     10
2      3       21
3      -6      6
4      -7      7
5      -18     4
6      -6.0    4.0
7      -4.0    8.0
8      -6.0    4.0

6. Conclusion

The exposed technique represents an interesting alternative to the classical technique of data reconciliation using the principle of likelihood function maximization combined with the distribution of the measurement errors. It requires very few theoretical hypotheses for its implementation and is essentially based on semiempirical knowledge of the plausible confidence domains of the variables. We have shown that the classical problems of total material balance equilibration can be solved. Direct extensions, not reported in the paper, have been carried out; they mainly concern partial balances and balances with missing measurements, taking into account the presence of outliers.

In the future, various extensions of this work can be envisaged. The first concerns the development of a reconciliation method simultaneously using the precise knowledge (structurally exact balance equations), the imprecise knowledge (balances expressed by inequality constraints), the distribution functions of the errors when they are available, and the inequality constraints on the correction rate when the probability distribution functions are unknown. The second extension concerns the integration of fuzzy models or constraints, under the form of propositions (for instance: the flow rate of a given stream of the process is large) or under the form of rules composed of premises and consequences (for instance: if the flow rate of a given stream of the process is small, then the concentration of the corresponding flow is high). That would allow the whole available knowledge on a process to be used, with its respective weights. The third extension, for which to our knowledge there are no published works, concerns the characterization of the bounds of the measurements; it seems possible, using a set of measurements collected on an adequate time horizon, to estimate the bounds of the measurement errors and simultaneously reconcile the data.

References
1 Mah, R. S. H., Stanley, G. M., and Downing, D., Reconciliation and rectification of process flow and inventory data. Ind. Eng. Chem. Process Des. Dev. 15 1,
175183 1976.
2 Sood, M. K., Reklaitis, G. W., and Woods, J. M., Solution of material balances for flowsheets modeled
with elementary modules: The unconstrained case.
AIChE J. 25 2, 209219 1979.
3 Holly, W., Cook, R., and Crowe, C. M., Reconciliation
of mass flow rate measurements in a chemical extraction Plant. Can. J. Chem. Eng. 67, 595 601 1989.

Ragot, Maquin, Alhaj-Dibo / ISA Transactions 44 (2005) 2334

[4] Grauf, E., Jansky, J., and Langenstein, M., Reconciliation of process data on basis of closed mass and energy balances in nuclear power plants. SERA, Vol. 9, Safety Engineering and Risk Analysis, pp. 23-40, 1999.
[5] Harikumar, P. and Narasimhan, S., A method to incorporate bounds in data reconciliation and gross error detection. Part 2: Gross error detection strategies. Comput. Chem. Eng. 17(11), 1121-1128, 1993.
[6] Adrot, O., Maquin, D., and Ragot, J., Fault detection with model parameter structured uncertainties. European Control Conference, ECC'99, Karlsruhe, Germany, August 31-September 3, 1999.
[7] Maquin, D., Adrot, O., and Ragot, J., Data reconciliation with uncertain models. ISA Trans. 39, 35-45, 2000.
[8] Maquin, D., Bloch, G., and Ragot, J., Data reconciliation for measurements. Eur. J. Diagn. Saf. Autom. 1(2), 145-181, 1991.
[9] Narasimhan, S. and Mah, R. S. H., Treatment of general steady state process models in gross error identification. Comput. Chem. Eng. 13(7), 851-853, 1989.
[10] Gertler, J. and Singer, D., Augmented models for statistical fault isolation in complex dynamic systems. IEEE American Control Conference, pp. 317-322, Boston, MA, 1985.
[11] Madron, F., Process Plant Performance: Measurement and Data Processing for Optimization and Retrofits. Ellis Horwood, London, 1992.
[12] Vaclavek, V., Studies on system engineering III. Optimal choice of balance measurements in complicated chemical systems. Chem. Eng. Sci. 24, 947-955, 1969.
[13] Jefferson, T. R., An entropy approach to material balancing in mineral processing circuits. Int. J. Min. Process. 18, 251-261, 1986.
[14] Brown, D., Marechal, F., Heyen, G., and Paris, J., Application of data reconciliation to the simulation of system closure options in a paper deinking process. European Symposium on Computer Aided Process Engineering-13, edited by A. Kraslawski and I. Turunen. Elsevier, New York, 2003, Vol. 14, pp. 1001-1006.
[15] Hodouin, D. and Flament, F., New developments in material balance calculations for mineral processing industry. Society of Mining Engineers Annual Meeting, Las Vegas, February 27-March 2, 1989.
[16] Hodouin, D., Mirabedini, A., Makni, S., and Bazin, C., Reconciliation of mineral processing data containing correlated measurement errors. Int. J. Min. Process. 54(3-4), 201-215, 1998.
[17] Sunde, S., Berg, O., Dahlberg, L., and Fridqvist, N. O., Data reconciliation in the steam-turbine cycle of a boiling water reactor. Nucl. Technol. 143(2), 103-124, 2003.
[18] Schraa, O. and Crowe, C. M., The numerical solution of bilinear data reconciliation problems using unconstrained optimization methods. Comput. Chem. Eng. 22, 1215-1228, 1998.
[19] Dovi, V. G. and Del Borghi, A., Reconciliation of process flow rates when measurements are subject to detection limits: The bilinear case. Ind. Eng. Chem. Res. 38, 2861-2866, 1999.
[20] Romagnoli, J. and Sanchez, M., Data Processing and Reconciliation for Chemical Process Operations. Academic, New York, 2000.
[21] Bagajewicz, M. J., Process Plant Instrumentation: Design and Upgrade. Technomic, Lancaster, PA, 2000.
[22] Narasimhan, S. and Jordache, C., Data Reconciliation and Gross Error Detection. Gulf, Houston, TX, 2000.
[23] Maquin, D., Narasimhan, S., and Ragot, J., Data validation with unknown variance matrix. 9th European Symposium on Computer Aided Process Engineering, Budapest, Hungary, May 31-June 2, 1999.
[24] Crowe, C. M., Data reconciliation: Progress and challenges. J. Process Control 6(2-3), 89-98, 1996.
[25] Ozyurt, D. B. and Pike, R. W., Theory and practice of simultaneous data reconciliation and gross error detection for chemical processes. Comput. Chem. Eng. 28(1), 381-402, 2004.
[26] Moore, R. E., Methods and Applications of Interval Analysis. SIAM, Philadelphia, 1979.
[27] Himmelblau, D. M., Material balance rectification via interval arithmetic. Process Systems Engineering, PSE'85: The Use of Computers in Chemical Engineering, IChemE Symposium Series, Vol. 92, pp. 121-133, 1985.
[28] Mandel, D., Abdollahzadeh, A., Maquin, D., and Ragot, J., Data reconciliation by inequality balance equilibration. Int. J. Min. Process. 53, 157-169, 1998.
[29] Ragot, J., Maquin, D., and Adrot, O., LMI approach for data reconciliation. 38th Conference of Metallurgists, Symposium Optimization and Control in Minerals, Metals and Materials Processing, Quebec, Canada, August 22-26, 1999.
[30] Bonnans, J. F., Gilbert, J. Ch., Lemaréchal, C., and Sagastizábal, C. A., Numerical Optimization: Theoretical and Practical Aspects. Springer, New York, 2002.
[31] Maquin, D. and Ragot, J., Comparison of gross errors detection methods in process data. 30th IEEE Conference on Decision and Control, Brighton, December 1991, pp. 2253-2261.

Jose Ragot was born in Nancy, France, on April 28, 1947. He received a doctoral degree (Ph.D.) in electrical engineering in November 1973 and obtained his Doctorat-ès-Sciences in November 1980. He is now with the Automatic Control Research Center of Nancy (UMR CNRS 7039) and has been a professor of electrical engineering at an engineering school in geology (ENSG) since 1985. His fields of interest include modeling and identification, data reconciliation and process diagnosis, and, more generally, any method that increases the dependability of industrial processes.


Didier Maquin was born in Nancy, France, on November 22, 1959. He received a doctoral degree (Ph.D.) in electrical engineering in November 1987 and obtained his Habilitation à Diriger des Recherches in November 1997. He is now with the Automatic Control Research Center of Nancy (UMR CNRS 7039) and is a professor of electrical engineering at an engineering school in mechanical and electrical engineering (ENSEM). His fields of interest include data reconciliation and process diagnosis and, more generally, any method that increases the dependability of industrial processes.

Moustapha Alhaj Dibo was born in Talhadia, Syria, in January 1970. He received a diploma in electronic engineering from Aleppo University, Syria, in 1994. From 1994 to 1995 he worked with the General Society of Electrical Power Generation in Syria, and from 1996 to 1999 he was an assistant in the Department of Automatic Control and Industrial Electronics at Aleppo University. He received a master's degree in control, signals, and communication from the National Polytechnic Institute of Lorraine (INPL), France, in 2001. He is currently preparing a Ph.D. in control and signal processing at INPL.