

Bootstrap Control
M. Aronsson, L. Arvastson, Member, IEEE, J. Holst, Member, IEEE, B. Lindoff, Member, IEEE, and A. Svensson

Abstract—In this paper, we present a new way to control linear stochastic systems. The method is based on statistical bootstrap techniques. The optimal future control signal is derived in such a way that unknown noise distribution and uncertainties in parameter estimates are taken into account. This is achieved by resampling from existing data when calculating statistical distributions of future process values. The bootstrap algorithm takes care of arbitrary loss functions and unknown noise distribution even for small estimation sets. The efficient way of utilizing data implies that the method is also well suited for slowly time-varying stochastic systems.

Index Terms—Generalized predictive control, optimal control, quality control, resampling, statistical bootstrap techniques, statistical process control, stochastic control.
Manuscript received September 28, 1999; revised September 12, 2000 and May 26, 2005. Recommended by Associate Editor J. Spall.
M. Aronsson is with Occam Associates AB, SE-111 43 Stockholm, Sweden.
L. Arvastson is with SimCorp A/S, DK-2100 Copenhagen, Denmark.
J. Holst is with the Division of Mathematical Statistics, Centre for Mathematical Sciences, Lund University, Lund, Sweden.
B. Lindoff is with Ericsson Mobile Platforms AB, Research & Strategic Technology, Lund, Sweden.
A. Svensson is with Saab Bofors Dynamics AB, Linköping, Sweden.
Digital Object Identifier 10.1109/TAC.2005.861722
I. BACKGROUND

OPTIMAL control of unknown or time-varying systems is in general a difficult task, which simultaneously must take into account the character of the unknown time-variation of the parameters and the fulfillment of the control action, e.g., optimization of some loss function. The optimal controller thus has a dual objective.

The general dual control problem, in case the problem is formulated in a stochastic setting, has an optimal but computationally not achievable solution, originally presented by Fel'dbaum (1960–1961) [4]. A number of approximations to that control principle are given, cf. [1] for a general treatment and [2] and [3] for more recent results.

A commonly used approximative technique is to use open loop optimal feedback (OLOF) controllers, i.e., feedback based only on the information available at present time. A common controller in this class is the generalized predictive controller (GPC), which was presented in [5] and which treated time-invariant systems described by ARIMAX models. Extensions of this basic algorithm in order to be able to handle time-variations are normally based on the assumption that the variations are such that it is possible to extend the control algorithm with an adaptive parameter estimation procedure and then use the obtained estimates during the whole prediction horizon. This procedure is reasonable when the underlying time-variations are slow. In general, however, as pointed out in [6], it is more to the point to extend the basic GPC in order to explicitly handle the time-variations, getting better predictions and then hopefully also improved control. This approach is taken in [6], where the nature of the time-variations is assumed to be known. The algorithm of Palsson et al. is extended in [7]. The time-varying parameters are, however, in most cases not known and have to be viewed as stochastic processes.

Most results are based on the control signal being derived in such a way that it minimizes the distance between the reference trajectory and the output signal. Furthermore, it is commonly assumed that the stochastic parts of the system are Gaussian random processes. This leads to explicit expressions for the control signal, since one only has to compute the first and second statistical moments, i.e., the mean and variance, of the future process values. The controller is in practice used in more general situations by assuming that the assumptions above are a good enough description of the real system. However, instead of using the Gaussian assumption about the stochastic part, it should be more to the point to make an optimal use of the information obtained by observing the system.

In this paper, we present a new way to control linear stochastic systems which is based on statistical resampling techniques. The optimal future control signal is derived such that unknown noise distribution and uncertainties in parameter estimates are taken into account. This is achieved by deriving estimates of future process distributions using the statistical bootstrap. Thus, the method is not based on any assumptions about the noise distribution. Another key benefit of bootstrap control is its ability to handle any loss function, since the bootstrap technique gives the distributions of future process values, which is what is needed to integrate a general loss function. The method was developed during studies of predictive control in energy production systems, where loss functions other than the minimum variance of Gaussian distributions sometimes are essential.

In Section II, a survey of the statistical bootstrap technique is presented and in Section III the control problem is defined. In Section IV, we present the bootstrap control algorithm and in Section V simulation results are shown. Finally, in Section VI, we summarize and give some directions for future research.
II. BOOTSTRAP IDEA

This section gives a short survey of the bootstrap technique, initially introduced in [8]. First, the general bootstrap ideas are presented. Thereafter, extensions relevant to bootstrap control are discussed, i.e., bootstrap in regression and autoregressive models, and the use of bootstrap when computing prediction intervals.

Bootstrap is useful also when the estimation set is small, so that asymptotic results are not applicable, or when dealing with unknown distributions. Let $x_1, \ldots, x_n$ be independent identically distributed (i.i.d.) observations of a stochastic variable $X$ with unknown distribution $F$. A statistic $\theta = \theta(F)$ is desired, and an estimator $\hat{\theta} = s(x_1, \ldots, x_n)$ is available. We want to know the accuracy of this estimator, e.g., a confidence interval for $\theta$.

If the distribution $F$ was known, the problem would be easy to solve: From $F$ an arbitrary number $M$ of new "observation vectors" could be drawn. Putting these into the estimator $s(\cdot)$ would give $M$ observations of the statistic $\hat{\theta}$, from which it would be easy to calculate, e.g., an empirical confidence interval for $\theta$ as the $\alpha/2$ and $1-\alpha/2$ percentiles of the empirical distribution

$\hat{G}(t) = \#\{\hat{\theta}_j \le t\}/M.$   (1)

However, since the distribution $F$ is unknown, such observation vectors cannot be drawn exactly. But as the empirical distribution

$\hat{F}(t) = \#\{x_i \le t\}/n$   (2)

is an approximation of $F$, bootstrap replicates $(x_1^*, \ldots, x_n^*)$ drawn from $\hat{F}$ will be approximations of new observation vectors from $F$. Putting the bootstrap resamples into the function $s(\cdot)$ gives bootstrap estimates $\hat{\theta}_1^*, \ldots, \hat{\theta}_M^*$. Empirical confidence intervals, and other statistics of interest, can be computed from the empirical distribution of $\hat{\theta}_1^*, \ldots, \hat{\theta}_M^*$. This handwaving argumentation is also made theoretically rigorous in [8] and [9].
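As a minimal illustration of this resampling idea, the following MATLAB sketch computes an empirical percentile confidence interval for a simple statistic (here the sample mean); the data, the number of replicates, and the variable names are illustrative only.

x     = randn(25, 1);            % small sample from an "unknown" distribution
n     = numel(x);
M     = 2000;                    % number of bootstrap replicates
alpha = 0.05;

thetaStar = zeros(M, 1);
for b = 1:M
    idx          = randi(n, n, 1);   % draw n indices with replacement
    thetaStar(b) = mean(x(idx));     % statistic evaluated on the resample
end

thetaSorted = sort(thetaStar);
ciLow  = thetaSorted(ceil((alpha/2)   * M));   % alpha/2 percentile
ciHigh = thetaSorted(ceil((1-alpha/2) * M));   % 1 - alpha/2 percentile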
In [10], it was shown how to use bootstrap when dealing with regression models

$Y = X\beta + e$   (3)

where $X$ is now an $n \times p$ matrix, $\beta$ is the $p \times 1$ parameter vector, and $e$ is an $n \times 1$ vector of i.i.d. noise with zero mean and finite variance $\sigma^2$. The least squares estimate of $\beta$ is given by

$\hat{\beta} = (X^T X)^{-1} X^T Y.$   (4)

Bootstrap replicates $e^*$ are drawn from the residuals

$\hat{e} = Y - X\hat{\beta}$   (5)

(if there is no intercept column in $X$, the mean should be subtracted from the residuals before drawing the replicates). Now, bootstrap replicates of $Y$ are constructed as

$Y^* = X\hat{\beta} + e^*$   (6)

and from them bootstrap replicates of $\hat{\beta}$ are computed as

$\hat{\beta}^* = (X^T X)^{-1} X^T Y^*.$   (7)

It turns out that, provided $n$ is large relative to the number of parameters (see [10]), the distribution of $\hat{\beta}^* - \hat{\beta}$ (the empirical distribution can be computed arbitrarily well just by increasing the number of replicates) is a good approximation of the distribution of $\hat{\beta} - \beta$. Using this property, accuracy measurements of $\hat{\beta}$ or $X\hat{\beta}$ can easily be computed.
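A minimal MATLAB sketch of the residual bootstrap (3)-(7) for a simple linear regression follows; the simulated data and variable names are illustrative.

n = 40; p = 2;
X = [ones(n,1), (1:n)'];          % design matrix with an intercept column
Y = X*[1; 0.5] + randn(n,1);      % synthetic observations

betaHat = X \ Y;                  % least squares estimate, cf. (4)
res     = Y - X*betaHat;          % residuals, cf. (5)

M = 1000;
betaStar = zeros(p, M);
for b = 1:M
    eStar         = res(randi(n, n, 1));      % resample residuals with replacement
    Ystar         = X*betaHat + eStar;        % bootstrap replicate of Y, cf. (6)
    betaStar(:,b) = X \ Ystar;                % bootstrap replicate of betaHat, cf. (7)
end

betaStd = std(betaStar, 0, 2);    % spread of betaStar approximates that of betaHat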

The next step, extending bootstrap techniques to autoregressive time series models, has been discussed by several authors. The simplest way of producing bootstrap replicates of an AR($p$) process

$A(q^{-1})\, y_t = e_t$   (8)

(where $A(q^{-1})$ is a monic polynomial, $q^{-1}$ is the backward shift operator, i.e., $q^{-1} y_t = y_{t-1}$, and $e_t$ is an i.i.d. noise sequence with zero mean and finite variance) was discussed in [11]. Using a standard, e.g., least squares, estimation procedure on the available data gives an estimate $\hat{A}(q^{-1})$ of $A(q^{-1})$. Bootstrap resamples $e_t^*$ are drawn from the residuals $\hat{e}_t = \hat{A}(q^{-1}) y_t$. Then bootstrap resamples of the process, $y_t^*$, are created from

$\hat{A}(q^{-1})\, y_t^* = e_t^*.$   (9)

Other authors (e.g., [12]) choose zeros as initial values instead. However, both of those methods have the drawback of conditioning the bootstrap replicates on initial values (true initial values or zeros as initial values). This does not influence the asymptotical results, but as bootstrap is also useful when the available amount of data is small, i.e., when asymptotic results are not necessarily good approximations, the error from conditioning on initial values might cause inaccuracy.
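A minimal MATLAB sketch of this simple residual-based AR bootstrap, conditioning each replicate on the observed initial values as in (8)-(9); the synthetic AR(2) data and all names are illustrative.

N = 100; p = 2;
y = filter(1, [1 -0.6 0.3], randn(N,1));          % synthetic AR(2) data

Phi  = [-y(2:N-1), -y(1:N-2)];                    % regressors [-y(t-1), -y(t-2)]
aHat = Phi \ y(3:N);                              % least squares estimate of a1, a2
res  = y(3:N) - Phi*aHat;                         % residuals
res  = res - mean(res);                           % center the residuals

ystar      = zeros(N,1);
ystar(1:p) = y(1:p);                              % condition on the observed initial values
for t = p+1:N
    eStar    = res(randi(numel(res)));            % draw one resampled residual
    ystar(t) = -aHat(1)*ystar(t-1) - aHat(2)*ystar(t-2) + eStar;
end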
Usually $A(q^{-1})$ is estimated by minimizing the prediction error, which leads to too small residuals for prediction purposes. This was discussed in [13], and rescaling of the residuals with a factor depending on the number of observations and the number of estimated parameters was suggested. The rescaled residuals are denoted $\tilde{e}_t$.

Progress in bootstrapping AR processes was made in [14]. A prediction in an AR($p$) model is the conditional expectation given the last $p$ values, and in order to obtain this in a bootstrap manner the concept of backcasting (see [15]) and backward residuals may be considered. The backward representation of an AR process is

$A(q)\, y_t = w_t$   (10)

where $q$ is the forward shift operator, i.e., $q\, y_t = y_{t+1}$, and $w_t$ is the backward noise. As before, an estimate $\hat{A}$ of $A$ is obtained using standard methods, and from the backward representation (10) backward predictions

$\hat{y}_t^{\,b} = \big(1 - \hat{A}(q)\big)\, y_t$   (11)

(index $b$ denoting backward) can be calculated, as well as the backward residuals

$\hat{w}_t = y_t - \hat{y}_t^{\,b}.$   (12)

From the backward residuals, resamples $w_t^*$ can be drawn.

However, drawing bootstrap resamples from the backward residuals in this way demands that they are i.i.d. As explained in [15], this is not very often the case and therefore the backward method must be altered. Instead of using (11)-(12), resamples are drawn from the forward residuals $\tilde{e}_t$, and backward residual resamples are created using the relation

$w_t^* = \big(\hat{A}(q)/\hat{A}(q^{-1})\big)\, e_t^*.$   (13)

However, using (13) for generation of backward residual resamples requires initial values. As we deal with resampled sequences this is not a problem: A prolonged sequence of forward residual resamples can be drawn, where the prolongation can be arbitrarily large. From the backward residual resamples, resampled paths are created in the intuitive way using the backward representation (10). From these paths bootstrap replicates $\hat{A}^*$ are estimated, and using these replicate polynomials, replicates of the future values are generated from

$\hat{A}^*(q^{-1})\, y_{t+k}^* = e_{t+k}^*$   (14)

where the noise terms $e_{t+k}^*$ are drawn from the rescaled residuals. Hence, using this technique the problem of conditioning on initial values is gone.

There are also other possible ways of solving the problem with initial values. Reference [16] suggests constructing the bootstrap replicates by selecting blocks, with replacement, from the blocks of observed data. Another possibility, described in [13], is to use a block of observed data as initial values. For every bootstrap replicate a new starting block is chosen at random from the observed data, and then the replicate paths are generated using replicates drawn from the rescaled forward residuals $\tilde{e}_t$.

From the resampled future values, prediction intervals with approximate confidence $1-\alpha$ are easily calculated as the $\alpha/2$ and $1-\alpha/2$ percentiles of the distribution of $y_{t+k}^*$. This method for prediction intervals is further described in [17] and [18]. When empirically comparing this method with standard Gaussian prediction intervals, the largest improvement is obtained when working with non-Gaussian noise, just as expected. But even when using Gaussian noise (with prediction horizon larger than one) this method gives more accurate intervals than the Gaussian method; this is (as noted in [18]) because the predictions are not linear functions of Gaussian variables, but involve products of Gaussian variables (estimated parameters and observations).
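In practice such intervals are obtained directly from the matrix of resampled future values, as in the following MATLAB sketch; the matrix below is a placeholder standing in for trajectories generated as in (14).

Ystar = randn(1000, 5);                      % placeholder resampled future values (one row per replicate)
M     = size(Ystar, 1);
alpha = 0.05;

Ysorted = sort(Ystar, 1);                    % sort each lead time separately
lower   = Ysorted(ceil((alpha/2)   * M), :); % alpha/2 percentile per horizon
upper   = Ysorted(ceil((1-alpha/2) * M), :); % 1 - alpha/2 percentile per horizon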
If the method is to be interesting for control purposes it must be possible to include external signals, i.e., extending the AR model to an ARX model. An intuitive way of doing this was empirically studied in [12], and will be further discussed and developed in Section IV.

III. PROBLEM FORMULATION

Assume that the system to be controlled is an ARX process, i.e.,

$A(q^{-1})\, y_t = B(q^{-1})\, u_t + e_t$   (15)

where the polynomials

$A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_{n_a} q^{-n_a}$
$B(q^{-1}) = b_1 q^{-1} + \cdots + b_{n_b} q^{-n_b}$

are unknown, possibly time-varying, but with known model order. Further, $e_t$ is a sequence of independent identically distributed random variables with zero mean and finite variance. The distribution of $e_t$ is assumed to be unknown and its density function will be denoted $f$.

In order to control the system (15) one has to estimate the unknown parameter vector. The standard recursive least squares (RLS) parameter estimation algorithm or windowed offline least squares (WLS) can be used, see [19].
The control strategy at time $t$ aims at choosing the future control signals according to

$(u_t, \ldots, u_{t+T-1}) = \arg\min_{u} E\Big[\sum_{k=1}^{T} g\big(y_{t+k} - r_{t+k}\big) \,\Big|\, \mathcal{F}_t\Big]$   (16)

where $T$ is the time horizon and $\{r_t\}$ is the desired reference trajectory. Further, $\mathcal{F}_t$ is the generic $\sigma$-algebra at time $t$, i.e., the information gathered up to time $t$. The function $g$ can be of any kind, for instance a standard $L_2$ or $L_1$ criterion

$g(e) = e^2$   (17)

$g(e) = |e|$   (18)

or of band-limiting type, i.e.,

$g(e) = I\big(|e| > c\big)$   (19)

where $I$ is an indicator function and $c$ a given limit. Equation (19) means that the expected loss equals the probability that the process deviates from the reference trajectory by more than the limit $c$.
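The criteria (17)-(19) are easily expressed in MATLAB as functions of the prediction error; for the band-limiting loss, averaging over replicated prediction errors gives exactly the probability interpretation above. The limit and the placeholder errors below are illustrative.

c   = 10;
gL2 = @(e) e.^2;                 % quadratic (L2) criterion, cf. (17)
gL1 = @(e) abs(e);               % absolute value (L1) criterion, cf. (18)
gBL = @(e) abs(e) > c;           % band-limiting (indicator) criterion, cf. (19)

eStar = 4*randn(1000, 1);        % placeholder replicated prediction errors
probOutside = mean(gBL(eStar));  % estimate of the probability of leaving the band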


From a statistical point of view one can, for some functions $g$, interpret (16) as a minimization of a function of prediction errors with respect to $u$. Hence, the name adaptive predictive control is commonly used for this kind of controller; see, e.g., [5] for discussions about adaptive predictive control.

In order to compute the loss function (16) we have to rewrite the $k$-step prediction of $y_{t+k}$ at time $t$. One can show, see, e.g., [1], that it is possible to find polynomials $F_k(q^{-1})$ of degree $k-1$ and $G_k(q^{-1})$ of degree $n_a - 1$ that fulfill the Diophantine equation

$1 = F_k(q^{-1})\, A(q^{-1}) + q^{-k}\, G_k(q^{-1}).$   (20)

If the polynomials $A(q^{-1})$ and $B(q^{-1})$ are time-invariant, one can use (15) and (20) to obtain the decomposition

$y_{t+k} = G_k(q^{-1})\, y_t + F_k(q^{-1}) B(q^{-1})\, u_{t+k} + F_k(q^{-1})\, e_{t+k}.$   (21)

Equation (21) consists of three different parts:

1) known values of the output signal, i.e., $G_k(q^{-1})\, y_t$;
2) future and present control signals, $F_k(q^{-1}) B(q^{-1})\, u_{t+k}$;
3) future noise, $F_k(q^{-1})\, e_{t+k}$.
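Assuming the form of (20) given above, the polynomials $F_k$ and $G_k$ can be obtained by ordinary polynomial long division; the following MATLAB sketch illustrates this for an illustrative second-order $A$-polynomial and horizon $k = 3$.

A  = [1, -1.2, 0.5];              % coefficients of A(q^-1), ascending powers of q^-1
na = numel(A) - 1;
k  = 3;                           % prediction horizon

[Fk, R] = deconv([1, zeros(1, k + na - 1)], A);   % long division of 1 by A(q^-1)
Gk = R(end - na + 1:end);                         % last na coefficients form G_k(q^-1)

% Verification: conv(Fk, A) + [zeros(1,k), Gk] equals [1, 0, ..., 0], i.e., (20) holds.
check = conv(Fk, A) + [zeros(1, k), Gk];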
A common technique to handle time-varying systems is to assume that the time-variations are small enough to be treated as constants during a certain time interval. We hereafter use this assumption, which implies that (21) can be used also for slowly time-varying systems. Hence, the decomposition will be used in the bootstrap algorithm in the next section.

IV. BOOTSTRAP CONTROL ALGORITHM

Suppose that the controlling phase is started at time $t$, when data are available for the latest $N$ time units. As described in [12], the past control signals are kept fixed, while bootstrap is applied to the stochastic part of the process.

From the available data, parameter estimates $\hat{A}$ and $\hat{B}$ can be obtained using a standard least squares technique (see [19]), and from these estimates the polynomials $\hat{F}_k$ and $\hat{G}_k$ can be calculated according to (20). Using (21), $y_{t+k}$ can be decomposed into two parts, $y_{t+k} = m_{t+k} + s_{t+k}$, where

$m_{t+k} = \hat{G}_k(q^{-1})\, y_t + \hat{F}_k(q^{-1}) \hat{B}(q^{-1})\, u_{t+k}$   (22)

$s_{t+k} = \hat{F}_k(q^{-1})\, e_{t+k}.$   (23)
As described in Section II (and as will be elaborated in Step 3), bootstrap replicates of the process are generated, and from them replicates $\hat{A}^*$ and $\hat{B}^*$ of the polynomials are generated, together with corresponding $\hat{F}_k^*$ and $\hat{G}_k^*$. Using these, future values $s_{t+k}^*$, $k = 1, \ldots, T$ (where $T$ is the largest prediction horizon of interest), are calculated from (23) (using noise replicates drawn from the rescaled residuals).

The terms $m_{t+k}$, $k = 1, \ldots, T$, are available as functions of the $T$-dimensional vector $\mathbf{u} = (u_t, \ldots, u_{t+T-1})$. Adding $s_{t+k}^*$ gives bootstrap replicates $y_{t+k}^*$ as functions of $\mathbf{u}$. Including the bootstrap replicates, the loss functions of $L_2$ and band-limiting types become

$V_2(\mathbf{u}) = \frac{1}{M} \sum_{i=1}^{M} \sum_{k=1}^{T} \big(y_{t+k}^{*,i}(\mathbf{u}) - r_{t+k}\big)^2$   (24)

$V_{bl}(\mathbf{u}) = \frac{1}{M} \sum_{i=1}^{M} \sum_{k=1}^{T} I\big(|y_{t+k}^{*,i}(\mathbf{u}) - r_{t+k}| > c\big)$   (25)

respectively. Index $i$ denotes the $i$th out of $M$ bootstrap replicates. Now the control signal can be determined by (multidimensional) optimization as in (16).
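Given a matrix of replicated future values for one candidate control sequence, the empirical losses (24) and (25) are simple averages over the replicates, as in the following MATLAB sketch (all quantities below are placeholders).

Ystar = randn(1000, 5);                    % replicated y*(t+k), k = 1..T, for one fixed u
r     = zeros(1, 5);                       % reference trajectory
c     = 10;                                % band limit in (25)

E   = Ystar - repmat(r, size(Ystar,1), 1); % prediction errors for every replicate
V2  = mean(sum(E.^2, 2));                  % empirical L2 loss, cf. (24)
Vbl = mean(sum(abs(E) > c, 2));            % empirical band-limiting loss, cf. (25)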
The advantage of this method is that any loss function can be used, no matter how complicated it is. As long as sufficiently efficient optimization methods are available, the horizon can be arbitrarily large (i.e., $\mathbf{u}$ can have arbitrarily many dimensions), and the loss function can have different weights for different values of $k$. It is also possible to use only parts of the resulting vector (e.g., $u_t$) for the control signal, and not decide about the others (in case $T > 1$) until the following time steps.

As the polynomials are estimated for every time step (using information from the latest $N$ time steps only), the true parameters can be time-varying. Furthermore, as discussed in [18], bootstrapping the polynomials may improve the results when dealing with non-Gaussian noise, even if the parameters are not time-varying.

Many versions of the algorithm are possible; apart from choosing the loss function and the dimension and weights of $\mathbf{u}$, different backward and forward resampling methods can be chosen, cf. Section II, and a forgetting factor can be included. For implementation purposes, the algorithm will be described stepwise; the extension to the general case is straightforward, although different numerical problems might occur.
Step 1: The available data at time $t$, i.e., the latest $N$ values of $y$ and $u$, are used for estimation of the unknown parameters of the $A$-polynomial and the unknown parameters of the $B$-polynomial in the regression model

$y_s = \varphi_s^{T} \theta + e_s$   (26)

with

$\varphi_s = \big(-y_{s-1}, \ldots, -y_{s-n_a},\; u_{s-1}, \ldots, u_{s-n_b}\big)^{T}.$   (27)

As is always the case when estimating the parameters in every time step, too good control gives too little information for the parameter estimation. This may result in a nonstable estimated process. As the backcasting technique is valid only for stable processes, it will result in erroneous resamples and computational problems if it is applied to nonstable processes. This is solved by checking for stability of the estimated polynomials; if the estimated model turns out to be nonstable, the estimated parameters from the previous step are used, both in calculating residuals and in resampling. The stability of the closed-loop system has to be studied separately.
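A minimal MATLAB sketch of Step 1 for an illustrative ARX(2,1) model: the regression (26)-(27) is solved by least squares and the estimate is discarded in favor of the previous one if the estimated $A$-polynomial is unstable. The synthetic data and all names are illustrative.

N = 50;
u = randn(N,1);
y = filter([0 1], [1 -0.6 0.3], u) + filter(1, [1 -0.6 0.3], randn(N,1));

na = 2; nb = 1;
thetaPrev = zeros(na + nb, 1);               % estimate from the previous time step (placeholder)

Phi = [-y(2:N-1), -y(1:N-2), u(2:N-1)];      % regressors [-y(t-1) -y(t-2) u(t-1)], cf. (27)
thetaHat = Phi \ y(3:N);                     % least squares estimate [a1; a2; b1]

Ahat = [1, thetaHat(1:na)'];
if any(abs(roots(Ahat)) >= 1)                % reject unstable A(q^-1) estimates
    thetaHat = thetaPrev;                    % fall back on the previous estimate
end
res = y(3:N) - Phi*thetaHat;                 % residuals used in the later bootstrap steps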
Step 2: The residuals from the regression will underestimate the original noise sequence, but if they are centered (i.e., the mean is subtracted) and scaled with a factor that accounts for the number of time steps bootstrapped and the number of parameters estimated, the result will be consistent. This is the rescaling from [13] discussed in Section II, adapted for ARX processes. The rescaled residuals are denoted $\tilde{e}_s$.

Step 3: Using the rescaled residuals and the backcasting method, new forward error sequences $e_s^*$ are resampled from $\tilde{e}$ and backward error sequences are created using

$w_s^* = \big(\hat{A}(q)/\hat{A}(q^{-1})\big)\, e_s^*.$   (28)

The backward error sequences are used to obtain sequences of the stochastic part of the process through

$\hat{A}(q)\, z_s^* = w_s^*$   (29)

with the last observed values of the stochastic part as initial values. The resampled process sequences are obtained as

$y_s^* = z_s^* + \eta_s$   (30)

where $\eta_s$ is the contribution due to the external signal.

Fig. 1. (Top) Uncontrolled AR(2) process. (Bottom) Value of the time-varying parameter. The other parameters are constant.

Step 4: The resampled process sequences are used to give resampled estimates of the parameters of the $A$- and $B$-polynomials.

Step 5: Future values are sampled by forward simulation, with the resampled $A$- and $B$-polynomials and residuals drawn from $\tilde{e}$, a sufficiently large number of times, i.e.,

$\hat{A}^*(q^{-1})\, y_{t+k}^* = \hat{B}^*(q^{-1})\, u_{t+k} + e_{t+k}^*, \quad k = 1, \ldots, T$   (31)

where the optimal $u$ is still unknown.

Step 6: The simulated future values are used for optimizing the specified loss function, and thus getting the optimal $u$ for control.
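To fix ideas, the following MATLAB sketch runs one time step of a heavily simplified version of the algorithm for an ARX(1,1) model with horizon $T = 1$ and a quadratic loss: it performs the least squares estimation of Step 1 and the forward simulation and optimization of Steps 5-6 over a grid of candidate control values, while the backcasting and parameter-replication Steps 2-4 are omitted. All data, parameter values, and names are illustrative.

N = 50; r = 0;                                    % data length and reference value
u = randn(N,1);
y = filter([0 0.8], [1 -0.7], u) + filter(1, [1 -0.7], 0.5*randn(N,1));

% Step 1 (simplified): least squares fit of y(t) = -a1*y(t-1) + b1*u(t-1) + e(t)
Phi      = [-y(1:N-1), u(1:N-1)];
thetaHat = Phi \ y(2:N);                          % [a1; b1]
res      = y(2:N) - Phi*thetaHat;
res      = res - mean(res);                       % centered residuals

% Steps 5-6 (simplified): for each candidate u(t), simulate replicates of
% y(t+1) and evaluate the empirical quadratic loss, cf. (24); pick the minimizer.
M     = 1000;
uGrid = linspace(-5, 5, 101);
loss  = zeros(size(uGrid));
for j = 1:numel(uGrid)
    eStar   = res(randi(numel(res), M, 1));                     % resampled noise
    yNext   = -thetaHat(1)*y(N) + thetaHat(2)*uGrid(j) + eStar; % replicates of y(t+1)
    loss(j) = mean((yNext - r).^2);
end
[~, jOpt] = min(loss);
uOpt = uGrid(jOpt);                               % control signal applied at time t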
V. SIMULATIONS

A number of simulations have been performed in order to check the performance of the bootstrap-based control system. The process in the presented simulations is a second-order ARX process driven by zero-mean Gaussian noise. The bootstrap control algorithm can handle varying process parameters, so one of the autoregressive parameters was chosen to vary as shown in Fig. 1; the rest of the parameters were kept constant. The same sequence of process values was used in all the simulations in order to make the results comparable. Also, the same random number sequence was used. The uncontrolled process is shown in Fig. 1.

The control problem that is the origin for the simulations is a problem that could appear in quality control. There is a process that needs to be controlled, and the output is acceptable if the process value is kept within a certain interval, here $[-10, 10]$, during at least 90 percent of the time. In addition, the cost to control the process is proportional to the absolute value of the control signal, i.e., the sum $\sum_t |u_t|$ of the control signals should be kept as small as possible. Three different simulations show different degrees of sophistication in order to solve this problem.

One simulation, Simulation 4, where the innovation process is non-Gaussian, is also presented. It demonstrates that the method works for non-Gaussian processes as well. This case has an innovation process of independent, exponentially distributed (shifted to zero mean) random variables with the same mean value and standard deviation as for the Gaussian cases.
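One simple way to generate such innovations is a shifted exponential distribution, as in the following MATLAB sketch; the standard deviation used below is illustrative and not taken from the simulation setup.

sigma = 4;                               % illustrative standard deviation
N     = 1000;
e     = sigma * (-log(rand(N,1)) - 1);   % Exp(1) sample, shifted to zero mean, then scaled
% mean(e) is approximately 0 and std(e) approximately sigma; the distribution is
% skewed to the right, which the controller compensates for on average.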

Fig. 2. Behavior of the process when controlled using a quadratic loss function (Simulation 1). The bottom plot shows (=1) when the estimated open-loop process parameters correspond to an unstable process and the previous parameters are used, and (=0) when the parameters are updated.

The control system used 50 samples of old process values for estimation in all the simulations; thus, the control system was started after the first 50 time steps. It is possible to start the control earlier, using less data in the beginning, but this is not further investigated here. The predicted loss functions, and the values of the control signal, were calculated by creating 1000 resamples of future process values at every time step.

Simulation 1: The first simulation uses a control system based on a quadratic loss function, cf. (17), leading to a variant of minimum variance control. The performance is shown in Fig. 2. There are periods when the control system works so well that it becomes hard to estimate the process, but if estimates leading to an unstable process are discarded, the algorithm works very well. The sum of the control signals, $\sum_t |u_t|$, was 9970, and the process is inside the interval $[-10, 10]$ during 96.9 percent of the time. The standard deviation of the controlled process is 4.68. It should be noted that the control system handles the change in process parameters very well. A traditional minimum variance controller with known parameters would have a process error with a standard deviation of 4. Such a controller was within the limits (2.5 standard deviations) 98.6 percent of the time. The bootstrap controller is close in performance although it is not based on any direct information about the process parameters.
Simulation 2: The second simulation has a more complicated loss function, where the control system will maximize the probability of the process staying within the interval $[-10, 10]$, and when possible use small control signals. The loss function was a rectangular (band-limiting) criterion, cf. (19). The predicted expected loss as a function of the control variable at different times during the simulation is shown in Fig. 3, where it can be seen how the optimal control signal can be chosen.

This simulation demonstrates the ability of the bootstrap control to work for loss functions that are difficult or impossible to solve for a traditional control system. The performance is shown in Fig. 4. During a short period toward the end of the simulation there is very little information in the data, which results in a bad estimate of the process parameters. However, after a short period of poor control, enough information about the process is available and the estimator and control work well again. The sum of the control signals, $\sum_t |u_t|$, was 16450, which is larger than for the previous loss function, and the process was within the specified interval during 92.7 percent of the time. A large part of both the summed control signals and the time outside the interval comes from the short period when the control is poor and the control signals become large. If the short period of poor control toward the end of the simulation is excluded, the sum of the control signals is 10460, and the process was inside the interval during 96.6 percent of the time. This is comparable to the result in Simulation 1.

Fig. 3. Predicted expected loss as a function of the control variable u at four times during Simulation 2, with g an indicator-type (band-limiting) loss. The shape changes due to the current estimates of the process parameters and residuals, and the position is also affected by the current process values.

Simulation 3: The third and most complicated of the presented loss functions leads to a control system that will attempt to keep at least 90 percent of the process values within the interval $[-10, 10]$, and at the same time control the process as little as possible, i.e., set the control signal to zero as often as possible. This is a loss function that follows all the intentions that we started out with, and it can be written as in (32), or in MATLAB code as:

ysort = sort(y);
if abs(ysort(0.90*Nrepl)) > 10   % comparison with the band limit 10 reconstructed from the interval above
    Loss = sum(abs(y) > 10);
else
    Loss = abs(u);               % control cost, reconstructed as |u| from the problem description
end;

where y is a vector of length Nrepl, Nrepl being the number of bootstrap replicates of the future process value. The performance is shown in Fig. 5. The aim of 90 percent of the values inside the interval is fulfilled; actually, 91.6 percent of them were inside the interval, and the control signal was zero in 401 of the 1000 time units. The sum of the control signals, $\sum_t |u_t|$, was 3670, which is significantly lower than for the previous loss functions.

Simulation 4: The last simulation evaluates bootstrap control for non-Gaussian processes, see Fig. 6. The same loss function as in Simulation 3 is used, with similar results. The sum of the control signals, $\sum_t |u_t|$, is 5038 and the controlled process is within the interval $[-10, 10]$ 91.9 percent of the time. The control signal is zero at 548 occurrences out of 1000. Notice that the controller is on average trying to keep the process below zero to handle the skewness of the exponential distribution.

Other more or less complicated loss functions have also been tested, with similar results. In a few simulations occasional large values of the process have been observed, especially when the process is controlled tightly. It is then more difficult to get good estimates of the process parameters, and if the estimated model is incorrect, the process will be controlled rather badly for a short period. However, the poor control will result in more information about the process during this period, which will improve the parameter estimates, and the control system will start to work well again. This should preferably be handled in an optimal way, which is the topic of dual control, cf. [1].

VI. SUMMARY AND CONCLUSION

This paper addresses and presents a new solution to the problem of simultaneous estimation and optimal control of an unknown or time-varying dynamic system. The new solution presented, bootstrap control, is an adaptive predictive controller, which falls into the class of OLOF approximations to dual control.

The new method is based on statistical bootstrap techniques, making it possible to derive estimates of the future process values taking the unknown noise distribution as well as the unknown process description into account, allowing for determining the control signal from optimization of an arbitrary criterion function.

Fig. 4. Behavior of the process when controlled using a rectangular loss function (Simulation 2). The bottom plot shows (=1) when the estimated open-loop process parameters correspond to an unstable process and the previous parameters are used, and (=0) when the parameters are updated.

Fig. 5. Behavior of the process when controlled using the control system with the most complicated loss function (Simulation 3). The bottom plot shows (=1) when the estimated open-loop process parameters correspond to an unstable process and the previous parameters are used, and (=0) when the parameters are updated.

Fig. 6. Behavior of the non-Gaussian process when controlled using the control system with the most complicated loss function (Simulation 4). The bottom plot shows (=1) when the estimated open-loop process parameters correspond to an unstable process and the previous parameters are used, and (=0) when the parameters are updated.

The empirical distribution of the noise is used to create an arbitrary number of replicates of the observed noise, each of which is the basis for replicates of the system description and thus for future process values. This makes the new method independent of the distribution of the disturbances and also makes it possible to apply it to nonlinear as well as to linear systems, as long as the system description remains invertible. Furthermore, this way of persistently using the obtained data makes the method well suited also for treating time-varying systems where the rate of time-variation is small compared to the time span associated with the data used for the bootstrap. The optimal control is determined from the distribution of future process values, produced from the bootstrap technique. Hence the bootstrap control method is applicable to arbitrary loss functions by integrating over these distributions. The loss may be formulated, e.g., in terms of probabilities for process values exceeding limiting values, as well as more traditionally as quadratic deviations between process and reference trajectories.

In statistical process control (SPC), reference values are used to monitor quality of production by demanding that certain fractions of the measured quality variables of the product are inside given limits. The bootstrap control method makes it possible to formulate and use this kind of loss functions as part of a quality control strategy.

Analyses of this new controller, concerning, e.g., its robustness or the stability of the resulting closed-loop system, are important subjects for further research. The quality control aspect of this technique is another topic for future investigations.

The new controller is computationally intensive. However, it offers a solution to a number of otherwise intractable stochastic control problems.

REFERENCES

[1] K. J. Åström and B. Wittenmark, Adaptive Control. Reading, MA: Addison-Wesley, 1989.
[2] B. Lindoff and J. Holst, "Adaptive predictive control for time-varying stochastic system," in Proc. 36th IEEE Conf. Decision and Control, San Diego, CA, 1997, pp. 2477-2482.
[3] B. Lindoff, J. Holst, and B. Wittenmark, "Analysis of approximation of dual control," Int. J. Adapt. Control Signal Process., vol. 13, pp. 593-620, 1999.
[4] A. Fel'dbaum, "Dual control theory I-IV," Automat. Remote Control, vol. 21, pp. 874-880, 1960-61.
[5] D. Clarke, C. Mohtadi, and P. Tuffs, "Generalized predictive control—Part I. The basic algorithm," Automatica, vol. 23, pp. 137-148, 1987.
[6] O. Palsson, H. Madsen, and H. Søgaard, "Generalized predictive control for nonstationary systems," Automatica, vol. 30, pp. 1991-1997, 1994.
[7] H. Søgaard, "Stochastic systems with embedded parameter variations—Applications to district heating," Ph.D. dissertation, Inst. Math. Statist. Oper. Res., Tech. Univ. Denmark, Lyngby, Denmark, 1993.
[8] B. Efron, "Bootstrap methods: Another look at the jackknife," Ann. Statist., vol. 7, no. 1, pp. 1-26, 1979.
[9] B. Efron and R. Tibshirani, An Introduction to the Bootstrap. New York: Chapman & Hall, 1993.
[10] D. Freedman, "Bootstrapping regression models," Ann. Statist., vol. 9, no. 6, pp. 1218-1228, 1981.
[11] B. Efron and R. Tibshirani, "Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy," Statist. Sci., vol. 1, no. 1, pp. 54-77, 1986.
[12] D. Freedman and S. Peters, "Bootstrapping an econometric model: Some empirical results," J. Bus. Econ. Statist., vol. 2, no. 2, pp. 150-158, 1984.
[13] R. Stine, "Estimating properties of autoregressive forecasts," J. Amer. Statist. Assoc., vol. 82, no. 400, pp. 1072-1078, 1987.
[14] L. Thombs and W. Schucany, "Bootstrap prediction intervals for autoregression," J. Amer. Statist. Assoc., vol. 85, no. 410, pp. 486-492, 1990.
[15] F. Breidt, R. Davis, and W. Dunsmuir, "On backcasting in linear time series models," New Directions in Time Series Analysis, vol. 1, pp. 25-40, 1992.
[16] H. Künsch, "The jackknife and the bootstrap for general stationary observations," Ann. Statist., vol. 17, no. 3, pp. 1217-1241, 1989.
[17] F. Breidt, R. Davis, and W. Dunsmuir, "Improved bootstrap prediction intervals for autoregressions," J. Time Series Anal., vol. 16, no. 2, pp. 177-200, 1995.
[18] B. McCullough, "Bootstrapping forecast intervals: An application to AR(p) models," J. Forecasting, vol. 13, pp. 51-66, 1994.
[19] L. Ljung, System Identification: Theory for the User. Upper Saddle River, NJ: Prentice-Hall, 1987.

Mattias Aronsson received the M.Sc. in engineering physics from the Lund Institute of Technology, Lund University, Lund, Sweden. His studies and research in mathematical statistics have been directed toward stochastic modeling and time series analysis. He is a Partner at Occam Associates, a management consulting firm based in Stockholm, Sweden.

Lars Arvastson (M'95) received the M.Sc. degree in engineering physics and the Ph.D. degree in mathematical statistics from the Lund Institute of Technology, Lund University, Lund, Sweden, in 1992 and 2001, respectively. His research interests include stochastic modeling, production planning, and energy systems. He is currently with SimCorp A/S, Copenhagen, Denmark, where he is developing software for financial instrument analysis.

Jan Holst (S'72–M'78) received the M.Sc. degree in engineering physics and the Ph.D. degree in automatic control from the Lund Institute of Technology, Lund University, Lund, Sweden, in 1970 and 1977, respectively. He has been an Assistant Professor in automatic control at Lund University and an Associate Professor in mathematical statistics at the Technical University of Denmark. Since 1986, he has been a Full Professor with the Division of Mathematical Statistics, Lund University. His main research interests concern theoretical as well as practical issues of adaptive and stochastic procedures in prediction, control and signal processing, modeling, identification, and estimation techniques. He has been engaged in a large number of cooperative projects with industrial partners, encompassing signal processing on vehicles; modeling, prediction, supervision and control in energy systems; noise reduction in mobile communication; finance modeling, etc.

Bengt Lindoff (M'96) received the M.Sc. degree in electrical engineering and the Ph.D. degree in mathematical statistics from Lund Institute of Technology, Lund University, Lund, Sweden, in 1992 and 1997, respectively. Since 1998, he has been a Senior Research Engineer with Ericsson Mobile Platforms AB (former Ericsson Mobile Communications AB), Lund, Sweden, where he has been working with radio interface research for TDMA, CDMA, and OFDM systems with focus on implementation aspects for mobile terminals. He has written a number of conference papers discussing cellular communications from a mobile terminal point of view. Furthermore, he has filed more than 50 patents in the area of cellular communications.

Anders Svensson received the M.Sc. degree in engineering physics from the Lund Institute of Technology, Lund University, Lund, Sweden, in 1993, the M.A. degree in statistics from the University of California, Santa Barbara, in 1994, and the Ph.D. degree in mathematical statistics from Lund University, in 1998. His research interests include stochastic and deterministic modeling, estimation, and prediction. He is currently with Saab Bofors Dynamics, Linköping, Sweden, working with systems engineering, guidance, navigation, and control.
