Model transformations
Some comparisons
Refinements
Concluding remarks
Consider the system
  ẋ(t) = A0 x(t) + A1 x(t-h),  x(θ) = φ(θ), θ ∈ [-h, 0],
or, in the Laplace domain,
  s x̂(s) = A0 x̂(s) + A1 e^{-sh} x̂(s).
The question is whether this system is stable ∀h ∈ [0, h̄] for some h̄ > 0. This clearly requires that A0 + A1 be Hurwitz (stability at h = 0).
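The stability question lends itself to a quick numerical experiment. Below is a minimal forward-Euler simulation sketch; the function name, the matrices, the step size, and the horizon are all illustrative choices, not part of the talk:

```python
import math
from collections import deque

def simulate_dde(A0, A1, h, T=40.0, dt=1e-3):
    """Crude forward-Euler simulation of x'(t) = A0 x(t) + A1 x(t - h)
    with constant history x(theta) = (1, ..., 1) on [-h, 0].
    Returns the norm of x(T); a tiny value indicates decay."""
    n = len(A0)
    lag = max(1, int(round(h / dt)))
    hist = deque([1.0] * n for _ in range(lag + 1))  # x(t-lag*dt) ... x(t)
    x = hist[-1]
    for _ in range(int(T / dt)):
        xd = hist[0]                                 # delayed state x(t - h)
        dx = [sum(A0[i][j] * x[j] + A1[i][j] * xd[j] for j in range(n))
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        hist.popleft()
        hist.append(x)
    return math.sqrt(sum(v * v for v in x))

A0 = [[-2.0, 0.0], [0.0, -0.9]]    # illustrative data
A1 = [[-1.0, 0.0], [-1.0, -1.0]]
n_small = simulate_dde(A0, A1, h=0.5)   # small delay: trajectory decays
n_large = simulate_dde(A0, A1, h=7.0)   # large delay: no decay
```

Near the stability boundary the crude Euler scheme and the finite horizon blur the verdict, so a finer step and longer horizon would be needed there.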
Precise methods
Methods yielding exact stability intervals:
Nyquist criterion
(Tsypkin, 1946)
Delay-sweeping arguments
(Cooke & Grossman, 1982; Walton & Marshall, 1987)
Common pitfalls:
not suitable for analytic controller design
not readily extendible to multiple-delay systems
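For a scalar system ẋ(t) = a0 x(t) + a1 x(t-h), the delay-sweeping argument fits in a few lines; this is a sketch (the function name and the test values are illustrative): a root can reach the imaginary axis only at ω with |jω - a0| = |a1|, and matching the phase of e^{-jωh} gives the first destabilizing delay.

```python
import cmath, math

def sweep_hmax(a0, a1):
    """Exact delay margin of x'(t) = a0 x(t) + a1 x(t - h), stable at h = 0.
    Characteristic equation: s - a0 - a1 e^{-sh} = 0; a crossing s = jw
    requires |jw - a0| = |a1|."""
    if abs(a1) <= abs(a0):
        return math.inf          # no crossing: delay-independent stability
    w = math.sqrt(a1 * a1 - a0 * a0)
    target = (1j * w - a0) / a1  # required value of e^{-jwh} at the crossing
    h = -cmath.phase(target) / w
    return h if h >= 0 else h + 2 * math.pi / w

h1 = sweep_hmax(-1.0, -2.0)   # illustrative: margin 2*pi/(3*sqrt(3))
h2 = sweep_hmax(-2.0, -1.0)   # |a1| < |a0|: stable for every delay
```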
Outline
Lyapunov-Krasovskii methods & model transformations
Good ol' (scaled) Small Gain Theorem
Some comparisons
Possible refinements
Concluding remarks
Lyapunov-Krasovskii methods
Analysis based on
1. constructing a Lyapunov-Krasovskii functional (storage function), like
  V(x_t) = x'(t) P1 x(t) + ∫_{-h}^{0} x'(t+θ) P2(θ) x(t+θ) dθ + ∫_{-h}^{0} ∫_{-h}^{0} x'(t+θ) P3(θ,ζ) x(t+ζ) dθ dζ;
2. rendering its derivative along the system trajectories negative definite.
In the Laplace domain,
  s x̂ = A0 x̂ + A1 e^{-sh} x̂
      = (A0 + A1) x̂ - A1 (1 - e^{-sh}) x̂
      = (A0 + A1) x̂ - A1 ((1 - e^{-sh})/s) s x̂
      = (A0 + A1) x̂ - A1 ((1 - e^{-sh})/s) (A0 x̂ + A1 e^{-sh} x̂),
or in the time domain:
  ẋ(t) = (A0 + A1) x(t) - A1 ∫_0^h [A0 x(t-θ) + A1 x(t-h-θ)] dθ.
The transformed characteristic function factorizes as
  Δ̃(s) = det(sI - A0 - A1 e^{-sh}) · det(I - A1 (1 - e^{-sh})/s),
where the second factor constitutes the additional dynamics introduced by the transformation.
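The factorization can be sanity-checked numerically in the scalar case; a sketch (a0, a1, h, and the sample points are arbitrary):

```python
import cmath

def delta_orig(s, a0, a1, h):
    """Characteristic function of x'(t) = a0 x(t) + a1 x(t - h)."""
    return s - a0 - a1 * cmath.exp(-s * h)

def delta_transformed(s, a0, a1, h):
    """Scalar version of the first model transformation."""
    phi = (1 - cmath.exp(-s * h)) / s
    return s - (a0 + a1) + a1 * phi * (a0 + a1 * cmath.exp(-s * h))

def extra(s, a1, h):
    """Additional-dynamics factor introduced by the transformation."""
    return 1 - a1 * (1 - cmath.exp(-s * h)) / s

a0, a1, h = -1.0, -0.5, 0.7
max_err = max(abs(delta_transformed(s, a0, a1, h)
                  - delta_orig(s, a0, a1, h) * extra(s, a1, h))
              for s in (0.3 + 1.1j, -0.2 + 2.5j, 1.0 + 0.1j))
```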
Alternatively,
  s x̂ = A0 x̂ + A1 e^{-sh} x̂  ⟺  (I + A1 (1 - e^{-sh})/s) s x̂ = (A0 + A1) x̂,
or in the time domain:
  d/dt [ x(t) + A1 ∫_0^h x(t-θ) dθ ] = (A0 + A1) x(t).
The additional dynamics are now given by det(I + A1 (1 - e^{-sh})/s).
Yet another option:
  s x̂ = A0 x̂ + A1 e^{-sh} x̂ = (A0 + A1) x̂ - A1 ((1 - e^{-sh})/s) s x̂
(in fact, this is midway toward the first transformation), or in the time domain:
  ẋ(t) = (A0 + A1) x(t) - A1 ∫_0^h ẋ(t-θ) dθ.
Finally, introducing y = ẋ (so that s x̂ = ŷ),
  s x̂ = A0 x̂ + A1 e^{-sh} x̂  ⟺  ŷ = (A0 + A1) x̂ - A1 ((1 - e^{-sh})/s) ŷ,
which in the time domain is the descriptor system
  [I 0; 0 0] [ẋ(t); ẏ(t)] = [0 I; A0+A1 -I] [x(t); y(t)] - [0; A1] ∫_0^h y(t-θ) dθ.
What bothers me
conservatism sources are hidden
(LK functional choice, cross-term approximations, model transformation, ...)
[Block diagram: feedback interconnection of G(s) and Δ(s)]
Theorem
Let G(s) and Δ(s) be stable and such that
  ‖Δ‖∞ ≤ 1 and ‖G‖∞ < 1.
Then the closed-loop system is stable.

Since ‖e^{-sh}‖∞ ≤ 1 for every h, we have that
  s x̂ = A0 x̂ + A1 e^{-sh} x̂ stable ∀h ≥ 0
if
  s x̂ = A0 x̂ + A1 Δ x̂ stable ∀‖Δ‖∞ ≤ 1,
which, in turn, holds if
  ‖(sI - A0)^{-1} A1‖∞ < 1.
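In the scalar case the condition ‖(sI - A0)^{-1} A1‖∞ < 1 collapses to |a1| < |a0|, since for a0 < 0 the peak of |a1/(jω - a0)| sits at ω = 0. A crude grid sweep illustrates this (a sketch; the numbers are illustrative):

```python
import math

def hinf_gain(a0, a1, wmax=100.0, n=20001):
    """Grid estimate of sup_w |a1 / (jw - a0)| for the scalar case, a0 < 0."""
    return max(abs(a1) / math.hypot(w, a0)
               for w in (wmax * k / (n - 1) for k in range(n)))

g_di = hinf_gain(-2.0, -1.0)   # gain 1/2 < 1: stable for every delay
g_no = hinf_gain(-0.9, -1.0)   # gain 10/9 > 1: the test is inconclusive
```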
Since M e^{-sh} = e^{-sh} M for every M ∈ R^{n×n}, the condition can be scaled:
  s x̂ = A0 x̂ + A1 e^{-sh} x̂ stable ∀h ≥ 0
if
  s x̂ = A0 x̂ + A1 Δ x̂ stable ∀‖Δ‖∞ ≤ 1 such that MΔ = ΔM.
We then end up with the delay-independent condition
  ∃M: ‖M (sI - A0)^{-1} A1 M^{-1}‖∞ < 1,
which is LMI-able.
Disadvantages:
delay independent, hence too conservative
(there are not so many problems, if any, where delays can become arbitrarily large)
Return to
  s x̂ = A0 x̂ + A1 e^{-sh} x̂ = (A0 + A1) x̂ - A1 (1 - e^{-sh}) x̂.
The term 1 - e^{-sh} is a better candidate for approximations because its size (norm) does depend on the phase lag of e^{-jωh}.
[Figure: the loci of 1 - e^{-jω1 h}, 1 - e^{-jω2 h}, and 1 - e^{-jω3 h} in the complex plane]
Covering 1 - e^{-sh}
[Figure: the set {1 - e^{-jωh} : h ∈ [0, h̄]} and the covering circle of radius l_h̄(ω)]
  l_h̄(ω) = 2 sin(ωh̄/2)  if ωh̄ ≤ π,
  l_h̄(ω) = 2             if ωh̄ > π.
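That l_h̄ indeed covers the uncertainty, i.e., |1 - e^{-jωh}| ≤ l_h̄(ω) for every h ∈ [0, h̄], can be confirmed on a grid (a sketch; h̄ and the grid densities are arbitrary):

```python
import cmath, math

def l_cover(w, hbar):
    """Covering radius of the set {1 - e^{-jwh} : 0 <= h <= hbar}."""
    x = w * hbar
    return 2 * math.sin(x / 2) if x <= math.pi else 2.0

hbar = 1.3
violation = max(
    abs(1 - cmath.exp(-1j * w * h)) - l_cover(w, hbar)
    for w in (0.02 * k for k in range(1, 400))
    for h in (hbar * m / 200 for m in range(201)))
```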
  s x̂ = (A0 + A1) x̂ - A1 (1 - e^{-sh}) x̂ stable ∀h ∈ [0, h̄]
if
  s x̂ = (A0 + A1) x̂ + A1 Δ x̂ stable ∀ ‖Δ/l_h̄‖∞ ≤ 1.
Rational approximations of l_h̄
We need to construct a stable and rational W(s) such that |W(jω)| ≥ l_h̄(ω) for all ω. Some examples:
  W0(s) = h̄s,
  W1(s) = 2√3 h̄s / (h̄s + 2√3)
(note that |W1(jω)| < |W0(jω)| for all ω > 0), and
  W3(s) = (2.007 h̄s / (h̄s + 2)) · (s² + 1.567 ω̄ s + ω̄²) / (s² + 1.283 ω̄ s + ω̄²), with ω̄ = 2.358/h̄.
Replacing l_h̄ with such a W leads again to a scaled small-gain condition, which is LMI-able.
Advantages:
easily understandable
easily tractable
easily extendible to multiple-delay problems
easily incorporable into controller design (H∞ optimization)
conservatism sources, unlike in the LK approach, clearly seen.
Disadvantages:
seems to be too conservative in general
conservatism sources, unlike in the LK approach, clearly seen.
These are
(Huang & Zhou, 00), who cast the delay robustness problem as a μ-problem and claimed that LK-based solutions are much more conservative (LK derivations mostly use the W0 bound and static scaling);
(Zhang, Knospe, & Tsiotras, 01), who proved that several LK-based results are equivalent to (statically scaled) SG-based results, which use the W0 bound on |1 - e^{-jωh}|.
The system is stable ∀h ∈ [0, h̄] if
  s x̂(s) = (A0 + A1) x̂(s) - A1 Δ(s) h̄s x̂(s)
is stable for all ‖Δ‖∞ ≤ 1 such that MΔ = ΔM for all M. In principle, this is guaranteed if ∃M = M' > 0 such that
  ‖M (I + (A0 + A1)(sI - A0 - A1)^{-1}) A1 M^{-1}‖∞ < 1/h̄,
yet we may want to rewrite it as
  ‖ [0 M] (s [I 0; 0 0] - [0 I; A0+A1 -I])^{-1} [0; A1 M^{-1}] ‖∞ < 1/h̄.
The latter bound holds iff ∃X such that E'X = X'E > 0 and
  [ A'X + X'A   X'B   C' ]
  [ B'X         -I    0  ]  < 0.
  [ C           0     -I ]
In our case,
  E'X = [X11 X12; 0 0],  X'E = [X11' 0; X12' 0],
so E'X = X'E > 0 forces
  X12 = 0 and X11 = X11' > 0.
Spelling the LMI out:
  [ Ā'X21 + X21'Ā   X11 - X21' + Ā'X22   X21'A1 M^{-1}   0         ]
  [ ⋆               -X22 - X22'          X22'A1 M^{-1}   M         ]  < 0,
  [ ⋆               ⋆                    -(1/h̄) I        0         ]
  [ ⋆               ⋆                    ⋆               -(1/h̄) I  ]
or, after a Schur complement and a congruence transformation,
  [ Ā'X21 + X21'Ā   X11 - X21' + Ā'X22   h̄ X21'A1 ]
  [ ⋆               -X22 - X22' + h̄Y     h̄ X22'A1 ]  < 0,
  [ ⋆               ⋆                    -h̄Y       ]
where Ā := A0 + A1 and Y := M². This is exactly the condition of (Fridman, 01) derived via the LK technique.
Note that the only approximation here is the covering of 1 - e^{-sh}, used with the realization
  (s [I 0; 0 0] - [0 I; A0+A1 -I])^{-1} [0; A1],
which does not introduce any additional dynamics: no model transformation takes place.
Example
Consider the system (Kolmanovskii & Richard, 99):
  ẋ(t) = [-1 0.5; 0.5 -1] x(t) + [-2 2; 2 -2] x(t-h).
The following stability bounds are available:
  Method     h̄max
  IV         0.271
  SGT+W0     0.2716
  SGT+W1     0.3042
  SGT+l_h̄    0.3047
Lyapunov-Krasovskii functional:
1. Pros and cons obscure (?)
2. Hinges upon W0(s) = h̄s
The question is: what makes LK methods so dominating?
Consider
  ẋ(t) = [-2 0; 0 -0.9] x(t) - [1 0; 1 1] x(t-h).
This system can be presented as the cascade
  → 1/(s + 2 + e^{-sh}) → x1 → e^{-sh} → 1/(s + 0.9 + e^{-sh}) → x2,
where the x1-block is delay-independent stable and the x2-block is delay-dependent.
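The cascade reflects the triangular structure of the matrices: det(sI - A0 - A1 e^{-sh}) splits into the two scalar factors. A numerical spot-check (the sample points are arbitrary):

```python
import cmath

A0 = [[-2.0, 0.0], [0.0, -0.9]]
A1 = [[-1.0, 0.0], [-1.0, -1.0]]

def char_det(s, h):
    """det(sI - A0 - A1 e^{-sh}) computed directly from the matrices."""
    e = cmath.exp(-s * h)
    m = [[(s if i == j else 0.0) - A0[i][j] - A1[i][j] * e
          for j in range(2)] for i in range(2)]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

h = 1.7
split_err = max(
    abs(char_det(s, h)
        - (s + 2 + cmath.exp(-s * h)) * (s + 0.9 + cmath.exp(-s * h)))
    for s in (0.4 + 0.9j, -0.1 + 2.0j, 1.2 - 0.3j))
```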
Covering 1 - e^{-sh} only in the second coordinate, i.e., with the structure [0 0; 0 I], yields
  s x̂ = [-2 0; 0 -1.9] x̂ + [0 0; 0 1] (1 - e^{-sh}) x̂ - [1 0; 1 0] e^{-sh} x̂
and effectively reduces the problem to
  s x̂2 = -1.9 x̂2 + (1 - e^{-sh}) x̂2.
On the other hand, in
  s x̂ = [-2 0; 0 -0.9] x̂ - [1 0; 1 1] e^{-sh} x̂
stability is determined by
  s x̂2 = -0.9 x̂2 - e^{-sh} x̂2.
It then becomes clear why some methods are less conservative than others on this particular example (and similar ones).
  Method     h̄max
  I          0.99
  II         0.99
  III+PI     4.36
  I+         4.35
  II+        4.35
  IV+PI      4.47
  SGT+W3     4.84
  Exact      6.17
Beyond SGT
Small Gain Theorem is not the only frequency-domain robustness tool. We
may try to
combine small gain and passivity arguments
Shifted covering
Clearly,
  s x̂ = A0 x̂ + A1 e^{-sh} x̂ = (A0 + A1 V(s)) x̂ - A1 (V(s) - e^{-sh}) x̂.
We may then try to choose V(s) to reduce the conservatism of covering |V(jω) - e^{-jωh}|.
[Figure: the arc {e^{-jωh} : h ∈ [0, h̄]} with its shifted covering circle, versus the original covering of radius l_h̄(ω)]
  V(jω) = cos(ωh̄/2) e^{-jωh̄/2}  if ωh̄ ≤ π,
  V(jω) = 0                      if ωh̄ > π,
and then l_{h̄,V}(ω) = (1/2) l_h̄(ω).
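The relation l_{h̄,V} = l_h̄/2 says that the disc centered at V(jω), with half the original radius, still covers the arc {e^{-jωh} : h ∈ [0, h̄]}; a grid check (h̄ and the grids are arbitrary):

```python
import cmath, math

def V(w, hbar):
    """Shifted covering center for the arc {e^{-jwh} : 0 <= h <= hbar}."""
    x = w * hbar
    return cmath.exp(-1j * x / 2) * math.cos(x / 2) if x <= math.pi else 0.0

def half_radius(w, hbar):
    x = w * hbar                      # l_{hbar,V} = l_hbar / 2
    return math.sin(x / 2) if x <= math.pi else 1.0

hbar = 1.3
excess = max(
    abs(V(w, hbar) - cmath.exp(-1j * w * h)) - half_radius(w, hbar)
    for w in (0.02 * k for k in range(1, 400))
    for h in (hbar * m / 200 for m in range(201)))
```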
A rational substitute is, e.g.,
  V1(s) = 2/(h̄s + 2).
Covering radii are then:
[Figure: covering radii l_h̄, l_{h̄,V}, and l_{h̄,V1} versus frequency]
Concluding remarks