
Model transformations

Small Gain Theorem

Some comparisons

Refinements

Concluding remarks

An Opinionated View on Delay Robustness


Leonid Mirkin
Faculty of Mechanical Engineering
Technion IIT

May 18, 2006


Single-delay system with uncertain delay

Consider the LTI system

\dot x(t) = A_0 x(t) + A_1 x(t-h), \qquad x(\theta) = \phi(\theta),\ \theta \in [-h, 0],

or, equivalently, in the s-domain:

s\hat x(s) = A_0 \hat x(s) + A_1 e^{-sh} \hat x(s).

We'd like to be able to check
whether this system is stable for all h \in [0, \bar h], for some \bar h > 0.

This clearly requires that

A1: the delay-free system is stable, i.e., A_0 + A_1 is Hurwitz.
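Assumption A1 is easy to test numerically. A minimal sketch, borrowing the example matrices that appear later in the talk (the Li & de Souza system):

```python
import numpy as np

# Example data (the Li & de Souza system used later in the talk):
# x'(t) = A0 x(t) + A1 x(t-h)
A0 = np.array([[-2.0, 0.0], [0.0, -0.9]])
A1 = np.array([[-1.0, 0.0], [-1.0, -1.0]])

# Necessary condition A1: the delay-free system (h = 0) must be stable,
# i.e., A0 + A1 must be Hurwitz (all eigenvalues in the open left half-plane).
eigs = np.linalg.eigvals(A0 + A1)
is_hurwitz = bool(np.all(eigs.real < 0))
print(is_hurwitz)
```

For this pair the delay-free matrix A0 + A1 is triangular with diagonal (-3, -1.9), so the check passes.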


Precise methods

Methods yielding exact stability intervals:
Nyquist criterion (Tsypkin, 1946)
Delay-sweeping arguments (Cooke & Grossman, 1982; Walton & Marshall, 1987)
Schur-Cohn criterion inspirations (J. Chen, G. Gu, & Nett, 1995)

Common pitfalls:
not suitable for analytic controller design
not readily extendible to multiple-delay systems



The quest for alternatives

Has intensified during the last decade:
a zillion papers published
dominated by Lyapunov-Krasovskii (LK) methods
(LMI solutions derived via the state-space LK technique)


Outline
Lyapunov-Krasovskii methods & model transformations
Good ol' (scaled) Small Gain Theorem
Some comparisons
Possible refinements
Concluding remarks



Lyapunov-Krasovskii methods

Analysis is based on
1. constructing a Lyapunov-Krasovskii functional (storage function), like

   V = x'(t) P_1 x(t) + 2 x'(t) \int_{-h}^{0} P_2(\theta)\, x(t+\theta)\, d\theta
     + \int_{-h}^{0} \int_{-h}^{0} x'(t+\theta)\, P_3(\theta,\tau)\, x(t+\tau)\, d\theta\, d\tau + \cdots,

2. calculating its derivative along system trajectories,
3. completing squares via approximating some cross-terms,
4. ending up with LMI conditions guaranteeing that \dot V < 0.


Lyapunov-Krasovskii methods: transformations

An apparent obstacle in the use of this approach is that
the equation \dot x(t) = A_0 x(t) + A_1 x(t-h) is not quite compatible
with Lyapunov-Krasovskii techniques (not LK-friendly).

The conventional way to circumvent this obstacle is to
transform this model to a more suitable form
by rearranging its terms.



The first transformation

Rewrite

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x
        = (A_0 + A_1)\hat x - A_1 (1 - e^{-sh})\hat x
        = (A_0 + A_1)\hat x - A_1 \frac{1 - e^{-sh}}{s}\, s\hat x
        = (A_0 + A_1)\hat x - A_1 \frac{1 - e^{-sh}}{s}\, (A_0 \hat x + A_1 e^{-sh}\hat x)

or, in the time domain:

\dot x(t) = (A_0 + A_1) x(t) - A_1 \int_0^h \bigl( A_0 x(t-\theta) + A_1 x(t-h-\theta) \bigr)\, d\theta.

Turns out to be more LK-friendly, yet introduces additional dynamics:

\chi(s) = \det\bigl( sI - A_0 - A_1 e^{-sh} \bigr) \det\bigl( sI - A_1 (1 - e^{-sh}) \bigr),

the stability of which is hard to check (a source of additional conservatism).



The second transformation

Rewrite

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x
\quad\Longleftrightarrow\quad
\Bigl( I + A_1 \frac{1 - e^{-sh}}{s} \Bigr) s\hat x = (A_0 + A_1)\hat x

or, in the time domain:

\frac{d}{dt} \Bigl[ x(t) + A_1 \int_0^h x(t-\theta)\, d\theta \Bigr] = (A_0 + A_1) x(t).

Turns out to be more LK-friendly, yet requires the stability of the roots of

\det\Bigl( I + A_1 \frac{1 - e^{-sh}}{s} \Bigr) = 0
\quad\text{or, equivalently, of}\quad
x(t) = -A_1 \int_0^h x(t-\theta)\, d\theta,

which is hard to verify, so it might be a source of additional conservatism too.



The third transformation

Rewrite

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x = (A_0 + A_1)\hat x - A_1 \frac{1 - e^{-sh}}{s}\, s\hat x

(in fact, this is midway toward the first transformation) or, in the time domain:

\dot x(t) = (A_0 + A_1) x(t) - A_1 \int_0^h \dot x(t-\theta)\, d\theta.

Somehow it is also LK-friendly, yet claimed to introduce additional terms to V,
i.e., it leads to overdesign (might be a source of additional conservatism too).



The fourth transformation

Rewrite

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x
\quad\Longleftrightarrow\quad
\begin{cases} s\hat x = \hat y \\ \hat y = (A_0 + A_1)\hat x - A_1 \frac{1 - e^{-sh}}{s}\, \hat y \end{cases}

or, in the time domain, as the descriptor system

\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot x(t) \\ \dot y(t) \end{bmatrix}
= \begin{bmatrix} 0 & I \\ A_0 + A_1 & -I \end{bmatrix} \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}
- \begin{bmatrix} 0 \\ A_1 \end{bmatrix} \int_0^h y(t-\theta)\, d\theta.

No surprise that it is also considered LK-friendly. Moreover, it is
claimed to be less conservative.



What bothers me

conservatism sources are hidden
(LK functional choice, cross-term approximations, model transformation, . . . )

the rationale behind model transformations is obscure (recondite?)
(mysteriously, they all concentrate on \frac{1 - e^{-sh}}{s}, yet I found no hint why)



The Small Gain Theorem

[Block diagram: feedback interconnection of G(s) and \Delta(s).]

Theorem
Let G(s) and \Delta(s) be stable and such that

\|\Delta\|_\infty \le 1 \quad\text{and}\quad \|G\|_\infty < 1.

Then the closed-loop system is internally stable.


Delay as unstructured uncertainty I

A straightforward approach is to exploit the fact that

\|e^{-sh}\|_\infty \le 1, \quad \forall h.

Then,

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x \ \text{stable}\ \forall h \ge 0
\quad\text{if}\quad
s\hat x = A_0 \hat x + A_1 \Delta \hat x \ \text{stable}\ \forall \|\Delta\|_\infty \le 1.

We then end up with the delay-independent (sufficient) condition

\|(sI - A_0)^{-1} A_1\|_\infty < 1,

which is easily solvable.
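The condition can be estimated by a plain frequency sweep. A sketch, again with the Li & de Souza matrices used later in the talk; since that system has only a finite delay margin, the delay-independent test should (and does) fail here, illustrating its conservatism:

```python
import numpy as np

# Example matrices (the Li & de Souza system used later in the talk).
A0 = np.array([[-2.0, 0.0], [0.0, -0.9]])
A1 = np.array([[-1.0, 0.0], [-1.0, -1.0]])

# Estimate the H-infinity norm ||(sI - A0)^{-1} A1|| by a frequency sweep:
# the largest singular value of (jwI - A0)^{-1} A1 over a frequency grid.
I = np.eye(2)
gains = [
    np.linalg.svd(np.linalg.solve(1j * w * I - A0, A1), compute_uv=False)[0]
    for w in np.logspace(-3, 3, 2000)
]
hinf = max(gains)
print(hinf < 1.0)  # the delay-independent sufficient condition
```

A sweep is only a lower-bound estimate of the true norm, but it already exceeds 1 here (the gain at w = 0 is about 1.6), so the delay-independent condition cannot hold.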



Delay as unstructured uncertainty I (cont'd)

Conservatism can be reduced a bit by noticing that

M e^{-sh} = e^{-sh} M, \quad \forall M \in \mathbb{R}^{n \times n}.

Then,

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x \ \text{stable}\ \forall h \ge 0
\quad\text{if}\quad
s\hat x = A_0 \hat x + A_1 \Delta \hat x \ \text{stable}\ \forall \|\Delta\|_\infty \le 1 \ \text{such that}\ M\Delta = \Delta M.

We then end up with the delay-independent condition

\exists M = M' > 0 \ \text{such that}\ \|M (sI - A_0)^{-1} A_1 M^{-1}\|_\infty < 1,

which is LMI-able.


Delay as unstructured uncertainty I (cont'd)

Advantages:
easily understandable
easily tractable
easily extendible to multiple-delay problems
easily incorporable into controller design (H_\infty optimization)

Disadvantages:
delay independent, hence too conservative
(there are not so many problems, if any, where delays can become arbitrarily large)
all phase information about the delay is neglected



Delay as unstructured uncertainty II

Rewrite

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x = (A_0 + A_1)\hat x - A_1 (1 - e^{-sh})\hat x.

The term 1 - e^{-sh} is a better candidate for approximations because its
size (norm) does depend on the phase lag of e^{-j\omega h}.

[Phasor diagrams of 1 - e^{-j\omega_i h} in the complex plane for \omega_1 < \omega_2 < \omega_3: the magnitude grows with the phase lag.]



Covering 1 - e^{-sh}

[Diagram: the set \{1 - e^{-j\omega h} : h \in [0, \bar h]\} and the covering circle of radius l_{\bar h}(\omega) centered at the origin.]

Simple geometry yields that

l_{\bar h}(\omega) = \begin{cases} 2 \left| \sin\frac{\omega \bar h}{2} \right| & \text{if}\ \omega \bar h \le \pi \\ 2 & \text{if}\ \omega \bar h > \pi \end{cases}

Delay as unstructured uncertainty II (cont'd)

Then,

s\hat x = (A_0 + A_1)\hat x - A_1 (1 - e^{-sh})\hat x \ \text{stable}\ \forall h \in [0, \bar h]
\quad\text{if}\quad
s\hat x = (A_0 + A_1)\hat x + A_1 \Delta \hat x \ \text{stable}\ \forall \|\Delta / l_{\bar h}\|_\infty \le 1.

We then have the delay-dependent (sufficient) condition

\|(sI - A_0 - A_1)^{-1} A_1\, l_{\bar h}(s)\|_\infty < 1,

which might not be easy to check, though, because l_{\bar h}(\omega) is not rational.



Rational approximations of l_{\bar h}

We need to construct a stable and rational W(s) such that

|W(j\omega)| \ge l_{\bar h}(\omega), \quad \forall \omega.

Some examples:

W_0(s) = \bar h s, \qquad
W_1(s) = \frac{2\sqrt{3}\, \bar h s}{\bar h s + 2\sqrt{3}}, \qquad
W_3(s) = \frac{2.007\, \bar h s\, (s^2 + 1.567\, \bar\omega s + \bar\omega^2)}{(\bar h s + 2)\, (s^2 + 1.283\, \bar\omega s + \bar\omega^2)} \ \text{with}\ \bar\omega = 2.358 / \bar h

(note that |W_1(j\omega)| < |W_0(j\omega)| for all \omega > 0).

We then end up with the delay-dependent (sufficient) condition

\|(sI - A_0 - A_1)^{-1} A_1 W(s)\|_\infty < 1,

which is easily calculable. . .

. . . or, exploiting M(1 - e^{-sh}) = (1 - e^{-sh})M, with the (sufficient) condition

\exists M = M' > 0 \ \text{such that}\ \|M (sI - A_0 - A_1)^{-1} A_1 W(s) M^{-1}\|_\infty < 1,

which is LMI-able.
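The covering property of the weights can be checked on a frequency grid. A sketch for W_0 and W_1 (the W_1 coefficient 2*sqrt(3) is a reconstruction from the slide and is taken as an assumption here):

```python
import numpy as np

# Check (numerically) that the weights cover l(w):
# W0(s) = hbar*s and W1(s) = 2*sqrt(3)*hbar*s / (hbar*s + 2*sqrt(3)).
hbar = 1.0  # w.l.o.g.: l and the weights depend on w only through hbar*w
w = np.linspace(1e-4, 100.0, 20000)
l = np.where(w * hbar <= np.pi, 2.0 * np.abs(np.sin(w * hbar / 2.0)), 2.0)

s = 1j * w
W0 = np.abs(hbar * s)
W1 = np.abs(2 * np.sqrt(3) * hbar * s / (hbar * s + 2 * np.sqrt(3)))

covers = bool(np.all(W0 >= l - 1e-9) and np.all(W1 >= l - 1e-9))
tighter = bool(np.all(W1 < W0))  # |W1(jw)| < |W0(jw)| for all w > 0
print(covers, tighter)
```

The margin of W_1 over l is tiny at low frequencies (the two agree to third order in hbar*w), which is what makes it a much tighter covering than W_0.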


Delay as unstructured uncertainty II (cont'd)

The idea is rather old (pre-LK); it
can be traced back to (Owens & Raya, 82; Morari & Zafiriou, 89) and
was extensively exposed in (Wang, Lundström & Skogestad, 94).

Advantages:
easily understandable
easily tractable
easily extendible to multiple-delay problems
easily incorporable into controller design (H_\infty optimization)
conservatism sources, unlike the LK approach, clearly seen

Disadvantages:
seems to be too conservative in general
conservatism sources, unlike the LK approach, clearly seen¹

¹ This appears to harm the acceptance of the method.




SG isn't more conservative than LK

Two papers in the early 2000s showed that, in many cases,
LK conditions might actually be more conservative than SG conditions.

These are
(Huang & Zhou, 00), who cast the delay robustness problem as a \mu-problem and
claimed that LK-based solutions are much more conservative
(LK derivations mostly use the W_0-bound and static scaling);
(Zhang, Knospe, & Tsiotras, 01), who proved that several LK-based results
are equivalent to (statically scaled) SG-based results which
use the W_0-bound on |1 - e^{-j\omega h}|.



Descriptor approach vs. Small Gain Theorem

Consider the covering of |1 - e^{-j\omega h}| with W_0 = \bar h s. We have that our system is
stable \forall h \in [0, \bar h] if

s\hat x(s) = (A_0 + A_1)\hat x(s) - A_1 \Delta(s)\, \bar h s\, \hat x(s)

is stable for all \|\Delta\|_\infty \le 1 such that M\Delta = \Delta M for all M. In principle, this
is guaranteed if \exists M = M' > 0 such that

\bigl\| M \bigl( I + (A_0 + A_1)(sI - A_0 - A_1)^{-1} \bigr) A_1 M^{-1} \bigr\|_\infty < \frac{1}{\bar h},

yet we may want to rewrite it as

\Bigl\| \begin{bmatrix} 0 & M \end{bmatrix}
\Bigl( s \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & I \\ A_0 + A_1 & -I \end{bmatrix} \Bigr)^{-1}
\begin{bmatrix} 0 \\ A_1 M^{-1} \end{bmatrix} \Bigr\|_\infty < \frac{1}{\bar h}

and end up with the problem of

calculating the H_\infty norm of a descriptor system.



H_\infty norm of descriptor systems

Theorem (Rehm & Allgöwer, 99)
Let \det(sE - A) \not\equiv 0. Then \|C(sE - A)^{-1}B\|_\infty < \gamma iff
\exists X such that E'X = X'E \ge 0 and

\begin{bmatrix} A'X + X'A & X'B & C' \\ B'X & -\gamma I & 0 \\ C & 0 & -\gamma I \end{bmatrix} < 0.

In our case,

E'X = X'E \ge 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}' \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}
= \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}' \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} \ge 0.

Hence

X_{12} = 0 \quad\text{and}\quad X_{11} = X_{11}' \ge 0.



Descriptor approach vs. Small Gain Theorem (cont'd)

Proceeding further, we get the LMI solvability condition

\begin{bmatrix}
\bar A' X_{21} + X_{21}' \bar A & X_{11} + \bar A' X_{22} - X_{21}' & X_{21}' A_1 M^{-1} & 0 \\
X_{11} + X_{22}' \bar A - X_{21} & -X_{22} - X_{22}' & X_{22}' A_1 M^{-1} & M \\
M^{-1} A_1' X_{21} & M^{-1} A_1' X_{22} & -\frac{1}{\bar h} I & 0 \\
0 & M & 0 & -\frac{1}{\bar h} I
\end{bmatrix} < 0

or, equivalently (via the Schur complement of the (4,4) term), the LMI

\begin{bmatrix}
\bar A' X_{21} + X_{21}' \bar A & X_{11} + \bar A' X_{22} - X_{21}' & \bar h X_{21}' A_1 \\
X_{11} + X_{22}' \bar A - X_{21} & -X_{22} - X_{22}' + \bar h Y & \bar h X_{22}' A_1 \\
\bar h A_1' X_{21} & \bar h A_1' X_{22} & -\bar h Y
\end{bmatrix} < 0,

where \bar A := A_0 + A_1 and Y := M^2. This is
exactly the condition of (Fridman, 01) derived via the LK technique.



Descriptor approach is a version of SGT too

Thus, we have that
the descriptor transformation leads to conditions which are equivalent to the
application of the scaled Small Gain Theorem under
the covering \omega \bar h \ge |1 - e^{-j\omega h}| (in a sense, the weakest covering)
and bringing some redundancy into the state-space realization via

A_1 + (A_0 + A_1) \bigl( sI - A_0 - A_1 \bigr)^{-1} A_1
= \begin{bmatrix} 0 & I \end{bmatrix}
\Bigl( s \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & I \\ A_0 + A_1 & -I \end{bmatrix} \Bigr)^{-1}
\begin{bmatrix} 0 \\ A_1 \end{bmatrix},

which does not introduce any additional dynamics.

In other words, the descriptor transformation appears to be a
smart solution to a problem one should not have gotten into in the first place.
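The realization identity above can be verified pointwise in s: both sides are the same transfer matrix. A sketch at a few sample frequencies, with example matrices assumed (the Li & de Souza pair):

```python
import numpy as np

# Verify, at sample frequencies, the realization identity
#   A1 + (A0+A1)(sI - A0 - A1)^{-1} A1
#     = [0 I] (sE - F)^{-1} [0; A1],
# with E = [[I,0],[0,0]] and F = [[0,I],[A0+A1,-I]] (redundancy, no new dynamics).
A0 = np.array([[-2.0, 0.0], [0.0, -0.9]])
A1 = np.array([[-1.0, 0.0], [-1.0, -1.0]])
n = 2
Abar = A0 + A1
I, Z = np.eye(n), np.zeros((n, n))
E = np.block([[I, Z], [Z, Z]])
F = np.block([[Z, I], [Abar, -I]])
B = np.vstack([Z, A1])
C = np.hstack([Z, I])

max_err = 0.0
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    lhs = A1 + Abar @ np.linalg.solve(s * I - Abar, A1)
    rhs = C @ np.linalg.solve(s * E - F, B)
    max_err = max(max_err, float(np.abs(lhs - rhs).max()))
print(max_err < 1e-10)
```

Eliminating y from the descriptor pencil gives y = (A0+A1)x + A1 u with sx = y, which reproduces the left-hand side exactly, so the error is at floating-point level.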



Example

Consider the system (Kolmanovskii & Richard, 99):

\dot x(t) = \begin{bmatrix} -1 & 0.5 \\ 0.5 & -1 \end{bmatrix} x(t) + \begin{bmatrix} -2 & -2 \\ -2 & -2 \end{bmatrix} x(t-h).

The following stability bounds are available:

Method:          IV      SGT+W_0   SGT+W_1   SGT+l_{\bar h}
\bar h_{max}:    0.271   0.2716    0.3042    0.3047

where unscaled SGT was used.

Some arguments in favor of SGT

Lyapunov-Krasovskii functional:
1. Pros and cons obscure
2. Hinges upon W_0(s) = \bar h s (?)
3. Hinges upon static scalings
4. Design limited to FD controllers
   (hence the nominal delay has to be h_0 = 0)

Small Gain Theorem:
1. Pros and cons transparent
2. Can use tighter coverings
3. Can use dynamic scalings (\mu)
4. Design can use Smith predictors
   (hence the nominal delay may be h_0 = \bar h / 2)

The question is: what makes LK methods so dominating?



Trading off DIS-DDS

We can also rewrite the equation s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x as

s\hat x = (A_0 + A_1 \Lambda)\hat x - A_1 \Lambda (1 - e^{-sh})\hat x - A_1 (\Lambda - I) e^{-sh} \hat x,

where \Lambda is arbitrary. If
\Lambda = 0: the delay-independent conditions are recovered;
\Lambda = I: the delay-dependent conditions are recovered.

In general, \Lambda brings more freedom, and this freedom is LMI-able.

This freedom is exploited in LK methods too,
either explicitly (parametrized model transformation)
or implicitly (via the so-called Park's inequality for bounding cross-terms)
(the connection is not quite transparent, though Zhang, Knospe & Tsiotras (01) showed it via the
equivalence of the resulting LMIs).
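The \Lambda-parametrized rewriting is an exact algebraic identity for any \Lambda; the terms cancel back to A_0 + A_1 e^{-sh}. A quick numerical check with random matrices:

```python
import numpy as np

# Check that (A0 + A1*Lam) - A1*Lam*(1 - d) - A1*(Lam - I)*d == A0 + A1*d
# for arbitrary Lam, where d stands for e^{-sh} at a sample point s.
rng = np.random.default_rng(0)
A0, A1, Lam = rng.normal(size=(3, 2, 2))  # three random 2x2 matrices

ok = True
for s in (1j * 0.3, 1j * 2.0, 0.5 + 1j):
    d = np.exp(-s * 0.7)  # e^{-sh} for an arbitrary delay h = 0.7
    lhs = A0 + A1 * d
    rhs = (A0 + A1 @ Lam) - A1 @ Lam * (1 - d) - A1 @ (Lam - np.eye(2)) * d
    ok = ok and np.allclose(lhs, rhs)
print(ok)
```

Expanding the right-hand side, the A1 Lam and A1 Lam d terms cancel in pairs, leaving A0 + A1 d, which is why the freedom in \Lambda costs nothing.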



Trading off DIS-DDS: example

An (almost classical) example of (Li & de Souza, 97) considers the system

\dot x(t) = \begin{bmatrix} -2 & 0 \\ 0 & -0.9 \end{bmatrix} x(t) - \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} x(t-h).

This system can be presented as a cascade:

[Block diagram: the delay-independent stable subsystem \frac{1}{s + 2 + e^{-sh}} (state x_1) feeds, through e^{-sh}, the delay-dependent subsystem \frac{1}{s + 0.9 + e^{-sh}} (state x_2).]

This means that the choice \Lambda = \begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix} yields

s\hat x = \begin{bmatrix} -2 & 0 \\ 0 & -1.9 \end{bmatrix}\hat x + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} (1 - e^{-sh})\hat x - \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} e^{-sh}\hat x

and effectively reduces this system to s\hat x_2 = -1.9\hat x_2 + (1 - e^{-sh})\hat x_2.


Trading off DIS-DDS: example (cont'd)

Thus

s\hat x = \begin{bmatrix} -2 & 0 \\ 0 & -0.9 \end{bmatrix}\hat x - \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} e^{-sh}\hat x
\quad\overset{\text{stability}}{\Longleftrightarrow}\quad
s\hat x_2 = -0.9\hat x_2 - e^{-sh}\hat x_2.

It then becomes clear why some methods are less conservative than others
on this particular example (and alike):

Method:         I     II    III+PI  I+\Lambda  II+\Lambda  IV+PI  SGT+W_3  Exact
\bar h_{max}:   .99   .99   4.36    4.35       4.35        4.47   4.84     6.17

More successful methods get rid of x_1, either explicitly (via \Lambda) or implicitly
(via Park's inequality).
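The "Exact 6.17" entry follows from the reduced scalar loop s = -0.9 - e^{-sh}: the delay margin is set by the first crossing of the imaginary axis, where |jw + 0.9| = 1 fixes the crossing frequency and the phase condition fixes the delay. A few lines reproduce it:

```python
import numpy as np

# Exact margin for s*x2 = -0.9*x2 - e^{-sh}*x2, i.e. s + 0.9 + e^{-sh} = 0.
# At a jw-axis crossing: jw + 0.9 = -e^{-jwh}, so |jw + 0.9| = 1 and
# arg(jw + 0.9) = pi - w*h.
wc = np.sqrt(1.0 - 0.9**2)                  # crossing frequency
h_max = (np.pi - np.arctan(wc / 0.9)) / wc  # first crossing delay
print(round(h_max, 2))                      # matches the "Exact" table entry
```

This gives h_max = 6.17, matching the table.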


Beyond SGT

The Small Gain Theorem is not the only frequency-domain robustness tool. We
may try to

combine small gain and passivity arguments

to end up with less conservative results. This can be done in the

IQC framework,

for example, as shown in (Megretski & Rantzer, 97).


Shifted covering

Clearly,

s\hat x = A_0 \hat x + A_1 e^{-sh} \hat x = \bigl( A_0 + A_1 V(s) \bigr)\hat x - A_1 \bigl( V(s) - e^{-sh} \bigr)\hat x.

We may then try to
choose V(s) to reduce the conservatism of covering |V(j\omega) - e^{-j\omega h}|.


Shifted covering: how to choose V(s)

[Diagrams: the arc \{e^{-j\omega h} : h \in [0, \bar h]\} with the original covering circle of radius l_{\bar h}(\omega) centered at 1, and the shifted covering circle centered at V(j\omega).]

Simple geometry yields then:

V(j\omega) = \begin{cases} \cos\frac{\omega \bar h}{2}\, e^{-j\omega \bar h / 2} & \text{if}\ \omega \bar h \le \pi \\ 0 & \text{if}\ \omega \bar h > \pi \end{cases}

and l_{\bar h, V}(\omega) = \frac{1}{2}\, l_{\bar h}(\omega).
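The halved radius can be checked numerically: V(jw) is the midpoint of the chord from 1 to e^{-j*w*hbar}, so its distance to every point of the arc \{e^{-jwh} : h in [0, hbar]\} stays within sin(w*hbar/2) = l(w)/2. A sketch with an arbitrary illustrative hbar:

```python
import numpy as np

# Check the shifted covering: for all h in [0, hbar],
# |V(jw) - e^{-jwh}| <= l(w)/2, with V(jw) = cos(w*hbar/2) e^{-j*w*hbar/2}
# for w*hbar <= pi and V(jw) = 0 otherwise.
hbar = 0.7  # illustrative maximal delay (an assumption)
w = np.linspace(1e-3, 50.0, 4000)
l = np.where(w * hbar <= np.pi, 2.0 * np.abs(np.sin(w * hbar / 2.0)), 2.0)
V = np.where(w * hbar <= np.pi,
             np.cos(w * hbar / 2.0) * np.exp(-1j * w * hbar / 2.0),
             0.0)

ok = True
for h in np.linspace(0.0, hbar, 25):
    dist = np.abs(V - np.exp(-1j * w * h))
    ok = ok and bool(np.all(dist <= l / 2.0 + 1e-12))
print(ok)
```

The worst case is attained at the arc endpoints h = 0 and h = hbar, where the distance equals exactly half the chord length, i.e. l(w)/2.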



Shifted covering: rational approximation

The frequency response \cos\frac{\omega \bar h}{2}\, e^{-j\omega \bar h / 2} can be quite accurately approximated by

V_1(s) = \frac{2}{\bar h s + 2}.

[Plot of the covering radii l_{\bar h}, l_{\bar h, V_1}, and l_{\bar h, V} versus frequency: the shifted coverings roughly halve the radius.]



Concluding remarks

The advantages of LK methods are in no proportion to their popularity.

The relations between LK and SGT are yet to be understood
(it looks like all we can show is that they result in the same LMIs; it would be of
great value to have a clear correspondence between the intermediate steps of each method).
