ECE 573 Power System Operations and Control

3. Review of Minimization Problem Solution Techniques
George Gross
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
UNCONSTRAINED MINIMIZATION

Consider the simple minimization problem

(UMP)    \min f(x), \quad x \in \mathbb{R}^n

where f(\cdot): \mathbb{R}^n \rightarrow \mathbb{R} is continuously differentiable

A necessary condition for a minimum at x^* is

\nabla f(x^*)^T = 0

x^* is determined by finding the root of \nabla f(\cdot) by solving the set of n equations in n unknowns
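As a simple illustration, consider the quadratic f(x) = x_1^2 + 2 x_2^2:

\nabla f(x) = \left[\, 2x_1 \;\; 4x_2 \,\right], \qquad \nabla f(x^*)^T = 0 \;\Longrightarrow\; x^* = (0, 0)^T

so the minimum is found by solving n = 2 equations in the n = 2 unknowns x_1, x_2.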
NOTATION

Consider a continuously differentiable function f: \mathbb{R}^n \rightarrow \mathbb{R}; for any x = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^n, we write

\nabla f(x) \triangleq \left[ \dfrac{\partial f}{\partial x_1}, \dfrac{\partial f}{\partial x_2}, \ldots, \dfrac{\partial f}{\partial x_n} \right]

\nabla f(x) is always a row vector and so [\nabla f(x)]^T is a column vector and is called the gradient of f
NOTATION

For the mapping g: \mathbb{R}^n \rightarrow \mathbb{R}^m,

\nabla g(x) \triangleq \begin{bmatrix} \nabla g_1(x) \\ \nabla g_2(x) \\ \vdots \\ \nabla g_m(x) \end{bmatrix}

\nabla g(x) is an m \times n matrix with each \nabla g_i(x) being a row vector in \mathbb{R}^n
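For example, for g: \mathbb{R}^2 \rightarrow \mathbb{R}^2 with g_1(x) = x_1^2 + x_2 and g_2(x) = x_1 x_2,

\nabla g(x) = \begin{bmatrix} \nabla g_1(x) \\ \nabla g_2(x) \end{bmatrix} = \begin{bmatrix} 2x_1 & 1 \\ x_2 & x_1 \end{bmatrix}

is a 2 \times 2 matrix whose rows are the row vectors \nabla g_1(x) and \nabla g_2(x).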
NOTATION

The Hessian is the second derivative of f:

H(x) \triangleq \dfrac{\partial^2 f}{\partial x^2} = \nabla \left[ \nabla f(x) \right]^T =
\begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\
\dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}
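For the quadratic f(x) = x_1^2 + 2 x_2^2 used earlier, for example,

H(x) = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}

which is constant in x and symmetric.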
NOTATION

We note that, by definition, H(x) is always a symmetric matrix since

\dfrac{\partial^2 f}{\partial x_i \partial x_j} = \dfrac{\partial^2 f}{\partial x_j \partial x_i}, \qquad i, j = 1, 2, \ldots, n

for a twice continuously differentiable function f: \mathbb{R}^n \rightarrow \mathbb{R}
THE GRADIENT DIRECTION

The Taylor series expansion for f obtains

f(x + \alpha \Delta x) = f(x) + \alpha \nabla_x f(x) \Delta x + \text{higher order terms in } \alpha \Delta x

For small \alpha, we neglect the h.o.t. so that

f(x + \alpha \Delta x) \approx f(x) + \alpha \nabla_x f(x) \Delta x

Suppose we set \Delta x = -\left[ \nabla_x f(x) \right]^T; then, for \alpha > 0,

f(x + \alpha \Delta x) \approx f(x) - \alpha \left\| \nabla_x f(x) \right\|^2
STEEPEST DESCENT

Thus -\left[ \nabla_x f(x) \right]^T is a direction of descent and is called the steepest descent direction; \alpha is called the step size

There is a large collection of optimization techniques for general nonlinear functions; the simplest is the steepest descent scheme
STEEPEST DESCENT ALGORITHM

Step 0: determine an initial point x^{(0)}; set \nu = 0, define the convergence tolerances \varepsilon_1 > 0, \varepsilon_2 > 0

Step 1: compute -\left[ \nabla_x f\!\left(x^{(\nu)}\right) \right]^T

Step 2: if \left\| \nabla_x f\!\left(x^{(\nu)}\right) \right\| < \varepsilon_1, stop; else evaluate \alpha

Step 3: set x^{(\nu+1)} = x^{(\nu)} - \alpha \left[ \nabla_x f\!\left(x^{(\nu)}\right) \right]^T

Step 4: if \left| f\!\left(x^{(\nu+1)}\right) - f\!\left(x^{(\nu)}\right) \right| < \varepsilon_2, stop and x^{(\nu+1)} is the solution; else, set \nu = \nu + 1 and go to Step 1
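A minimal Python sketch of Steps 0 - 4; the fixed step size, the tolerance values and the quadratic test function are illustrative choices (the algorithm above leaves the choice of \alpha open):

import numpy as np

def steepest_descent(f, grad_f, x0, alpha=0.1, eps1=1e-6, eps2=1e-10, max_iter=1000):
    """Steepest descent with a fixed step size alpha (Steps 0 - 4 above)."""
    x = np.asarray(x0, dtype=float)              # Step 0: initial point x^(0)
    for _ in range(max_iter):                    # v = 0, 1, 2, ...
        g = grad_f(x)                            # Step 1: [grad f(x^(v))]^T
        if np.linalg.norm(g) < eps1:             # Step 2: gradient small enough, stop
            return x
        x_new = x - alpha * g                    # Step 3: x^(v+1) = x^(v) - alpha * g
        if abs(f(x_new) - f(x)) < eps2:          # Step 4: objective no longer decreasing
            return x_new
        x = x_new                                # v <- v + 1
    return x

# Example: f(x) = x1^2 + 2 x2^2, whose minimizer is the origin
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
x_star = steepest_descent(f, grad_f, x0=[2.0, 1.0])   # converges to approximately (0, 0)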
STEEPEST DESCENT ITERATIONS

[Figure: successive iterates x^{(0)}, x^{(1)}, x^{(2)}, x^{(3)}, x^{(4)} in the (x_1, x_2) plane over contours of constant value of f(x); the contours shown correspond to f(x) = c_1, f(x) = c_2 < c_1, f(x) = c_3 < c_2 and f(x) = c_4 < c_3, with c_1 > c_2 > c_3 > c_4, and each step is taken in the direction of the negative of the gradient.]
SLOW CONVERGENCE OF THE STEEPEST DESCENT METHOD

[Figure: steepest descent iterates starting from x^{(0)}, illustrating the slow convergence of the method.]
NEWTON'S METHOD FOR MINIMIZATION

We use Newton's method to find the root of

\left[ \nabla f(x) \right]^T = 0

Note that in this case the Jacobian of \left[ \nabla f(x) \right]^T is the Hessian of f(x):

H(x) = \dfrac{\partial^2 f(x)}{\partial x^2}
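A corresponding Python sketch of the Newton iteration, which solves H(x) \Delta x = -[\nabla f(x)]^T at each step; the function names and the quadratic test function are again illustrative:

import numpy as np

def newton_minimize(grad_f, hess_f, x0, eps=1e-8, max_iter=50):
    """Newton's method applied to the root-finding problem [grad f(x)]^T = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)                            # gradient, treated as the residual
        if np.linalg.norm(g) < eps:
            return x
        dx = np.linalg.solve(hess_f(x), -g)      # the Hessian plays the role of the Jacobian
        x = x + dx
    return x

# Same quadratic as before: a single Newton step reaches the minimizer exactly
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
hess_f = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
x_star = newton_minimize(grad_f, hess_f, x0=[2.0, 1.0])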
EQUALITY-CONSTRAINED MINIMIZATION

Consider the problem

(ECMP)    \min f(x) \quad \text{s.t.} \quad g(x) = 0

with g(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^m continuously differentiable

We convert (ECMP) into the form of (UMP) by defining a multiplier \lambda \in \mathbb{R}^m and the Lagrangian

\mathcal{L}(x, \lambda) = f(x) + \lambda^T g(x)

where the term \lambda^T g(x) acts as the penalty for violating g(x) = 0
EQUALITY-CONSTRAINED MINIMIZATION

The necessary conditions for an optimum at (x^*, \lambda^*) are

\nabla_x \mathcal{L}(x^*, \lambda^*) = \nabla_x f(x^*) + (\lambda^*)^T \nabla_x g(x^*) = 0^T

\nabla_\lambda \mathcal{L}(x^*, \lambda^*) = \left[ g(x^*) \right]^T = 0^T

The presence of the m constraints g(x) = 0 leads to the augmentation of the dimension of the decision problem from n to n + m

A scheme for (UMP) solution may be deployed to solve for the (n + m)-dimensional optimal decision variables (x^*, \lambda^*)
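As a small worked example, take n = 2 and m = 1 with f(x) = x_1^2 + x_2^2 and g(x) = x_1 + x_2 - 1; the necessary conditions form n + m = 3 equations in the three unknowns (x_1, x_2, \lambda):

\nabla_x \mathcal{L} = \left[\, 2x_1 + \lambda \;\; 2x_2 + \lambda \,\right] = 0^T, \qquad g(x) = x_1 + x_2 - 1 = 0

whose solution is x^* = \left(\tfrac{1}{2}, \tfrac{1}{2}\right)^T and \lambda^* = -1.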
INEQUALITY-CONSTRAINED MINIMIZATION

Consider the minimization problem

(ICMP)    \min f(x) \quad \text{s.t.} \quad h(x) \leq 0

with h(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^r

One way to solve (ICMP) is by transforming (ICMP) into the form of (UMP)
INEQUALITY-CONSTRAINED MINIMIZATION

We introduce for each inequality h_i(x) \leq 0 the penalty function

p_i(x) \triangleq \begin{cases} 0 & \text{if } h_i(x) \leq 0 \\ \left[ h_i(x) \right]^2 & \text{if } h_i(x) > 0 \end{cases}

and add the term k_i\, p_i(x) to the objective function with k_i > 0
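The penalty function translates directly into code; a minimal sketch:

def penalty_i(h_i_value):
    """p_i(x) evaluated from h_i(x): zero when the inequality holds,
    quadratic in the violation when it does not."""
    return 0.0 if h_i_value <= 0.0 else h_i_value ** 2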
INEQUALITY-CONSTRAINED MINIMIZATION

The penalty coefficient k_i is chosen large enough so as to force x^* to satisfy each constraint

Thus we convert the problem to the (UMP) form

\min \left\{ f(x) + \sum_{i=1}^{r} k_i\, p_i(x) \right\}

We can use appropriate (UMP) solution schemes for determining x^*
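A sketch of this penalized (UMP) form on a small problem; the test functions, the single penalty coefficient k and the use of a general-purpose unconstrained solver are illustrative choices:

import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize f(x) = (x1 - 2)^2 + (x2 - 2)^2
# subject to the single inequality h(x) = x1 + x2 - 2 <= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: x[0] + x[1] - 2.0

def penalized(x, k=1000.0):
    """(UMP) form: original objective plus k * p(x) for the single inequality."""
    violation = max(h(x), 0.0)
    return f(x) + k * violation ** 2

result = minimize(penalized, x0=np.zeros(2))   # any (UMP) scheme may be used here
x_star = result.x                              # approaches (1, 1) as k grows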
GENERAL MINIMIZATION PROBLEM

We consider the constrained minimization problem

(CMP)    \min f(x, u) \quad \text{s.t.} \quad g(x, u) = 0, \quad h(x, u) \leq 0

with x \in \mathbb{R}^m and u \in \mathbb{R}^n, and with f: \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R} and g: \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^m continuously differentiable functions
GENERAL MINIMIZATION PROBLEM

The inequality h: \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^r, h(x, u) \leq 0, is treated by appending the penalty functions k_i\, p_i(x, u), i = 1, 2, \ldots, r, to the objective to construct

\tilde{f}(x, u) = f(x, u) + \sum_{i=1}^{r} k_i\, p_i(x, u)

Thus we focus on the resulting (ECMP)

\min \tilde{f}(x, u) \quad \text{s.t.} \quad g(x, u) = 0
GENERAL MINIMIZATION PROBLEM

We may view the constraint g(x, u) = 0 as the functional means by which x = x(u) is defined

We construct the Lagrangian

\mathcal{L}(x, u, \lambda) = f(x, u) + \lambda^T g(x, u)

where the term \lambda^T g(x, u) acts as the penalty for violating g(x, u) = 0
NECESSARY CONDITIONS OF OPTIMALITY

For minimizing the unconstrained \mathcal{L}, the necessary conditions for optimality are

\nabla_x \mathcal{L} = \nabla_x f + \lambda^T \nabla_x g = 0^T

\nabla_u \mathcal{L} = \nabla_u f + \lambda^T \nabla_u g = 0^T

\nabla_\lambda \mathcal{L} = \left[ g(x, u) \right]^T = 0^T

We consider a point (x, u) at which g(x, u) = 0, so that the total derivative

\dfrac{dg}{du} = \dfrac{\partial g}{\partial x} \dfrac{\partial x}{\partial u} + \dfrac{\partial g}{\partial u} = 0
NECESSARY CONDITIONS OF OPTIMALITY

We introduce the assumption that \partial g / \partial x is nonsingular in the region of interest; then

\dfrac{\partial x}{\partial u} = - \left( \dfrac{\partial g}{\partial x} \right)^{-1} \dfrac{\partial g}{\partial u}

expresses the sensitivity of x with respect to u, which is obtained from the implicit functional relationship g(x, u) = 0
NECESSARY CONDITIONS OF OPTIMALITY

From the necessary conditions

\nabla_x \mathcal{L} = \nabla_x f + \lambda^T \nabla_x g = 0^T

it follows that

\lambda^T = - \nabla_x f \left[ \nabla_x g \right]^{-1}

We use this information for constructing the reduced gradient as a descent direction
THE REDUCED GRADIENT

We call the reduced gradient the total derivative

\dfrac{df}{du} = \left[ \dfrac{df}{du_1}, \dfrac{df}{du_2}, \ldots, \dfrac{df}{du_n} \right]^T = \left[ \nabla_u f - \nabla_x f \left( \dfrac{\partial g}{\partial x} \right)^{-1} \dfrac{\partial g}{\partial u} \right]^T = \left[ \nabla_u f + \lambda^T \nabla_u g \right]^T

Geometrically, \dfrac{df}{du} is the projection of the total derivative of f on the u-subspace of the (x, u) space

We adapt the steepest descent to solve (CMP)
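A linear-algebra sketch of this computation for gradients and Jacobians evaluated at a given point (x, u); the function and argument names are illustrative, and the same helper is reused in the loss-minimization example below:

import numpy as np

def reduced_gradient(grad_f_x, grad_f_u, dg_dx, dg_du):
    """df/du = [ grad_u f - grad_x f (dg/dx)^-1 (dg/du) ]^T, returned as a row.

    grad_f_x : (m,)   row gradient of f with respect to x
    grad_f_u : (n,)   row gradient of f with respect to u
    dg_dx    : (m, m) Jacobian of g with respect to x (assumed nonsingular)
    dg_du    : (m, n) Jacobian of g with respect to u
    """
    # lambda^T = - grad_x f (dg/dx)^-1, computed via a linear solve
    lam = -np.linalg.solve(dg_dx.T, grad_f_x)
    # df/du = grad_u f + lambda^T (dg/du)
    return grad_f_u + lam @ dg_du

# Example with m = 2 states and n = 1 control (numbers are arbitrary)
dg_dx = np.array([[2.0, 0.0], [0.0, 1.0]])
dg_du = np.array([[1.0], [1.0]])
grad_f_x = np.array([1.0, 2.0])
grad_f_u = np.array([0.5])
df_du = reduced_gradient(grad_f_x, grad_f_u, dg_dx, dg_du)   # -> array([-2.0])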
EXAMPLE: MINIMIZE ACTIVE POWER LOSSES

[Figure: three-bus system. Bus 1 is the reference/swing bus, bus 2 is a P,V bus with P_2 = 1.7 p.u., and bus 3 is a P,Q bus with P_3 + j Q_3 = 2.0 - j 1.0 p.u.; the line admittances are y_13 = 4 - j 10 and y_23 = 4 - j 5.]
VARIABLES

Control variables:

u = \left[ V_1 \;\; V_2 \right]^T

State variables:

x = \left[ \theta_2 \;\; \theta_3 \;\; V_3 \right]^T
OBJECTIVE FUNCTION

f = \sum_{\text{both lines}} \left| I \right|^2 R

We notice that P_2, P_3 are fixed, so any changes in the losses will be reflected in changes in P_1; minimizing the losses is therefore equivalent to minimizing the slack-bus injection P_1(x, u)
EQUALITY CONSTRAINTS

g(x, u) = \begin{bmatrix} P_2(\theta, V) - P_2 \\ P_3(\theta, V) - P_3 \\ Q_3(\theta, V) - Q_3 \end{bmatrix} = 0

where P_2(\theta, V), P_3(\theta, V) and Q_3(\theta, V) are the AC power flow equations for buses 2 and 3, and P_2, P_3, Q_3 are the specified injections
REDUCED GRADIENT METHOD

\mathcal{L}(x, u, \lambda) = f(x, u) + \lambda^T g(x, u)

\dfrac{\partial g}{\partial u} = \begin{bmatrix} \dfrac{\partial P_2}{\partial V_1} & \dfrac{\partial P_2}{\partial V_2} \\ \dfrac{\partial P_3}{\partial V_1} & \dfrac{\partial P_3}{\partial V_2} \\ \dfrac{\partial Q_3}{\partial V_1} & \dfrac{\partial Q_3}{\partial V_2} \end{bmatrix}
\qquad
\dfrac{\partial g}{\partial x} = \begin{bmatrix} \dfrac{\partial P_2}{\partial \theta_2} & \dfrac{\partial P_2}{\partial \theta_3} & \dfrac{\partial P_2}{\partial V_3} \\ \dfrac{\partial P_3}{\partial \theta_2} & \dfrac{\partial P_3}{\partial \theta_3} & \dfrac{\partial P_3}{\partial V_3} \\ \dfrac{\partial Q_3}{\partial \theta_2} & \dfrac{\partial Q_3}{\partial \theta_3} & \dfrac{\partial Q_3}{\partial V_3} \end{bmatrix}
REDUCED GRADIENT METHOD

\dfrac{df}{du} = \left[ \nabla_u f - \nabla_x f \left( \dfrac{\partial g}{\partial x} \right)^{-1} \dfrac{\partial g}{\partial u} \right]^T
REDUCED GRADIENT METHOD

We pick an initial u^{(0)}; we solve the AC power flow and determine x^{(0)}; we select a step size \alpha and iterate

u^{(\nu + 1)} = u^{(\nu)} - \alpha \left. \dfrac{df}{du} \right|_{u^{(\nu)}}

until we find the solution
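The overall iteration can be organized as in the sketch below, which reuses the reduced_gradient helper from earlier; solve_power_flow, injection_jacobians and loss_gradients are hypothetical helpers standing in for a full AC power flow implementation of the three-bus system, and the step size and tolerance values are illustrative:

import numpy as np

def reduced_gradient_opf(u0, alpha=0.05, eps=1e-4, max_iter=100):
    """Reduced-gradient loop for the loss-minimization example.

    u = [V1, V2] are the controls, x = [theta2, theta3, V3] the states.
    solve_power_flow, injection_jacobians and loss_gradients are
    hypothetical helpers for the three-bus system and are not shown here.
    """
    u = np.asarray(u0, dtype=float)
    x = solve_power_flow(u)                        # enforce g(x, u) = 0 at u^(0)
    for _ in range(max_iter):
        dg_dx, dg_du = injection_jacobians(x, u)   # the 3x3 and 3x2 matrices above
        grad_f_x, grad_f_u = loss_gradients(x, u)  # gradients of f = P1(x, u)
        df_du = reduced_gradient(grad_f_x, grad_f_u, dg_dx, dg_du)
        if np.linalg.norm(df_du) < eps:            # reduced gradient ~ 0: optimum found
            break
        u = u - alpha * df_du                      # u^(v+1) = u^(v) - alpha * df/du
        x = solve_power_flow(u)                    # re-solve the power flow at u^(v+1)
    return u, x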
