
Quiescent Steady State (DC) Analysis: The Newton-Raphson Method

J. Roychowdhury, University of California at Berkeley

Slide 1

Solving the System's DAEs


\[ \frac{d}{dt}\,\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0 \]

DAEs: many types of solutions useful


- DC steady state: no time variations
- transient: circuit waveforms changing with time
- periodic steady state: changes periodic with time
- linear(ized), all sinusoidal waveforms: AC analysis
- nonlinear steady state: shooting, harmonic balance
- noise analysis: random/stochastic waveforms
- sensitivity analysis: effects of changes in circuit parameters

QSS: Quiescent Steady State (DC) Analysis


\[ \frac{d}{dt}\,\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0 \]

Assumption: nothing changes with time

- x, b are constant vectors; the d/dt term vanishes:

\[ \underbrace{\vec f(\vec x) + \vec b}_{\vec g(\vec x)} = \vec 0 \]

(a nonlinear system of equations)

Why do QSS?

- quiescent operation: first step in verifying functionality
- stepping stone to other analyses: AC, transient, noise, ...
- the problem: solving the equations numerically
- most common/useful technique: the Newton-Raphson method



The Newton-Raphson Method

Iterative numerical algorithm to solve \(\vec g(\vec x) = \vec 0\):

1. start with some guess for the solution
2. repeat:
   a. check if the current guess solves the equation
      i. if yes: done!
      ii. if no: do something to update/improve the guess

Newton-Raphson algorithm:

- start with initial guess \(\vec x^0\); i = 0
- repeat until convergence (or max #iterations):
  - compute the Jacobian matrix: \(J_i = \frac{d\vec g}{d\vec x}(\vec x^i)\)
  - solve for the update \(\Delta\vec x\): \(J_i\,\Delta\vec x = -\vec g(\vec x^i)\)
  - update the guess: \(\vec x^{i+1} = \vec x^i + \Delta\vec x\); i++
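In code, the iteration above can be sketched as follows (a minimal Python sketch; the tolerance defaults and the toy 1-D example are illustrative assumptions, not from the slides):

```python
import numpy as np

def newton_raphson(g, jacobian, x0, reltol=1e-6, abstol=1e-9, max_iters=50):
    """Solve g(x) = 0 by the Newton-Raphson iteration (vector form)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        J = jacobian(x)                  # J_i = dg/dx at the current guess
        dx = np.linalg.solve(J, -g(x))   # solve J_i * dx = -g(x_i)
        x = x + dx                       # x_{i+1} = x_i + dx
        if np.linalg.norm(dx) <= abstol + reltol * np.linalg.norm(x):
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# toy example: g(x) = x^2 - 2 has the root sqrt(2)
root = newton_raphson(lambda x: np.array([x[0]**2 - 2.0]),
                      lambda x: np.array([[2.0 * x[0]]]),
                      x0=[1.0])
```

The stopping test here is the reltol-abstol criterion on the update that is discussed later in these slides.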

Newton-Raphson Graphically

[figure: NR iteration on a scalar function g(x)]

Scalar case shown above. Key property: NR generalizes to the vector case.


Newton-Raphson (contd.)

Does it always work? No.

Conditions for NR to converge reliably


- g(x) must be smooth: continuous, differentiable
- starting guess close enough to the solution

practical NR: needs application-specific heuristics

NR: Convergence Rate

Key property of NR: quadratic convergence

- Suppose \(x^*\) is the exact solution of \(g(x) = 0\)
- At the \(i\)-th NR iteration, define the error \(\epsilon_i = x_i - x^*\)
- Meaning of quadratic convergence: \(|\epsilon_{i+1}| \le c\,|\epsilon_i|^2\) (where c is a constant)

NR's quadratic convergence properties:

- if \(g(x)\) is smooth (at least continuous 1st and 2nd derivatives), and \(g'(x^*) \ne 0\), and \(\|x_i - x^*\|\) is small enough, then:
- NR features quadratic convergence
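The error-squaring behavior is easy to observe numerically; a small sketch (the function g(x) = x^2 - 2 is an illustrative choice, not from the slides):

```python
# NR on g(x) = x^2 - 2, whose exact root is x* = sqrt(2).
# Quadratic convergence: each error is about c times the previous error squared.
x_star = 2.0 ** 0.5
x = 1.0                                   # initial guess
errors = []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)     # NR update: x - g(x)/g'(x)
    errors.append(abs(x - x_star))
# the ratio err_{i+1} / err_i^2 stays bounded (here it approaches 1/(2*sqrt(2)))
ratio = errors[3] / errors[2] ** 2
```

Each iteration roughly squares the error, which is why the number of accurate digits doubles per iteration on the next slide.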


Convergence Rate in Digits of Accuracy

[figure: digits of accuracy vs. iteration number: quadratic convergence (digits roughly double each iteration) vs. linear convergence (digits grow by a constant each iteration)]


NR: Convergence Strategies

reltol-abstol criterion on deltax:

- stop if norm(deltax) <= tolerance
- tolerance = abstol + reltol*|x|
- reltol ~ 1e-3 to 1e-6
- abstol ~ 1e-9 to 1e-12

more sophisticated criteria are possible:

- apply tolerances to individual vector entries (and AND the results)
- organize x into variable groups: e.g., voltages, currents (scale DAE equations/unknowns first)
- use the sequence of x values to estimate the convergence rate

residual convergence criterion:

- stop if \(\|\vec g(\vec x)\| \le \epsilon_{\text{residual}}\)

combinations of the deltax and residual criteria

ultimately: heuristics, tuned to the application
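An entrywise version of these criteria might look like the following sketch (the parameter defaults are illustrative assumptions):

```python
import numpy as np

def nr_converged(dx, x, gx, reltol=1e-6, abstol=1e-9, restol=1e-6):
    """Combined convergence test: per-entry reltol-abstol on the update dx
    (ANDed across entries) plus a norm test on the residual g(x)."""
    dx, x, gx = (np.asarray(v, dtype=float) for v in (dx, x, gx))
    tol = abstol + reltol * np.abs(x)        # per-entry tolerance
    deltax_ok = bool(np.all(np.abs(dx) <= tol))
    residual_ok = np.linalg.norm(gx) <= restol
    return deltax_ok and residual_ok
```

Variable grouping (e.g., voltages vs. currents) would apply different reltol/abstol pairs to different index sets of x.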



Newton-Raphson Update Step

Need to solve linear matrix equation

\[ J = \frac{d\vec g(\vec x)}{d\vec x} \;:\; \text{Jacobian matrix} \]

\[ J\,\Delta\vec x = -\vec g(\vec x) \;:\; \text{an } Ax = b \text{ problem} \]

Derivatives of vector functions: if

\[ \vec x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \qquad \vec g(\vec x) = \begin{bmatrix} g_1(x_1, \ldots, x_n) \\ \vdots \\ g_n(x_1, \ldots, x_n) \end{bmatrix}, \]

then

\[ \frac{d\vec g}{d\vec x} \triangleq \begin{bmatrix} \frac{dg_1}{dx_1} & \frac{dg_1}{dx_2} & \cdots & \frac{dg_1}{dx_n} \\ \frac{dg_2}{dx_1} & \frac{dg_2}{dx_2} & \cdots & \frac{dg_2}{dx_n} \\ \vdots & & \ddots & \vdots \\ \frac{dg_n}{dx_1} & \frac{dg_n}{dx_2} & \cdots & \frac{dg_n}{dx_n} \end{bmatrix} \]
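For small systems, the Jacobian can also be approximated column by column with finite differences; a sketch (circuit simulators typically assemble the Jacobian analytically instead, element by element):

```python
import numpy as np

def numerical_jacobian(g, x, eps=1e-7):
    """Finite-difference approximation of the Jacobian dg/dx:
    column j holds (g(x + eps*e_j) - g(x)) / eps."""
    x = np.asarray(x, dtype=float)
    g0 = np.asarray(g(x), dtype=float)
    J = np.zeros((g0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps               # perturb one unknown at a time
        J[:, j] = (np.asarray(g(xp)) - g0) / eps
    return J
```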

DAE Jacobian Matrices

Ckt DAE: \(\frac{d}{dt}\,\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0\)


Example: a diode circuit with source E(t), resistor R, capacitor C, inductor L; the unknowns are the node voltages e1, e2 and the branch currents iL, iE (reference directions: iE flows from node 2 through the source to node 1, iL from node 2 through the inductor to ground):

\[ \vec x(t) = \begin{bmatrix} e_1(t) \\ e_2(t) \\ i_L(t) \\ i_E(t) \end{bmatrix}, \quad \vec q(\vec x) = \begin{bmatrix} 0 \\ C\,e_2 \\ 0 \\ L\,i_L \end{bmatrix}, \quad \vec f(\vec x) = \begin{bmatrix} \text{diode}(e_1;\, I_S, V_t) - i_E \\ i_E + i_L + \frac{e_2}{R} \\ e_2 - e_1 \\ -e_2 \end{bmatrix}, \quad \vec b(t) = \begin{bmatrix} 0 \\ 0 \\ E(t) \\ 0 \end{bmatrix} \]

\[ J_q \triangleq \frac{d\vec q}{d\vec x} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & C & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & L & 0 \end{bmatrix}, \qquad J_f \triangleq \frac{d\vec f}{d\vec x} = \begin{bmatrix} \frac{d\,\text{diode}}{dv}(e_1) & 0 & 0 & -1 \\ 0 & \frac{1}{R} & 1 & 1 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix} \]
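For a circuit of this shape, f and its Jacobian Jf can be coded directly; a sketch with illustrative element values and a simple exponential diode model (the values, the diode model, and the sign conventions are assumptions for illustration, not from the slides):

```python
import numpy as np

# Illustrative element values and diode parameters (assumptions)
IS, Vt, R, C, L = 1e-12, 0.026, 1e3, 1e-9, 1e-6

def diode(v):
    return IS * (np.exp(v / Vt) - 1.0)     # simple exponential diode

def f(x):
    e1, e2, iL, iE = x
    return np.array([diode(e1) - iE,       # KCL at node 1
                     iE + iL + e2 / R,     # KCL at node 2
                     e2 - e1,              # source branch equation
                     -e2])                 # inductor branch equation

def Jf(x):
    e1 = x[0]
    d_diode = (IS / Vt) * np.exp(e1 / Vt)  # d diode/dv at e1
    return np.array([[d_diode, 0.0, 0.0, -1.0],
                     [0.0, 1.0 / R, 1.0, 1.0],
                     [-1.0, 1.0, 0.0, 0.0],
                     [0.0, -1.0, 0.0, 0.0]])
```

A finite-difference check of Jf against f is a common sanity test when hand-coding Jacobian entries.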

Newton-Raphson: Computation

Need to solve linear matrix equation

Ax=b: where much of the computation lies


\( J\,\Delta\vec x = -\vec g(\vec x) \): an \(Ax = b\) problem

- large circuits (many nodes): large DAE systems, large Jacobian matrices
- in general (for arbitrary matrices of size n), solving Ax = b requires:
  - O(n^2) memory
  - O(n^3) computation (using, e.g., Gaussian elimination)
- but for most circuit Jacobian matrices:
  - O(n) memory, ~O(n^1.4) computation
  - because circuit Jacobians are typically sparse

Dense vs Sparse Matrices

Sparse Jacobians: typically 3N-4N non-zeros

compare against N^2 for dense
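For intuition, the nodal matrix of a simple resistor chain (an illustrative construction, not a circuit from the slides) is tridiagonal, so its nonzero count grows like 3N rather than N^2:

```python
import numpy as np

def chain_jacobian(N, g=1.0):
    """Nodal conductance matrix of N nodes in a resistor chain:
    each node couples only to itself and its two neighbors."""
    J = np.zeros((N, N))
    for i in range(N):
        J[i, i] = 2.0 * g              # self term
        if i > 0:
            J[i, i - 1] = -g           # coupling to previous node
        if i < N - 1:
            J[i, i + 1] = -g           # coupling to next node
    return J

N = 100
nnz = np.count_nonzero(chain_jacobian(N))  # 3N - 2 nonzeros, vs N^2 = 10000 dense
```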

