
Basics of System Dynamics
Stability of a System

Presented by: Kethan Srivathsava Segu, Madhu Talluri
Guided by: Prof. Dr. S.K. Sharma
State space equation

Linear Dynamical System
A linear dynamical system is one whose rate of change of state variables is a linear
function of the state variables, control variables and exogenous variables:

    dx/dt = A x(t) + B u(t) + F v(t)

where x is the state vector, u the control vector and v the exogenous-input vector.
The trajectory also depends on the initial value of the state variables, x(0).

A system is called autonomous or free if it is free from exogenous variables and the
control variable vector is absent.
Solution of state equation:
Consider an autonomous system with state equation

    dx/dt = A x(t),   x(0) = x0

Applying the Laplace transform and its inverse and solving, we get

    x(t) = e^(At) x(0)

Taking a state transformation x = E z with E constant and invertible, we get

    dz/dt = (E^-1 A E) z

Assuming that A has distinct eigenvalues and choosing E as the matrix whose columns are
the corresponding eigenvectors, we have

    A E = E L,   where L = diag(l1, ..., ln) is the matrix of eigenvalues

This decouples the system into n independent first-order systems:

    dz_i/dt = l_i z_i,   so   z_i(t) = e^(l_i t) z_i(0)

In terms of the original state variables, the solution is

    x(t) = E z(t) = sum_i e^(l_i t) z_i(0) E_i
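The eigendecomposition solution above can be sketched numerically. The code below is a
minimal sketch assuming A has distinct eigenvalues; the matrix and initial state are
illustrative values, not from the text. It cross-checks the modal solution against a
truncated Taylor series for e^(At):

```python
import numpy as np

def eig_solution(A, x0, t):
    """State response x(t) = E diag(e^{l_i t}) E^{-1} x0,
    valid when A has distinct eigenvalues."""
    lam, E = np.linalg.eig(A)
    z0 = np.linalg.solve(E, x0)          # z(0) = E^{-1} x(0)
    return (E @ (np.exp(lam * t) * z0)).real

def series_solution(A, x0, t, terms=60):
    """Reference answer: truncated Taylor series of the matrix exponential."""
    acc = np.zeros_like(x0, dtype=float)
    term = x0.astype(float)
    for k in range(terms):
        acc += term
        term = (A @ term) * t / (k + 1)  # next term A^{k+1} t^{k+1} / (k+1)! x0
    return acc

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2 (stable)
x0 = np.array([1.0, 0.0])
print(eig_solution(A, x0, 1.0))           # close to [0.6004, -0.4651]
```

The exact solution here is x1(t) = 2e^(-t) - e^(-2t), so both methods should agree to
machine precision.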
Eigenvalues and stability:
Based on the location of the eigenvalues in the complex plane, the stability of a system
can be determined; the same locations also reveal many of its characteristics, such as
the degree of damping.
Routh Stability Criteria:

All roots of the characteristic equation must lie in the left half of the complex
plane; otherwise the system can never attain stability.
Routh's stability criterion tells us whether or not there are unstable roots in a
polynomial equation without actually solving for them.
This stability criterion applies to polynomials with only a finite number of terms.
When the criterion is applied to a control system, information about absolute stability
can be obtained directly from the coefficients of the characteristic equation.

The procedure in Routh's stability criterion is as follows:
1. Write the polynomial in s in the following form:

    a0 s^n + a1 s^(n-1) + ... + a(n-1) s + an = 0

where the coefficients are real quantities. We assume that an is nonzero; that is, any
zero root has been removed.
2. If any of the coefficients are zero or negative in the presence of at least one
positive coefficient, there is a root or roots that are imaginary or that have positive
real parts, and the system is not stable.
3. If all coefficients are positive, arrange the coefficients of the polynomial in rows
and columns according to the following pattern:

    s^n     | a0  a2  a4  a6 ...
    s^(n-1) | a1  a3  a5  a7 ...
    s^(n-2) | b1  b2  b3  ...
    s^(n-3) | c1  c2  c3  ...
    ...
    s^0     |

The process of forming rows continues until we run out of elements. (The total number of
rows is n + 1.) The coefficients b1, b2, b3, and so on, are evaluated as follows:

    b1 = (a1 a2 - a0 a3) / a1
    b2 = (a1 a4 - a0 a5) / a1
    b3 = (a1 a6 - a0 a7) / a1

and the c's are formed in the same way from the two rows directly above them.

The number of sign changes in the first column of the array equals the number of roots
with positive real parts, i.e., the number of poles in the right half plane.
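The tabulation above can be sketched in code. This is a minimal sketch for the
non-degenerate case only (it assumes no entry in the first column becomes zero); the
example polynomial is illustrative:

```python
def routh_sign_changes(coeffs):
    """Routh array for coeffs = [a0, a1, ..., an] of a0*s^n + ... + an.
    Returns the number of sign changes in the first column, which equals
    the number of roots with positive real parts (non-degenerate case)."""
    width = (len(coeffs) + 1) // 2
    pad = lambda r: list(r) + [0.0] * (width - len(r))
    rows = [pad(coeffs[0::2]), pad(coeffs[1::2])]   # s^n and s^(n-1) rows
    for _ in range(len(coeffs) - 2):                # build down to the s^0 row
        top, mid = rows[-2], rows[-1]
        # b1 = (a1*a2 - a0*a3)/a1 pattern, shifted down the table
        new = [(mid[0] * top[j + 1] - top[0] * mid[j + 1]) / mid[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    first_col = [r[0] for r in rows]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)

print(routh_sign_changes([1, 2, 3, 4, 5]))  # 2 -> two poles in the right half plane
```

For s^4 + 2s^3 + 3s^2 + 4s + 5 the first column is 1, 2, 1, -6, 5, giving two sign
changes and hence two unstable roots.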
Liapunov Stability Criterion:

It is a very versatile approach, applicable to both linear and nonlinear systems.
The added advantage of this method is that it can even be used to synthesize a control
policy.

It states that a system is asymptotically stable in the vicinity of an equilibrium point
at the origin if there exists a scalar function v(x) such that
a) v(x) has continuous partial derivatives in a region S around the origin,
b) v(x) > 0 for x not equal to 0,
c) v(0) = 0,
d) dv/dt < 0 for x not equal to 0, along trajectories of the system.
A function v(x) which satisfies the above conditions is a Liapunov function.
No general guide exists for constructing a Liapunov function for a given dynamic system,
but if such a function exists then the system is stable.
Consider an autonomous linear time-invariant system

    dx/dt = A x

with its only equilibrium state at the origin. Let v(x) be a Liapunov function in
quadratic form,

    v(x) = x^T P x

where P is a positive definite real symmetric matrix. Differentiating along trajectories,

    dv/dt = (dx/dt)^T P x + x^T P (dx/dt)
          = x^T (A^T P + P A) x
          = -x^T R x,   where R = -(A^T P + P A)

To satisfy the said conditions, R must be positive definite.

A convenient alternative is to first select a positive definite R and then proceed in
the reverse direction, solving A^T P + P A = -R for P; the existence of a positive
definite solution P proves that the system is stable according to the Liapunov
criterion.
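The reverse procedure can be sketched numerically. The sketch below (plain NumPy; the
matrix A is an arbitrary stable example, not from the text) turns A^T P + P A = -R into
an ordinary linear system using the vec/Kronecker identity, solves for P with R = I, and
checks that P is positive definite:

```python
import numpy as np

def solve_lyapunov(A, R):
    """Solve A^T P + P A = -R for P via column-major vec():
    (I kron A^T + A^T kron I) vec(P) = vec(-R)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -R.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)               # symmetrize against round-off

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # stable: eigenvalues -1, -2
R = np.eye(2)
P = solve_lyapunov(A, R)
print(np.linalg.eigvalsh(P))              # both positive -> stable per Liapunov
```

For this A and R = I the exact solution is P = [[1.25, 0.25], [0.25, 0.25]], whose
eigenvalues are both positive, confirming stability.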
Optimal Control:

The basic optimal control problem in continuous time is formulated as follows. Consider
a system defined on a time interval 0 <= t <= T,

    dx/dt = f(x(t), u(t))

with fixed initial condition x(0) = x0 and allowable controls u(t) drawn from a
constraint set.

The objective function to be maximized is

    J = psi(x(T)) + integral[0,T] L(x(t), u(t)) dt

In some cases the control variable has fixed bounds; for example, the fraction of profit
to be reinvested by a businessman must lie between 0 and 1. In the objective above, the
terminal term values the final state, while the integral represents things that are
accrued over time.

Modified objective and the Hamiltonian:

To characterize an optimal control, we trace out the effect of an arbitrary small change
in u(t) and require that it be non-improving for the objective. That is, we start with
the assumption that the control u(t) is optimal, so the small change considered gives a
non-positive deviation in the objective function J.

As this is difficult to show directly because of the mathematical complexities involved,
we adopt an indirect route by defining a modified objective

    J~ = psi(x(T)) + integral[0,T] { L(x,u) + lam(t)^T [ f(x,u) - dx/dt ] } dt

The term in brackets is zero along any trajectory of the system, so we can consider
maximizing J~ instead of J.
For convenience, define the Hamiltonian function

    H(lam, x, u) = L(x, u) + lam^T f(x, u)

Now, considering a small disturbance du(t) and integrating by parts yields

    dJ~ = [psi_x(x(T)) - lam(T)^T] dx(T) + lam(0)^T dx(0)
          + integral[0,T] { [H_x + (dlam/dt)^T] dx + H_u du } dt

By the multidimensional version of Taylor's theorem, the remaining terms are of smaller
order than the disturbance. Choosing lam(t) to satisfy the adjoint equation
dlam/dt = -H_x^T with final condition lam(T) = psi_x(x(T))^T eliminates the dx terms,
leaving the stationarity condition H_u = 0 along an optimal trajectory, which proves the
required result.
Example: Push cart problem

A problem involving a second-order system is that of accelerating a cart in such a way
as to maximize the total distance travelled in a given time, minus the total effort.
The system is

    d2x/dt2 = u(t),   x(0) = 0,   dx/dt(0) = 0

where x is the horizontal position and u is the applied force. The objective is

    J = x(T) - integral[0,T] 0.5 u(t)^2 dt

where the integral term represents a penalty for the effort applied.

Defining state variables x1 = x and x2 = dx/dt, we have

    dx1/dt = x2
    dx2/dt = u

with initial conditions x1(0) = x2(0) = 0.

The adjoint system equations are

    dlam1/dt = 0
    dlam2/dt = -lam1

The final conditions on the adjoint equations are

    lam1(T) = 1,   lam2(T) = 0

which implies

    lam1(t) = 1,   lam2(t) = T - t

The Hamiltonian is

    H = -0.5 u^2 + lam1 x2 + lam2 u

Setting the partial derivative dH/du = -u + lam2 to zero to maximize H, we get

    u(t) = lam2(t) = T - t

which concludes that the applied force should decrease linearly with time, reaching zero
at the final time.
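The solution can be checked numerically. In the sketch below, the horizon T = 2.0 and
the Euler step size are arbitrary choices; analytically, u(t) = T - t gives
x(T) = T^3/3, effort T^3/6, and hence J = T^3/6:

```python
def objective(u, T, n=20000):
    """J = x(T) - 0.5 * integral(u^2 dt) for dx1/dt = x2, dx2/dt = u,
    integrated with forward Euler from x1(0) = x2(0) = 0."""
    dt = T / n
    x1 = x2 = effort = 0.0
    for k in range(n):
        uk = u(k * dt)
        x1 += x2 * dt
        x2 += uk * dt
        effort += 0.5 * uk * uk * dt
    return x1 - effort

T = 2.0
J_opt = objective(lambda t: T - t, T)      # linearly decreasing force
J_flat = objective(lambda t: 1.0, T)       # constant-force alternative
print(round(J_opt, 3), round(J_flat, 3))   # about 1.333 vs 1.000
```

The optimal control attains J = T^3/6 (about 1.333 for T = 2), strictly better than any
perturbed or constant control.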
Insects as Optimizers

Many insects, such as wasps, live in colonies and have an annual life cycle.

Their population consists of two castes: workers and reproductives.

At the end of each summer all members of the colony die out except for the young queens,
who may start new colonies in early spring.

Let w(t) and q(t) denote, respectively, the worker and reproductive population levels in
the colony.

At any time t, 0 <= t <= T, in the season the colony can devote a fraction u(t) of its
effort to enlarging the work force and the remainder to producing reproductives. The
worker population then evolves as

    dw/dt = b u(t) w(t) - mu w(t)

where b is the rate at which devoted effort produces new workers and mu is the worker
mortality rate.
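A simulation sketch can illustrate why such colonies favor switching policies. Note the
assumptions here: the reproductive equation dq/dt = c*(1-u)*w does not appear in the
text above and is assumed for illustration, and all parameter values (b, mu, c, T, the
switch time) are arbitrary:

```python
def simulate(u_of_t, b=1.0, mu=0.1, c=1.0, T=10.0, w0=1.0, n=10000):
    """Euler-integrate an assumed colony model and return q(T).
    dw/dt = b*u*w - mu*w   (worker equation, as in the text)
    dq/dt = c*(1-u)*w      (assumed reproductive equation)"""
    dt = T / n
    w, q = w0, 0.0
    for k in range(n):
        u = u_of_t(k * dt)
        dw = (b * u - mu) * w
        dq = c * (1.0 - u) * w
        w += dw * dt
        q += dq * dt
    return q

# Bang-bang policy: all effort on workers until a switch time, then all on queens.
q_switch = simulate(lambda t: 1.0 if t < 8.0 else 0.0)
q_even = simulate(lambda t: 0.5)          # constant 50/50 split for comparison
print(q_switch > q_even)                   # switching beats the even split
```

Under these assumed dynamics, growing the work force exponentially first and only then
producing queens yields far more reproductives by season's end than a constant split.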
