Chapter 1
1.1 System theoretic approach
What is the 'system theoretic approach'?
To understand or to make 'things'.
Build bigger 'things' in terms of smaller 'things' using simple 'connections'.
Use only external properties of 'things' (the black box approach).
Note: 'appropriate' meanings can be assigned to things and connections.
Example:
Electrical Networks: We build by connecting (i.e., using KCL, KVL constraints) multiterminal devices. For these multiterminal devices we only look at external (v, i) characteristics such as

\[
\begin{bmatrix} i_1 \\ i_2 \\ i_3 \\ \vdots \\ i_{n-1} \\ i_n \end{bmatrix}
= G
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix}
\tag{1.1}
\]

The connections are 'simple': KCL and KVL.
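As a small sketch of how the 'simple' KCL connections produce a relation of the form (1.1), the following hypothetical example assembles the node-conductance matrix G of a three-node resistor network (the branch data are made up for illustration):

```python
import numpy as np

# Hypothetical 3-node resistor network.
# Each branch: (node_a, node_b, conductance in siemens)
branches = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 0.5)]

n = 3  # number of nodes
G = np.zeros((n, n))
for a, b, g in branches:
    # KCL "stamp": each branch adds g on the diagonal, -g off-diagonal
    G[a, a] += g
    G[b, b] += g
    G[a, b] -= g
    G[b, a] -= g

# i = G v : currents injected at the nodes for a given node-voltage vector
v = np.array([1.0, 0.0, 0.0])
i = G @ v
```

Note how the connection constraints enter only through the precise +1/−1 stamping pattern; the device values (the conductances) are the only imprecisely known quantities.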
Block Diagram based description of systems : The block diagram is shown in Fig. 1.1.
In this case the connections are again simple: Summer and Connection Points .
Summer: It has many input variables and a single output variable, e.g.

\[ z_1 = u - z_4 \tag{1.2} \]

Connection Point: It has one input and many outputs, e.g.

\[ z_2 = y \tag{1.3} \]
\[ z_2 = z_3 \tag{1.4} \]
Is this feasible? Can this always be done? What if the 'connections' are actually complicated? Example: suppose at the nodes we have e^{x_1} + e^{x_2} + sin(x_3) = 0; see Fig. 1.2. The solution lies in using new subsystems but keeping the connections simple, as shown in Fig. 1.3:

\[ \hat{x}_1 = e^{x_1} \tag{1.5} \]
\[ \hat{x}_2 = e^{x_2} \tag{1.6} \]
\[ \hat{x}_3 = \sin(x_3) \tag{1.7} \]
\[ \hat{x}_1 + \hat{x}_2 + \hat{x}_3 = 0 \tag{1.8} \]

Figure 1.1:
Figure 1.2:
Figure 1.3:
Connections need not introduce constraints in terms of equations. For example, when subsystems are communities of people, arrows may indicate some social interaction.

The kinds of systems we deal with invariably yield connection constraints which are simple: linear ones with 0, +1, −1 coefficients. Both the block diagram description and that of electrical networks are of this type. Note that the connection equations are not of the type used for device characteristics such as v = Ri, where the value of R is not precisely known; the +1, −1 coefficients of a connection are precise. This should be exploited during computations involving the system.

Advantages of keeping the connection constraints simple:
Good subsystems connected together yield good results.
In terms of the nature of the equations, the connection equations being simple would qualify as good, and the above statement would be a consequence of this fact.

Example: when the connections are electrical, i.e., KCL and KVL:
(Example from the audience) If the subsystems have continuous v–i characteristics, the system will have continuous v–i characteristics. (Inductors: current continuous; capacitors: voltage continuous.)
Subsystems linear ⇒ system linear. Connect subsystems having characteristics Ai + Bv = s and bring out ports; you will see characteristics Kv_p + Mi_p = S_p.
1.2 ‘System’
Let us now define a 'system' as a collection of ('input', 'output') pairs. Example: an electrical multiport with inputs u_1, ..., u_k and outputs y_1, ..., y_m measured by output voltage sensors and output current sensors (a typical element is shown in the figure):

\[ \bigl( (u_1, \ldots, u_k),\ (y_1, \ldots, y_m) \bigr) \in S \tag{1.9} \]

u_i(·), y_i(·) could be functions of t or i. Choose v as input and i as output (this choice is arbitrary).

Question: Does (v_1, i_1) ∈ S_R?

Check if v_1 = Ri_1; if yes, (v_1, i_1) ∈ S_R (otherwise it does not belong to S_R). Similarly, if v = E, then (v, i) ∈ S_E; and if v = L di/dt, then (v, i) ∈ S_L.
The system is linear iff

\[ (u^1, y^1),\ (u^2, y^2) \in S \;\Rightarrow\; (\alpha u^1 + \beta u^2,\ \alpha y^1 + \beta y^2) \in S \tag{1.10} \]

Figure 1.5:

Is the system defined by

\[ \frac{d^3 u}{dt^3} + 2\frac{d^2 u}{dt^2} + u = \frac{dy}{dt} \tag{1.11} \]

linear? d/dt is a linear operation, and a linear combination of linear operations yields a linear operation.
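The superposition condition above can be checked numerically. The following sketch discretizes the map from u to y implied by eq. 1.11 (with y(0) = 0), using finite differences and trapezoidal integration, and verifies that a linear combination of inputs yields the same linear combination of outputs; the test inputs are arbitrary choices:

```python
import numpy as np

t = np.linspace(0, 1, 2001)
dt = t[1] - t[0]

def respond(u):
    """Map input u(.) to output y(.) via dy/dt = u''' + 2u'' + u, y(0)=0.
    Derivatives and the integral are approximated numerically (a sketch)."""
    du1 = np.gradient(u, dt)
    du2 = np.gradient(du1, dt)
    du3 = np.gradient(du2, dt)
    rhs = du3 + 2 * du2 + u
    # trapezoidal cumulative integral of dy/dt, starting from y(0) = 0
    return np.concatenate(([0.0], np.cumsum((rhs[1:] + rhs[:-1]) / 2) * dt))

u1, u2 = np.sin(2 * np.pi * t), t**2
a, b = 2.0, -3.0
y_comb = respond(a * u1 + b * u2)           # response to combined input
y_sup  = a * respond(u1) + b * respond(u2)  # superposition of responses
err = np.max(np.abs(y_comb - y_sup))
```

Since differentiation and integration are linear operations, the two computations agree to floating-point precision.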
1.3 Causality
1. past inﬂuences the future
2. future does not inﬂuence the past
We will take the 2nd point as the key. Think of a special case where, for a given input function of time, there is a unique output function, and we build our system as follows:

For each input function u, we just pick some y and say that this is the output function corresponding to it. So, for each u there is a unique y in the 'system'.

Now, consider the input functions u^1 and u^2 and the corresponding output functions y^1 and y^2. The result could be as in Fig. 1.6 and Fig. 1.7.

Figure 1.6:

Can such a system be called causal?

The two inputs were the same up to t_1 and deviate after t_1. How did the system react to the deviation before t_1? This, one would not expect in a causal system; the above situation should be forbidden in a 'causal' system. We need, however, to state this in the context of the definition of a system as a "collection of input-output pairs".
Figure 1.7:

So, we say it like this: if two input functions u^1, u^2 are the same up to t_1 and then deviate, there must be at least one (u^1, y^1) pair and one (u^2, y^2) pair such that y^1, y^2 are the same up to t_1. Formally:

\[ u^1(t) = u^2(t),\ t \le t_1,\ \text{and}\ (u^1, y^1) \in S \;\Rightarrow\; \exists\, (u^2, y^2) \in S \ \text{such that}\ y^1(t) = y^2(t),\ t \le t_1. \]
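The formal condition above can be illustrated with two simple input-output maps; the filters below are made-up illustrations, not systems from the text. A running mean (output at k depends only on samples up to k) satisfies the condition, while a window that peeks into the future violates it:

```python
import numpy as np

t = np.arange(0, 100)
t1 = 50
u1 = np.sin(0.1 * t)
u2 = u1.copy()
u2[t > t1] += 1.0          # inputs agree up to t1, deviate afterwards

def causal(u):
    # output at k depends only on u[0..k] (a running mean)
    return np.array([u[:k + 1].mean() for k in range(len(u))])

def noncausal(u):
    # output at k peeks 5 samples into the future
    return np.array([u[max(0, k - 5):k + 6].mean() for k in range(len(u))])

y1c, y2c = causal(u1), causal(u2)
y1n, y2n = noncausal(u1), noncausal(u2)

causal_agrees    = np.allclose(y1c[t <= t1], y2c[t <= t1])
noncausal_agrees = np.allclose(y1n[t <= t1], y2n[t <= t1])
```

The causal filter produces identical outputs up to t_1; the non-causal filter reacts before t_1 to a deviation that happens only after t_1, exactly the forbidden behavior.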
1.4 Time Invariance
Time Invariance: No instant of time is special for the system; the time origin could be anywhere.

\[ (u, y) \in S \;\Rightarrow\; (u_T, y_T) \in S, \quad \text{where } x_T(t) = x(t - T) \tag{1.12} \]

Suppose (u, y) as in the adjacent figure is in S; then (u′, y′) is also in S.

Figure 1.8:
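Condition (1.12) can be probed numerically. The sketch below (an illustration with made-up coefficients) simulates dy/dt = a(t)y + u by forward Euler: a constant-coefficient system maps a shifted input to the correspondingly shifted output, while a time-varying one does not:

```python
import numpy as np

dt, N, T = 0.01, 1000, 200   # T = shift in samples

def simulate(a_of_t, u):
    """Euler simulation of dy/dt = a(t) y + u(t), y(0) = 0."""
    y = np.zeros(len(u))
    for k in range(len(u) - 1):
        y[k + 1] = y[k] + dt * (a_of_t(k * dt) * y[k] + u[k])
    return y

t = np.arange(N) * dt
u = np.where(t > 1.0, 1.0, 0.0)               # step input at t = 1
u_shift = np.where(t > 1.0 + T * dt, 1.0, 0.0)  # same step, delayed

# Time-invariant system: a(t) = -1; shifted input gives shifted output
y = simulate(lambda tt: -1.0, u)
y_shift = simulate(lambda tt: -1.0, u_shift)
ti_holds = np.allclose(y_shift[T:], y[:-T])

# Time-varying system: a(t) = -t; shifting the input does NOT shift the output
z = simulate(lambda tt: -tt, u)
z_shift = simulate(lambda tt: -tt, u_shift)
tv_holds = np.allclose(z_shift[T:], z[:-T])
```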
Note: Other notions such as system/subsystem reciprocity and system/subsystem passivity are also usually discussed.
Problem: Examine whether the systems stated below or shown in the figures are linear, time invariant, causal.

Figure 1.9:

a) y = u + t du/dt

b) y = sin(u) + du/dt

c) e^u + 2 du/dt + sin(u) = y

d) y = u for t > 10; y = 2u for t < 10

e) dy/dt + y + u = 0 for y > u; d²y/dt² + u = 0 for y < u

Figure 1.10:
1.5 State
Define the state as the extra information needed at time t, given the input up to t, to find y(t).

Problem: For the circuits of Fig. 1.9 and Fig. 1.10, taking the input to be (a) v, (b) i, what is a good choice of state?

In many important instances of systems, e.g., electrical circuits with the usual elements, the state can be taken to be a vector function of time. We can then write

\[ y = f(x, u, \dot{u}, \ldots) \]

(u, \dot{u}, ... known because the past of x is known). In the linear case we can often write

\[ \dot{x} = Ax + Bu + C\dot{u} \tag{1.13} \]
\[ y = Cx + Du + D_1\dot{u} \]

When the system is given with the state vector, we would say it is a "system with state", which would then be describable as y = f(x(t), u(·)|_{τ<t}).

A "system with state" is linear iff

\[ y^1(t) = f(x^1(t), u^1(\cdot)|_{\tau<t}), \qquad y^2(t) = f(x^2(t), u^2(\cdot)|_{\tau<t}) \tag{1.14} \]
\[ \Rightarrow\; \alpha y^1 + \beta y^2 = f(\alpha x^1 + \beta x^2,\ \alpha u^1(\cdot) + \beta u^2(\cdot)|_{\tau<t}) \]

Notice that the states are also linearly combined.
Causality
Is a system governed by a differential equation causal? Intuitively, what does causality mean? Common usage is as follows:

(i) past influences the future — 'causal'

(ii) future influences the past — 'noncausal'

(i) and (ii) are not negations of each other. In this sense, if a system is governed by

\[ \frac{dy}{dt} = ay(t) + bu(t) \tag{1.15} \]

it can be thought of as fitting into both or neither, depending on how strictly we use the word 'influence'. For, eq. 1.15 can also be rewritten as follows. Set τ = −t. Then the above equation reads

\[ \frac{dy}{d\tau} = -\frac{dy}{dt} = -ay(\tau) - bu(\tau) \tag{1.16} \]

So if in eq. 1.15 the past influences the future, so it does in eq. 1.16. But what is past in eq. 1.15 is future in eq. 1.16. So let us use, for the definition of causality, the key idea: 'can we predict the future input knowing the past behavior of the system?' Here past behavior can be taken to include both past input and past output.

Define z(t) = e^{−at} y(t). Then eq. 1.15 becomes

\[ e^{-at}\dot{y}(t) - a e^{-at} y(t) = e^{-at} u(t), \]

i.e., dz/dt = e^{−at} u(t) = û(t), say. Since û(t) and u(t) can be obtained from each other instantaneously, we need only look at the equation dz/dt = û(t). If z is known to be differentiable (left derivative = right derivative), then we can predict û(t_0), and hence u(t_0), by taking

\[ \lim_{t \to t_0,\, t < t_0} \frac{dz}{dt} = \hat{u}(t_0). \]

So in this case we appear to be predicting the input. So for practical situations, it is better to think of a differential equation as having a left derivative: whenever you have 'd/dt', take u(t_0) to mean

\[ \lim_{t \to t_0,\, t < t_0} u(t), \quad \text{i.e., } u(t_0-). \]

The above discussion is for 'ordinary' functions. Observe that once we interpret differential equations this way, eq. 1.15 and eq. 1.16 are no longer the same differential equation. To summarize, differential equations do not have a preferred direction of time beforehand; the direction is imposed by us additionally when we model physical systems, for instance by interpreting d/dt and u(t) as above. Suppose that the system is governed by
\[ \dot{x} = Ax + Bu \]
\[ y = Cx + Du \]

where x(t) is the state of the system at time t. If we set x(t_0−) = 0 and insist u(t) = 0, t < t_0, then it is easy to see that y(t) = 0, t < t_0. We define the state x(t) of the system as follows: given x(t_0) and u(t) for t ≥ t_0, we can uniquely determine the output y(t) for t > t_0.

For our purposes the following restricted definition of causality is adequate. Let S be a system with input variable u(·), output variable y(·) and state x(·). We say S is causal iff

u(t) = 0, t < t_0, and x(t_0−) = 0 always implies y(t) = 0 for t < t_0, for all t_0.

In particular, the impulse response of such a system is zero for t < 0.
A system governed by

\[ \dot{x} = Ax + Bu \]
\[ y = Cx + Du \]

where u, y, x are the input, output and state variables, is causal from the above point of view.
Exercises
1. x is an eigenvector of matrix A iff x ≠ 0 and Ax = λx for some λ. An eigenvector of A^T is called a row eigenvector of A.

Show that λ is an eigenvalue iff det(λI − A) = 0.

Real matrices may have complex eigenvalues. Show that they always occur in conjugate pairs; the corresponding eigenvectors are also conjugates.

Let A^* be the conjugate transpose of A. Show:

(a) (det(sI − A))^* = det((sI − A)^*) = det(s^*I − A^*).

(b) So the eigenvalues of A^* are the conjugates of those of A; the eigenvalues of A and A^* are the same if A is real.

(c) If r_λ is a row eigenvector of A corresponding to eigenvalue λ, and x_σ is an eigenvector corresponding to eigenvalue σ with λ ≠ σ, then r_λ x_σ = 0.

(d) If an n × n matrix A has n distinct eigenvalues, then the matrix P, whose columns are eigenvectors corresponding to the distinct eigenvalues, is invertible. Hence

\[ P^{-1}AP = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \]

where the λ_i are the distinct eigenvalues of A.

(e) If A is Hermitian, i.e., A^* = A, then P can be chosen so that P^*P = I. In this case

\[ P^*AP = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \]

and all the eigenvalues are real. Real symmetric matrices are Hermitian; in this case P can be taken to be real.

(f) p′ is an eigenvector of A corresponding to eigenvalue λ iff T^{-1}p′ is an eigenvector of T^{-1}AT corresponding to λ.

(g) If A is Hermitian, it is always possible to find P such that P^* = P^{-1} and

\[ P^*AP = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \]

even if the λ_i are not distinct.

(h) (A − λI)x = 0 has solution space V_λ, say. If A is Hermitian and λ ≠ σ, show that V_λ, V_σ are orthogonal. (Take x, y to be orthogonal if x^*y = 0.)

(i) For any vector space V, show that it is always possible to find a matrix whose rows form a basis of V and are mutually orthogonal. Let us call such a basis an orthogonal basis of V.

(j) Let P have as its rows the orthogonal bases of the spaces V_λ for each eigenvalue λ of the Hermitian matrix A. Show that P is a square matrix, that P P^* = I, and further that

\[ PAP^* = \mathrm{diag}(\lambda_1, \ldots, \lambda_n). \]
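Several of these claims can be checked numerically with NumPy; the matrices below are arbitrary test cases, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A real matrix with complex eigenvalues: they occur in conjugate pairs
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
eigvals = np.linalg.eigvals(A)
conjugate_pairs = np.allclose(np.sort_complex(eigvals),
                              np.sort_complex(eigvals.conj()))

# Distinct eigenvalues: P with eigenvector columns diagonalizes A
M = rng.standard_normal((4, 4))
lam, P = np.linalg.eig(M)
D = np.linalg.inv(P) @ M @ P
diagonalized = np.allclose(D, np.diag(lam))

# Hermitian case: eigenvalues are real and P can be taken unitary (P* P = I)
H = M + M.T           # real symmetric, hence Hermitian
w, Q = np.linalg.eigh(H)
unitary = np.allclose(Q.T @ Q, np.eye(4))
real_eigs = np.isrealobj(w)
```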
2. (a) For the system described by ẋ = Ax + Bu, prove:

Total solution = zero-input solution + zero-state solution.

Let x(t) satisfy ẋ = Ax + Bu in the interval [0, ∞). Here A is an n × n matrix and B is an n × m matrix. It can be shown that given x(0) and u(·) in [0, t), there exists a unique x(t) which satisfies the above differential equation. Let x^1 be the unique solution corresponding to (x^1(0), u^1(·)) and x^2 the one corresponding to (x^2(0), u^2(·)); then by direct substitution we can verify that αx^1 + βx^2 is the unique solution corresponding to (αx^1(0) + βx^2(0), αu^1(·) + βu^2(·)).

[Observe that when the input is [u_1(·), u_2(·), ..., u_m(·)]^T, the solution can be broken up further into those due to the following m 'elementary' situations:

\[
\begin{bmatrix} u_1(\cdot) \\ 0 \\ \vdots \\ 0 \end{bmatrix},\
\begin{bmatrix} 0 \\ u_2(\cdot) \\ \vdots \\ 0 \end{bmatrix},\
\ldots,\
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ u_m(\cdot) \end{bmatrix}
\]

i.e., one of the inputs active and the others zero. In other words, superposition of inputs is applicable for the zero-state solution.]

In particular, if

x^1(0) = 0, u^1(·) = u(·)
x^2(0) = x(0), u^2(·) = 0,

we have the solution x corresponding to (x(0), u(·)) = x^1 + x^2, where x^1 is called the zero-state solution and x^2 the zero-input solution. Observe that to prove the above result we have used (a) linearity and (b) uniqueness of the solution corresponding to an initial condition and input. This idea can therefore be generalized to other classes of differential equations which have the above 'unique solution' property.
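The decomposition can be verified numerically. The sketch below simulates ẋ = Ax + Bu by forward Euler (the particular A, B, and input are made-up illustrations) and checks that the total solution equals the sum of the zero-state and zero-input solutions:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
dt, N = 0.001, 2000

def simulate(x0, u):
    """Forward-Euler simulation of x' = Ax + Bu on [0, N*dt)."""
    x = np.array(x0, dtype=float).reshape(2)
    traj = [x.copy()]
    for k in range(N - 1):
        x = x + dt * (A @ x + B.flatten() * u[k])
        traj.append(x.copy())
    return np.array(traj)

u = np.ones(N)                       # unit step input
x0 = np.array([1.0, -1.0])

total      = simulate(x0, u)                 # (x(0), u(.))
zero_state = simulate([0.0, 0.0], u)         # (0, u(.))
zero_input = simulate(x0, np.zeros(N))       # (x(0), 0)
err = np.max(np.abs(total - (zero_state + zero_input)))
```

Because the Euler recursion is linear in (x(0), u), the decomposition holds to floating-point precision.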
(b) Solve the scalar differential equation ẋ = ax + bu by the method of integrating factors. [Consider the scalar differential equation

\[ \dot{x} = ax + bu. \]

Use the idea of an 'integrating factor' to compute the zero-state solution

\[ x_u(t) = \int_0^t e^{a(t-\tau)}\, b\, u(\tau)\, d\tau \]

and the zero-input solution

\[ x_s(t) = x(0)\, e^{at}. \]

How does the situation change if we had ẋ = ax + b_1u_1 + b_2u_2 + ... + b_mu_m?]
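For a step input the integrating-factor formulas have a closed form, which can be compared against a direct numerical integration; the coefficients below are arbitrary illustrations:

```python
import numpy as np

a, b = -2.0, 3.0
x0 = 1.5
t = np.linspace(0, 2, 2001)

# Step input u(t) = 1: the zero-state integral evaluates in closed form,
#   x_u(t) = ∫_0^t e^{a(t-τ)} b dτ = (b/a)(e^{at} - 1)
x_u = (b / a) * (np.exp(a * t) - 1.0)
x_s = x0 * np.exp(a * t)             # zero-input solution x(0) e^{at}
x_closed = x_u + x_s

# Compare against a fine forward-Euler integration of x' = ax + b
dt = t[1] - t[0]
x = np.empty_like(t)
x[0] = x0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (a * x[k] + b)
err = np.max(np.abs(x - x_closed))
```

The Euler trajectory matches the closed form up to the O(dt) discretization error.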
(c) If z = Tx, show that

\[ \dot{z} = TAT^{-1}z + TBu. \]

(d) Solve ẋ = Ax + Bu when A is diagonalized.

(e) If A is diagonalizable, there exists a matrix P such that

\[ P^{-1}AP = \mathrm{diag}(\lambda_1, \ldots, \lambda_n). \]

Show that the columns of P are eigenvectors of A and λ_1, λ_2, ..., λ_n the corresponding eigenvalues. Choose z = P^{-1}x. We then have

\[ \dot{z} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)\, z + \hat{B}u \]

where \hat{B} = P^{-1}B, i.e., we have n decoupled differential equations:

\[ \dot{z}_j = \lambda_j z_j + \hat{B}_{j1}u_1 + \hat{B}_{j2}u_2 + \cdots + \hat{B}_{jm}u_m, \qquad j = 1, \ldots, n. \]

The solution to this equation is

\[ z_j(t) = e^{\lambda_j t} z_j(0) + \int_0^t e^{\lambda_j(t-\tau)} \left[ \hat{B}_{j1}u_1(\tau) + \cdots + \hat{B}_{jm}u_m(\tau) \right] d\tau. \]
Observe that

\[
\begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix}
= \begin{bmatrix} p^1 & \cdots & p^n \end{bmatrix}
\begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix}
\begin{bmatrix} z_1(0) \\ \vdots \\ z_n(0) \end{bmatrix}
+ P \left[ \int_0^t e^{\lambda_j(t-\tau)}\, \hat{B}_j\, u(\tau)\, d\tau \right].
\]

Thus the zero-input solution has the form

\[ \sum_j \alpha_j\, p^j\, e^{\lambda_j t} \qquad (\alpha_j = z_j(0)). \]

If in the zero-input solution you wish to have only the term e^{λ_j t}, you must take the initial condition to be α_j p^j.

If you start with [x_1(0), x_2(0), ..., x_n(0)]^T, to find the zero-input solution first find the 'coordinates' α_1, α_2, ..., α_n of x(0) in terms of the 'eigenvector axes' [p^1 p^2 ... p^n]:

\[
\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix}
= P^{-1}
\begin{bmatrix} x_1(0) \\ x_2(0) \\ \vdots \\ x_n(0) \end{bmatrix}
\]

and then the solution is

\[ \sum_j \alpha_j\, p^j\, e^{\lambda_j t}. \]
(f) Show how to eliminate the 'transient solution' terms from the total solution due to the input and initial condition.

To understand the nature of the zero-state solution, assume there is only one input, u_1(t) = e^{σt}, with all other inputs zero. If λ_j ≠ σ, then

\[
\int_0^t e^{\lambda_j(t-\tau)} e^{\sigma\tau}\, d\tau
= e^{\lambda_j t} \left[ \frac{e^{(\sigma-\lambda_j)\tau}}{\sigma - \lambda_j} \right]_0^t
= \frac{1}{\sigma - \lambda_j}\left[ e^{\sigma t} - e^{\lambda_j t} \right].
\]

Thus the total solution in this case is

\[
\sum_j \alpha_j\, p^j\, e^{\lambda_j t}
+ P\, \mathrm{diag}\!\left( \frac{1}{\sigma - \lambda_j}\left( e^{\sigma t} - e^{\lambda_j t} \right) \right) \hat{B}^1
\]

where \hat{B}^1 is the first column of \hat{B}. (If λ_j = σ, then ∫_0^t e^{λ_j(t−τ)} e^{λ_j τ} dτ = t e^{λ_j t}.)

Observe that e^{λ_j t} always multiplies an eigenvector corresponding to λ_j. Hence if you excite the network with the input e^{σt}, and σ is not the same as any of the eigenvalues, the total solution has the form

\[ x(t) = \sum_j x^j e^{\lambda_j t} + x^{\sigma} e^{\sigma t} \]

where each x^j is an eigenvector corresponding to λ_j. Usually the second term is called the 'forced response' (i.e., the input forcing its nature on the output) and the first the 'transient response' (under usual circumstances it becomes insignificant after a short while). If σ = λ_j for one of the λ_j's, then the second term would be x^σ t e^{σt}.

If we wish the first term to reduce to zero (i.e., there is no transient solution), we proceed as follows. We first find the zero-state solution; say this is

\[ x_u(t) = \sum_j x_u^j\, e^{\lambda_j t} + x^{\sigma} e^{\sigma t} \]

where each x_u^j is an eigenvector. Now choose x(0) = −Σ_j x_u^j. The zero-input solution is then Σ_j α_j p^j e^{λ_j t}, where x(0) = Pα. But we know −x_u^j = β_j p^j for some values β_j, so

\[ x(0) = -\sum_j x_u^j = P\beta. \]

Since P is invertible, α = β, and so the zero-input solution is

\[ x_s(t) = \sum_j \left( -x_u^j \right) e^{\lambda_j t}. \]

It follows that the total solution is

\[ \sum_j x_u^j\, e^{\lambda_j t} + x^{\sigma} e^{\sigma t} - \sum_j x_u^j\, e^{\lambda_j t} = x^{\sigma} e^{\sigma t} \]

if x(0) = −Σ_j x_u^j.

By this means we can cancel the 'transient' solution and make the response look entirely like the input. This idea is of physical significance. Suppose the λ_j are all in the range −1 to −2 whereas σ = −10^6; the forced response would die out much faster than the transient response, if you have any transient response term at all.

Note that in general the eigenvalues will be complex. The input would usually have the form e^{σ_r t} cos(σ_{im} t + θ). This corresponds to a linear combination of the inputs e^{(σ_r + jσ_{im})t} and e^{(σ_r − jσ_{im})t}.
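The transient-cancellation recipe can be sketched numerically. For ẋ = Ax + Be^{σt} the forced solution is x_p(t) = (σI − A)^{-1}B e^{σt}; starting at x(0) = x_p(0) leaves no transient at all. The matrices below are made-up illustrations:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
B = np.array([1.0, 1.0])
sigma = -0.5                               # not an eigenvalue of A

# Forced ("particular") solution of x' = Ax + B e^{σt}:
#   x_p(t) = (σI - A)^{-1} B e^{σt}
# Choosing x(0) = x_p(0) cancels every transient term.
x_sigma = np.linalg.solve(sigma * np.eye(2) - A, B)

dt, N = 1e-4, 20000
x = x_sigma.copy()
max_dev = 0.0
for k in range(N):
    t = k * dt
    # deviation of the simulated state from the pure forced response
    max_dev = max(max_dev, np.max(np.abs(x - x_sigma * np.exp(sigma * t))))
    x = x + dt * (A @ x + B * np.exp(sigma * t))
```

Up to the Euler discretization error, the state tracks x_σ e^{σt} exactly; no e^{−t} or e^{−2t} terms appear.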
(g) Suppose a real function x(t) = Σ_i x^i(0) e^{λ_i t}. Show that the λ_i occur in conjugate pairs and so do the x^i(0).

i. Prove: if the λ_i are distinct, Σ_i α_i e^{λ_i t} = 0 implies each α_i = 0. (Use the fact that if Σ α_i e^{λ_i t} = 0, then \frac{d^k}{dt^k} Σ α_i e^{λ_i t} = Σ α_i λ_i^k e^{λ_i t} = 0 for all k, OR use induction.)

ii. If x(t) = Σ x^i(0) e^{λ_i t} = Σ z^i(0) e^{λ_i t}, then x^i(0) = z^i(0) for each i.

iii. x^*(t) = Σ (x^i(0))^* e^{λ_i^* t}. But x^*(t) = x(t), so

\[ \sum_i (x^i(0))^*\, e^{\lambda_i^* t} - \sum_i x^i(0)\, e^{\lambda_i t} = 0. \]

From (i) above this can happen only if the coefficient of each distinct exponential is zero. If λ_i^* is not among the λ_j, this cannot happen. We conclude that λ_i^* = λ_j and (x^i(0))^* = x^j(0) for some j.
3. For an R, L, C circuit with positive values of R, L, C, show that all the eigenvalues have nonpositive real parts.

When the circuit has R, L, C and static devices and the input is zero, the system starts with initial energy

\[ \tfrac{1}{2}\, v_C^T(0)\, C\, v_C(0) + \tfrac{1}{2}\, i_L^T(0)\, L\, i_L(0). \]

The power absorbed by these devices at any instant is

\[
\frac{d}{dt}\left[ \tfrac{1}{2}\, v_C^T(t)\, C\, v_C(t) + \tfrac{1}{2}\, i_L^T(t)\, L\, i_L(t) \right]
= v_C^T(t)\, i_C(t) + i_L^T(t)\, v_L(t).
\]

This must (by Tellegen's theorem) be the negative of the power absorbed by the remaining (static) devices. If these latter are positive resistors, this power is nonnegative. Hence the stored energy of the inductors and capacitors cannot increase indefinitely as time increases. Therefore, for each λ_k, e^{λ_k t} cannot increase indefinitely. Hence λ_k = λ_{kr} + jλ_{k,im} with λ_{kr} ≤ 0.
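As a quick numerical check of this claim, the sketch below builds the state matrix of a series RLC loop (states v_C, i_L; this particular topology is an illustration, not from the text) and confirms that the eigenvalue real parts stay nonpositive for a range of positive element values:

```python
import numpy as np

def rlc_eigs(R, L, C):
    """State matrix of a series RLC loop (states v_C, i_L), zero input:
       dv_C/dt = i_L / C,   di_L/dt = (-v_C - R i_L) / L."""
    A = np.array([[0.0, 1.0 / C],
                  [-1.0 / L, -R / L]])
    return np.linalg.eigvals(A)

# For any positive R, L, C the eigenvalues have nonpositive real parts
all_stable = all(
    np.all(rlc_eigs(R, L, C).real <= 1e-12)
    for R in (0.1, 1.0, 100.0)
    for L in (1e-3, 1.0)
    for C in (1e-6, 1.0)
)
```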
4. Nature of the eigenvalues (RL, RC, LC, RLC cases). Recall the method we used for writing state equations for RLC circuits (see Figure 1.11). We assumed that capacitors + voltage sources do not form loops and that inductors + current sources do not form cutsets. (If they do, the above technique has to be generalized to one where three multiports are connected together.)

Observe that in the state equations

\[ \dot{x} = Ax + Bu \]

the matrix A can be obtained by setting u = 0 and then writing the state equations. So, as is to be expected, the eigenvalues and eigenvectors of the network correspond to setting the voltage sources and current sources to zero. Let us begin by considering the situation where all capacitors and inductors have value 1 in the appropriate units and all sources are zero.

Figure 1.11: Static multiport

Observe that

\[ -i_C = i_1, \quad -i_L = i_2, \quad v_C = v_1, \quad v_L = v_2. \]

For the resistive multiport we can write

\[
\begin{bmatrix} i_1 \\ v_2 \end{bmatrix}
= \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}
\begin{bmatrix} v_1 \\ i_2 \end{bmatrix}.
\]

In this case (unit capacitors and inductors, so i_C = dv_C/dt and v_L = di_L/dt) we get

\[
\begin{bmatrix} \dfrac{dv_C}{dt} \\[2mm] \dfrac{di_L}{dt} \end{bmatrix}
= \begin{bmatrix} -H_{11} & H_{12} \\ H_{21} & -H_{22} \end{bmatrix}
\begin{bmatrix} v_C \\ i_L \end{bmatrix}.
\]
Let us examine the hybrid matrix above. If i_L were set to zero (inductors open-circuited), we get

\[ i_1 = H_{11} v_1, \]

i.e., H_{11} is the conductance matrix of a purely resistive multiport; it is symmetric by Tellegen's theorem and positive semidefinite, since the power absorbed by the multiport is always nonnegative, which means

\[ v_1^T i_1 = v_1^T H_{11} v_1 \ge 0 \quad \text{for all } v_1. \]

By a similar argument, H_{22} is also symmetric positive semidefinite.

To understand the relationship between H_{12} and H_{21}, let us use Tellegen's generalized reciprocity theorem, which says

\[ (v_1')^T i_1'' + (v_2')^T i_2'' = (v_1'')^T i_1' + (v_2'')^T i_2' \]

for arbitrary 'primed' and 'double-primed' excitation conditions of a resistive multiport. We rewrite this as

\[ (i_1'')^T v_1' + (v_2')^T i_2'' = (i_1')^T v_1'' + (v_2'')^T i_2'. \]

Substituting the multiport relationship and simplifying, we get

\[ (i_2'')^T (H_{12} + H_{21}^T)\, v_1' = (i_2')^T (H_{12} + H_{21}^T)\, v_1''. \]

For arbitrary values of i_2', v_1'' (including zero),

\[ (i_2'')^T (H_{12} + H_{21}^T)\, v_1' = 0. \]

For arbitrary values of i_2'', v_1', this forces

\[ H_{12} = -H_{21}^T. \]
Problems

Laplace Transform Problems

Prove the results indicated in Figure 1.12.

Figure 1.12:

Laplace Transform Problems II

Prove:

1. \( \mathcal{L}\!\left[ \frac{d^n}{dt^n} f(t) \right] = s^n F(s) - s^{n-1} f(0_-) - s^{n-2} f'(0_-) - \cdots - f^{(n-1)}(0_-) \) (where f^{(k)} denotes the k-th derivative).

2. \( \mathcal{L}\!\left[ \int_0^t f(\tau)\, d\tau \right] = \frac{F(s)}{s} \)

3. \( \mathcal{L}\!\left[ \int_{-\infty}^t f(\tau)\, d\tau \right] = \frac{F(s)}{s} + \frac{f^{(-1)}(0_-)}{s} \)

4. \( \mathcal{L}[t f(t)] = -\frac{d}{ds} F(s) \)

5. \( \mathcal{L}[f(t-T)\, 1(t-T)] = e^{-sT} F(s) \)

6. \( \mathcal{L}\!\left[ f\!\left(\frac{t}{a}\right) \right] = a F(as), \quad a > 0 \)

7. \( \lim_{t \to 0_+} f(t) = \lim_{s \to \infty} s F(s) \), provided the limit exists.

8. \( \lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s) \), provided sF(s) is analytic on the jω axis and in the right half of the s-plane.
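Individual properties from the list can be spot-checked symbolically with SymPy; the test function f(t) = e^{-2t} is an arbitrary choice:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t)
F = sp.laplace_transform(f, t, s, noconds=True)       # F(s) = 1/(s+2)

# Property 4: L[t f(t)] = -dF/ds
lhs = sp.laplace_transform(t * f, t, s, noconds=True)
prop4_ok = sp.simplify(lhs + sp.diff(F, s)) == 0

# Property 2: L[∫_0^t f(τ) dτ] = F(s)/s
tau = sp.symbols('tau', positive=True)
lhs2 = sp.laplace_transform(sp.integrate(f.subs(t, tau), (tau, 0, t)),
                            t, s, noconds=True)
prop2_ok = sp.simplify(lhs2 - F / s) == 0
```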