
Chapter 1

1.1 System theoretic approach

What is the 'system theoretic approach'?

To understand or to make ’things’.

Build bigger ’things’ in terms of smaller ’things’ using simple ’connections’.

Use only external properties of 'things' (black-box approach).

Note: an 'appropriate' meaning can be assigned to things and connections.

Example:

Electrical Networks: We build by connecting (i.e., using KCL, KVL constraints) multiterminal devices. For these multiterminal devices we only look at external (v, i) characteristics such as

$$\begin{bmatrix} i_1 \\ i_2 \\ i_3 \\ \vdots \\ i_{n-1} \\ i_n \end{bmatrix} = G \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix} \qquad (1.1)$$

The connections are 'simple': KCL and KVL.

Block Diagram based description of systems : The block diagram is shown in Fig. 1.1.

In this case the connections are again simple: Summer and Connection Points .

Summer: It has many input variables and a single output variable.

$$z_1 = u - z_4 \qquad (1.2)$$

Connection Point : It has one input and many outputs.

$$z_2 = y \qquad (1.3)$$
$$z_2 = z_3 \qquad (1.4)$$

Is this feasible? Can this always be done? What if the 'connections' are actually complicated? Example: suppose at a node we have $e^{x_1} + e^{x_2} + \sin(x_3) = 0$ (see Fig. 1.2). The solution lies in using new subsystems but keeping the connections simple, as shown in Fig. 1.3:

$$\hat{x}_1 = e^{x_1} \qquad (1.5)$$
$$\hat{x}_2 = e^{x_2} \qquad (1.6)$$
$$\hat{x}_3 = \sin(x_3) \qquad (1.7)$$
$$\hat{x}_1 + \hat{x}_2 + \hat{x}_3 = 0 \qquad (1.8)$$

Connections need not introduce constraints in terms of equations; for example, when subsystems are communities of people and arrows indicate some social interaction.

The kinds of systems we deal with invariably yield connection constraints which are simple: linear ones with 0, +1, -1 coefficients. Both the block diagram description and that of electrical networks are of this type. Note that these constraints are not of the type used for a device characteristic such as $v = Ri$, where the value of $R$ is not precisely known; the +1, -1 of the connection are precise. This again should be exploited during computation involving the system.

Advantages of keeping the connection constraints simple:

Good subsystems connected together yield good results.

In terms of the nature of equations, the connection equations being simple would qualify as 'good', and the above statement would be a consequence of this fact.

Example:

When connections are electrical, i.e., KCL and KVL: (example from the audience) if the subsystems have continuous v-i characteristics, the system will have continuous v-i characteristics (inductors: current continuous; capacitors: voltage continuous).

Subsystems linear $\Rightarrow$ system linear: connect subsystems having characteristics $A i + B v = s$ and bring out ports; at the ports you will see characteristics $K v_p + M i_p = s_p$.
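As a minimal sketch of this fact (sympy assumed; the two series resistors and the variable names are an arbitrary illustration, not a circuit from these notes), eliminating the internal variables from two linear subsystems connected by KCL/KVL constraints leaves a relation that is again linear in the port variables:

```python
# Sketch (sympy assumed): two linear subsystems connected by simple KCL/KVL
# constraints; eliminating internal variables leaves a linear port relation.
import sympy as sp

v1, v2, i1, i2, vp, ip = sp.symbols('v1 v2 i1 i2 v_p i_p')
R1, R2 = sp.symbols('R1 R2', positive=True)

constraints = [
    sp.Eq(v1, R1 * i1),      # subsystem 1 characteristic (A i + B v = s form)
    sp.Eq(v2, R2 * i2),      # subsystem 2 characteristic
    sp.Eq(i1, ip),           # KCL: series connection carries the port current
    sp.Eq(i2, ip),
    sp.Eq(vp, v1 + v2),      # KVL around the port
]

# Eliminate the internal variables v1, v2, i1, i2.
sol = sp.solve(constraints, [v1, v2, i1, i2, vp], dict=True)[0]
print(sp.Eq(vp, sp.simplify(sol[vp])))   # v_p = (R1 + R2)*i_p : linear in (v_p, i_p)
```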

1.2 ‘System’

Let us now define a 'system' as a collection of ('input', 'output') pairs. Example: Electrical Multiport.

Figure 1.4: Electrical multiport with inputs $u_1, \ldots, u_k$ (a typical input element shown with its +/- terminals), output voltage sensors $y_1, \ldots, y_m$, and output current sensors.

$$\big((u_1, \ldots, u_k),\, (y_1, \ldots, y_m)\big) \in S \qquad (1.9)$$

$u_i(\cdot)$, $y_i(\cdot)$ could be functions of $t$. Choose $v$ as input and $i$ as output (this choice is arbitrary). Question: does $(v_1, i_1) \in S_R$? Check if $v_1 = R i_1$; if yes, $(v_1, i_1) \in S_R$ (otherwise it does not belong to $S_R$). Similarly, if $v = E$, then $(v, i) \in S_E$, and if $v = L\,\frac{di}{dt}$, then $(v, i) \in S_L$.

Figure 1.5: Resistor ($S_R$), voltage source $v = E$ ($S_E$), and inductor ($S_L$), each viewed as a collection of $(v, i)$ pairs.

A system is linear iff
$$(u_1, y_1), (u_2, y_2) \in S \;\Rightarrow\; (\alpha u_1 + \beta u_2,\; \alpha y_1 + \beta y_2) \in S \qquad (1.10)$$

Is the system defined by
$$\frac{d^3 u}{dt^3} + 2\frac{d^2 u}{dt^2} + u = \frac{dy}{dt} \qquad (1.11)$$
linear? $\frac{d}{dt}$ is a linear operation, and a linear combination of linear operations yields a linear operation.
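A rough numerical check of condition (1.10) for the system (1.11) (numpy, scipy and sympy assumed; the inputs $\sin t$, $t^2$ and the coefficients $\alpha$, $\beta$ are arbitrary choices). Taking $y(0) = 0$, the map $u \mapsto y$ is obtained by integrating $u''' + 2u'' + u$, and superposition holds to within numerical error:

```python
# Sketch: verify superposition for (1.11), y' = u''' + 2 u'' + u with y(0) = 0.
import numpy as np
import sympy as sp
from scipy.integrate import cumulative_trapezoid

t = sp.symbols('t')

def response(u_expr, tgrid):
    """y(t) = integral_0^t (u''' + 2 u'' + u) dtau, evaluated on tgrid."""
    rhs = sp.diff(u_expr, t, 3) + 2 * sp.diff(u_expr, t, 2) + u_expr
    rhs_num = sp.lambdify(t, rhs, 'numpy')(tgrid)
    return cumulative_trapezoid(rhs_num, tgrid, initial=0.0)

tgrid = np.linspace(0, 5, 2001)
u1, u2 = sp.sin(t), t**2
alpha, beta = 2.0, -0.5

y_comb = response(alpha * u1 + beta * u2, tgrid)                  # response to combined input
y_sup = alpha * response(u1, tgrid) + beta * response(u2, tgrid)  # combination of responses

print(np.max(np.abs(y_comb - y_sup)))    # ~0: superposition holds
```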

1.3 Causality

1. past influences the future

2. future does not influence the past

We will take the 2nd point as the key. Think of a special case where, for a given input function of time, there is a unique output function, and we build our system as follows.

For each input function u , we just pick some y and say that this is the output function corresponding to it. So, for each u there is a unique y in the ‘system’.

Now, consider the input functions u 1 and u 2 and corresponding output functions y 1 and y 2

The result could be as in Fig 1.6 and Fig 1.7

Figure 1.6: Input $u_1$ (with level $K$ and time $t_1$ marked) and the corresponding output $y_1$.

Can such a system be called causal?

The two inputs were the same up to $t_1$ and deviate after $t_1$. How did the system react to the deviation before $t_1$? This one would not expect in a causal system; the above situation should be forbidden in a 'causal' system. We need, however, to state this in the context of the definition of a system as a "collection of input-output pairs".


Figure 1.7: Input $u_2$ (with level $K$ and time $t_1$ marked) and the corresponding output $y_2$.

So, we say:

If two input functions $u_1$, $u_2$ are the same up to $t_1$ and then deviate, there must be at least one $(u_1, y_1)$ and one $(u_2, y_2)$ pair such that $y_1$, $y_2$ are the same up to $t_1$.

Formally: $u_1(t) = u_2(t)$ for $t \le t_1$, and $(u_1, y_1) \in S$
$\Rightarrow$ $\exists\, (u_2, y_2) \in S$ such that $y_1(t) = y_2(t)$ for $t \le t_1$.
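A small discrete-time analogue of this test (numpy assumed; both 'systems' below are hypothetical examples). Each system maps an input sequence to a unique output, so the condition reduces to checking whether inputs that agree up to index $t_1$ produce outputs that agree up to $t_1$:

```python
# Sketch: causal vs non-causal discrete-time maps under the test above.
import numpy as np

def causal_sys(u):
    # y[n] = u[n] + u[n-1]: depends only on present and past inputs
    return u + np.concatenate(([0.0], u[:-1]))

def noncausal_sys(u):
    # y[n] = u[n+1]: depends on a future input
    return np.concatenate((u[1:], [0.0]))

t1 = 50
u1 = np.zeros(100)
u2 = np.zeros(100)
u2[t1 + 1:] = 1.0        # u1 and u2 agree up to (and including) index t1

for name, sys in [("causal", causal_sys), ("non-causal", noncausal_sys)]:
    same = np.allclose(sys(u1)[:t1 + 1], sys(u2)[:t1 + 1])
    print(name, "outputs agree up to t1:", same)   # True for causal, False otherwise
```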

1.4 Time Invariance

Time Invariance: No instance of time is special for the system. The time origin could be anywhere.

$$(u, y) \in S \;\Rightarrow\; (u_T, y_T) \in S, \quad \text{where } x_T(t) = x(t - T) \qquad (1.12)$$

Suppose $(u, y)$, as in the adjacent figure, is in $S$; then $(u', y')$ is also in $S$.

Figure 1.8: A pair $(u, y)$ in $S$ and its time-shifted version $(u', y')$.
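A rough numerical probe of (1.12) (numpy and scipy assumed; the two systems and the pulse input are arbitrary illustrations): shift the input by $T$ and check whether the output shifts by $T$ as well. The first system, $\dot y = -y + u$ with zero initial state, passes; the second, $y = u + t\,\frac{du}{dt}$, fails because of the explicit $t$:

```python
# Sketch: probing time invariance by comparing shifted-input and shifted-output.
import numpy as np
from scipy.signal import lsim

tgrid = np.linspace(0, 20, 4001)
dt = tgrid[1] - tgrid[0]
T = 5.0
k = int(round(T / dt))                          # shift expressed in samples

g = np.exp(-(tgrid - 3.0) ** 2)                 # smooth pulse
gT = np.exp(-(tgrid - 3.0 - T) ** 2)            # same pulse, delayed by T

# (1) dy/dt = -y + u, zero initial state: time invariant
sys = ([[-1.0]], [[1.0]], [[1.0]], [[0.0]])     # state-space (A, B, C, D)
_, y, _ = lsim(sys, g, tgrid)
_, yT, _ = lsim(sys, gT, tgrid)
print("LTI system:      ", np.max(np.abs(yT[k:] - y[:-k])))   # small

# (2) y = u + t * du/dt: time varying
def tv(u):
    return u + tgrid * np.gradient(u, tgrid)
y, yT = tv(g), tv(gT)
print("time-varying one:", np.max(np.abs(yT[k:] - y[:-k])))   # clearly nonzero
```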

Note: Other notions such as System/Subsystem Reciprocity and System/Subsystem Passivity are also usually discussed.

Problem: Examine whether the systems stated below or shown in the figure are linear, time invariant, causal.


Figure 1.9:

a) $y = u + t\,\dfrac{du}{dt}$

b) $y = \sin(u) + \dfrac{du}{dt}$

c) $e^u + 2\dfrac{du}{dt} + \sin(u) = y$

d) $y = u$ for $t > 10$; $\;y = 2u$ for $t < 10$

e) $\dfrac{dy}{dt} + y + u = 0$ for $y > u$; $\;\dfrac{d^2 y}{dt^2} + u = 0$ for $y < u$

Problem: Write down all the constraints of the systems in the figure below using a matrix operator.
E.g.: $v_1 + L\dfrac{di_1}{dt} + 2 i_2 = v_3$ can be written as
$$\begin{bmatrix} 1 & 0 & -1 & L\frac{d}{dt} & 2 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 & i_1 & i_2 \end{bmatrix}^T = 0.$$

Figure 1.10: Circuit with voltage source $V$, current source $J$, resistors $R_1$, $R_2$, $R_3$, inductor $L$, and capacitor $C$.

1.5 State

Define the state as the extra information needed at time $t$, given the input up to $t$, to find $y(t)$.

Problem: For the circuits of Fig. 1.9 and Fig. 1.10, taking the input to be (a) v, (b) i, what is a good choice of state? In many important instances of systems, e.g., electrical circuits with the usual elements, the state can be taken to be a vector function of time. We can then write

$$Y = f(x, u, \dot{u}, \ldots) \qquad (u, \dot{u}, \ldots \text{ known because the past of } x \text{ is known}).$$

In the linear case we can often write
$$\dot{x} = Ax + Bu + C\dot{u} \qquad (1.13)$$
$$y = Cx + Du + D_1\dot{u}$$

When the system is given with the state vector, we would say it is a "System with State", which would then be describable as $y(t) = f(x(t), u(\cdot)|_{\tau < t})$.

A “system with state” is linear iff

$$y_1(t) = f(x_1(t), u_1(\cdot)|_{\tau < t})$$
$$y_2(t) = f(x_2(t), u_2(\cdot)|_{\tau < t}) \qquad (1.14)$$
implies
$$\alpha y_1 + \beta y_2 = f(\alpha x_1 + \beta x_2, (\alpha u_1(\cdot) + \beta u_2(\cdot))|_{\tau < t}).$$


Notice that the states are also linearly combined.
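A brief numerical sketch of (1.14) (numpy and scipy assumed; the matrices, initial states and inputs are arbitrary): simulate $\dot x = Ax + Bu$, $y = Cx + Du$ for two (initial state, input) pairs and for their linear combination, combining the states as well:

```python
# Sketch (scipy/numpy assumed): for x' = Ax + Bu, y = Cx + Du, the response to
# (alpha*x1(0) + beta*x2(0), alpha*u1 + beta*u2) equals alpha*y1 + beta*y2,
# i.e. states and inputs are combined together, as in (1.14).
import numpy as np
from scipy.signal import lsim

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
sys = (A, B, C, D)

t = np.linspace(0, 10, 2001)
u1, x10 = np.sin(t), np.array([1.0, 0.0])
u2, x20 = np.where(t > 1, 1.0, 0.0), np.array([0.0, -1.0])
alpha, beta = 2.0, -0.5

_, y1, _ = lsim(sys, u1, t, X0=x10)
_, y2, _ = lsim(sys, u2, t, X0=x20)
_, yc, _ = lsim(sys, alpha * u1 + beta * u2, t, X0=alpha * x10 + beta * x20)

print(np.max(np.abs(yc - (alpha * y1 + beta * y2))))   # small: superposition holds
```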

Causality

Is a system governed by a differential equation causal? Intuitively, what does causality mean? Common usage is as follows:

( i ) past influences future - ‘causal’

( ii ) future influences past - ‘non-causal’

( i ) and ( ii ) are not negations of each other. In this sense, if a system is governed by

$$\frac{dy}{dt} = a y(t) + b u(t)$$
it can be thought of as fitting into both or neither, depending on how strictly we use the word 'influence'. For
$$\frac{dy}{dt} = +a y(t) + b u(t) \qquad (1.15)$$
can also be rewritten as follows. Set $\tau = -t$. Then the above equation reads
$$\frac{dy}{d\tau} = -a y(\tau) - b u(\tau) \qquad (1.16)$$

So if in eq. 1.15 past influences future, so does it in eq. 1.16. But what is past in eq. 1.15 is future in eq. 1.16. So let us use for the definition of causality the key idea "can we predict the future input knowing the past behavior of the system?" Here past behavior can be thought to include both past input and output. Define $z(t) = e^{-at} y(t)$. So eq. 1.15 becomes

$$e^{-at} \dot{y}(t) - a e^{-at} y(t) = e^{-at} b u(t),$$
i.e., $\frac{dz}{dt} = e^{-at} b u(t) = \hat{u}(t)$, say. Since $\hat{u}(t)$ and $u(t)$ can be obtained from each other instantaneously, we need only look at the equation $\frac{dz}{dt} = \hat{u}(t)$. If $z$ is known to be differentiable (left derivative = right derivative), then we can predict $\hat{u}(t_0)$, and hence $u(t_0)$, by taking
$$\lim_{t \to t_0,\ t < t_0} \frac{dz}{dt} = \hat{u}(t_0).$$
So in this case we appear to be predicting the input.

So for practical situations, it is better to think of a differential equation as having a left derivative whenever you have '$\frac{d}{dt}$', and $u(t_0)$ to mean $\lim_{t \to t_0,\ t < t_0} u(t)$, i.e., $u(t_0^-)$.

The above discussion is for 'ordinary' functions. Observe that once we interpret differential equations this way, eq. 1.15 and eq. 1.16 do not govern the same differential equation. To summarize, differential equations do not have a preferred direction of time beforehand. The direction is imposed by us additionally when we model physical systems by, for instance, interpreting $\frac{d}{dt}$ and $u(t)$ as above. Suppose that the system is governed by
$$\dot{x} = Ax + Bu$$
$$y = Cx + Du$$

where $x(t)$ is the state of the system at time $t$. If we set $x(t_0) = 0$ and insist $u(t) = 0$ for $t < t_0$, then it is easy to see that $y(t) = 0$ for $t < t_0$. We define the state $x(t)$ of the system as follows: given $x(t_0)$ and $u(t)$ for $t \ge t_0$, we can uniquely determine the output $y(t)$ for $t > t_0$. For our purposes the following restricted definition of causality is adequate.


Let $S$ be a system with input variable $u(\cdot)$, output variable $y(\cdot)$, and state $x(\cdot)$. We say $S$ is causal iff $u(t) = 0$ for $t < t_0$ and $x(t_0) = 0$ always implies $y(t) = 0$ for $t < t_0$, for all $t_0$.

In particular, the impulse response for such a system is zero for t < 0.

A system governed by
$$\dot{x} = Ax + Bu, \qquad y = Cx + Du,$$
where $u$, $y$, $x$ are the input, output and state variables, is causal in the above point of view.

Exercises

1. $x$ is an eigenvector of matrix $A$ iff $x \neq 0$ and $Ax = \lambda x$ for some $\lambda$. An eigenvector of $A^T$ is called a row eigenvector of $A$.

(a) Show that $\lambda$ is an eigenvalue iff $\det(\lambda I - A) = 0$.

(b) Real matrices may have complex eigenvalues. Show that they always occur in pairs of conjugates. The corresponding eigenvectors also are conjugates.

(c) Let $A^*$ be the conjugate transpose of $A$. Show
$$(\det(sI - A))^* = \det((sI - A)^*) = \det(\bar{s} I - A^*).$$
So the eigenvalues of $A$ and $A^*$ are the same if $A$ is real.

(d) If $r_\lambda$ is a row eigenvector of $A$ corresponding to eigenvalue $\lambda$, and $x_\sigma$ an eigenvector corresponding to eigenvalue $\sigma$, with $\lambda \neq \sigma$, then
$$r_\lambda^T x_\sigma = 0.$$

(e) If an $n \times n$ matrix $A$ has $n$ distinct eigenvalues, then the matrix $P$, whose columns are eigenvectors corresponding to the distinct eigenvalues, is invertible. Hence,
$$P^{-1} A P = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
where the $\lambda_i$ are the distinct eigenvalues of $A$.

(f) If $A$ is Hermitian, i.e., $A^* = A$, then $P^* P = I$. In this case,
$$P^* A P = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
and all the eigenvalues are real. Real symmetric matrices are Hermitian; in this case $P$ can be taken to be real.

(g) $p$ is an eigenvector of $A$ corresponding to eigenvalue $\lambda$ iff $T^{-1} p$ is an eigenvector of $T^{-1} A T$ corresponding to $\lambda$.

If $A$ is Hermitian it is always possible to find $P$ such that $P^* = P^{-1}$ and
$$P^* A P = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
even if the $\lambda_i$ are not distinct.

(h) $(A - \lambda_1 I) x = 0$ has solution space $V_{\lambda_1}$, say. If $\lambda \neq \sigma$, show $V_\lambda$, $V_\sigma$ are orthogonal. (Take $x$, $y$ to be orthogonal if $x^* y = 0$.)

(i) For any vector space $V$, show that it is always possible to find a matrix whose rows form a basis of $V$ and are mutually orthogonal. Let us call such a basis an orthogonal basis of $V$.

(j) Let $P$ have as its rows the orthogonal bases of the vector spaces $V_\lambda$ for each eigenvalue $\lambda$ of the Hermitian matrix $A$. Show that $P$ is a square matrix, that $P^* P = I$, and further that
$$P^* A P = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
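Some of these claims can be spot-checked numerically (numpy assumed; the matrices below are arbitrary examples): eigenvalues are the roots of $\det(\lambda I - A) = 0$ as in (a), a matrix with distinct eigenvalues is diagonalized by its eigenvector matrix as in (e), and a Hermitian matrix has real eigenvalues and a unitary eigenvector matrix as in (f):

```python
# Numerical spot-checks for exercise 1 (numpy assumed; matrices are arbitrary).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])          # real, distinct eigenvalues

# (a) eigenvalues are roots of the characteristic polynomial det(lambda*I - A)
lam, P = np.linalg.eig(A)
print(np.sort(lam), np.sort(np.roots(np.poly(A))))   # same roots

# (e) with distinct eigenvalues, P (columns = eigenvectors) is invertible and
#     P^{-1} A P is diagonal with the eigenvalues on the diagonal
print(np.round(np.linalg.inv(P) @ A @ P, 10))

# (f) Hermitian case: eigenvalues real, eigenvector matrix unitary (Q* Q = I)
H = np.array([[2.0, 1.0 - 1.0j], [1.0 + 1.0j, 3.0]])
mu, Q = np.linalg.eigh(H)
print(mu)                                          # real eigenvalues
print(np.allclose(Q.conj().T @ Q, np.eye(2)))      # True: Q* Q = I
print(np.round(Q.conj().T @ H @ Q, 10))            # diagonal with mu on the diagonal
```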

2. (a) For the system described by $\dot{x} = Ax + Bu$ prove:

Total solution = Zero input solution + Zero state solution.

Let $x(t)$ satisfy $\dot{x} = Ax + Bu$ in the interval $[0, \infty)$. Here, $A$ is an $n \times n$ matrix and $B$ is an $n \times m$ matrix. It can be shown that given $x(0)$ and $u(\cdot)$ in $[0, t)$, there exists a unique $x(t)$ which satisfies the above differential equation. Let $x_1$ be the unique solution corresponding to $(x_1(0), u_1(\cdot))$ and $x_2$ be the one corresponding to $(x_2(0), u_2(\cdot))$; then by direct substitution we can verify that $\alpha x_1 + \beta x_2$ is the unique solution corresponding to $(\alpha x_1(0) + \beta x_2(0),\ \alpha u_1(\cdot) + \beta u_2(\cdot))$.

[Observe that when the input is $[u_1(\cdot), u_2(\cdot), \ldots, u_m(\cdot)]^T$, the solution can be broken up further into that due to the following $m$ 'elementary' situations:
$$\begin{bmatrix} u_1(\cdot) \\ 0 \\ \vdots \\ 0 \end{bmatrix},\ \begin{bmatrix} 0 \\ u_2(\cdot) \\ \vdots \\ 0 \end{bmatrix},\ \ldots,\ \begin{bmatrix} 0 \\ 0 \\ \vdots \\ u_m(\cdot) \end{bmatrix},$$
i.e., one of the inputs active and the others zero. In other words, superposition of inputs is applicable for the zero state solution.] In particular, if
$$x_1(0) = 0, \qquad u_1(\cdot) = u(\cdot),$$
$$x_2(0) = x(0), \qquad u_2(\cdot) = 0,$$
we have the solution $x$ corresponding to $(x(0), u(\cdot))$ equal to $x_1 + x_2$, where $x_1$ is called the zero state solution and $x_2$ the zero input solution. Observe that to prove the above result we have used (a) linearity and (b) uniqueness of the solution corresponding to an initial condition and input. This idea can therefore be generalized to apply to other classes of differential equations which have the above 'unique solution' property.

(b) Solve the scalar differential equation $\dot{x} = ax + bu$ by the method of integrating factors. [Consider the scalar differential equation
$$\dot{x} = ax + bu.$$
Use the idea of the 'integrating factor' to compute the zero state solution to be
$$x_u(t) = \int_0^t e^{a(t - \tau)}\, b\, u(\tau)\, d\tau$$
and the zero input solution to be
$$x_s(t) = x(0)\, e^{at}.$$
How does the situation change if we had $\dot{x} = ax + b_1 u_1 + b_2 u_2 + \cdots + b_m u_m$?]
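A numerical sketch of parts (a) and (b) (numpy and scipy assumed; $a$, $b$, $u$ and $x(0)$ are arbitrary choices): the solution returned by a generic ODE solver should match the zero input part $x(0)e^{at}$ plus the zero state part $\int_0^t e^{a(t-\tau)} b\, u(\tau)\, d\tau$:

```python
# Sketch: total solution of x' = a x + b u equals zero-input + zero-state parts.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

a, b, x0 = -1.5, 2.0, 3.0
u = lambda t: np.sin(t)

tgrid = np.linspace(0.0, 10.0, 501)
total = solve_ivp(lambda t, x: a * x + b * u(t), (0.0, 10.0), [x0],
                  t_eval=tgrid, rtol=1e-9, atol=1e-12).y[0]

def zero_state(t, n=2000):                        # int_0^t e^{a(t-tau)} b u(tau) dtau
    tau = np.linspace(0.0, t, n)
    return trapezoid(np.exp(a * (t - tau)) * b * u(tau), tau)

formula = x0 * np.exp(a * tgrid) + np.array([zero_state(t) for t in tgrid])
print(np.max(np.abs(total - formula)))            # small: the two agree
```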

(c) If $z = Tx$, show that
$$\dot{z} = TAT^{-1} z + TBu.$$

(d) Solve $\dot{x} = Ax + Bu$ when $A$ is diagonalizable.

(e) If $A$ is diagonalizable there exists a matrix $P$ such that
$$P^{-1} A P = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$
Show that the columns of $P$ are eigenvectors of $A$ and $\lambda_1, \lambda_2, \ldots, \lambda_n$ the corresponding eigenvalues. Choose $z = P^{-1} x$. We then have
$$\dot{z} = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} z + \tilde{B} u,$$
where $\tilde{B} = P^{-1} B$, i.e., we have $n$ decoupled differential equations. We have
$$\dot{z}_j = \lambda_j z_j + \tilde{B}_{j1} u_1 + \tilde{B}_{j2} u_2 + \cdots + \tilde{B}_{jm} u_m, \qquad j = 1, \ldots, n.$$
The solution to this equation is
$$z_j(t) = e^{\lambda_j t} z_j(0) + \int_0^t e^{\lambda_j (t - \tau)} \left[ \tilde{B}_{j1} u_1(\tau) + \cdots + \tilde{B}_{jm} u_m(\tau) \right] d\tau.$$

Observe that
$$\begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix} = \begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix} \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix} \begin{bmatrix} z_1(0) \\ \vdots \\ z_n(0) \end{bmatrix} + P \left[ \int_0^t e^{\lambda_j (t - \tau)}\, \tilde{B}_j\, u(\tau)\, d\tau \right].$$
Thus the zero input solution has the form
$$\sum_j \alpha_j p_j e^{\lambda_j t} \qquad (\alpha_j = z_j(0)).$$
If in the zero input solution you wish to have only the term $e^{\lambda_j t}$, you must take the initial condition to be $\alpha_j p_j$.

If you start with $[x_1(0), x_2(0), \ldots, x_n(0)]^T$, to find the zero input solution: first find the 'coordinates' $\alpha_1, \alpha_2, \ldots, \alpha_n$ of $x(0)$ in terms of the 'eigenvector axes' $[p_1\ p_2\ \cdots\ p_n]$, i.e.
$$\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = P^{-1} \begin{bmatrix} x_1(0) \\ x_2(0) \\ \vdots \\ x_n(0) \end{bmatrix},$$
and then the solution is
$$\sum_j \alpha_j p_j e^{\lambda_j t}.$$

(f) Show how to eliminate the ’transient solution’ terms from the total solution due to input and initial condition.

To understand the nature of the zero state solution, assume there is only one input, $u_1(t) = e^{\sigma t}$, with all other inputs being zero. If $\lambda_j \neq \sigma$, then
$$\int_0^t e^{\lambda_j (t - \tau)} e^{\sigma \tau}\, d\tau = e^{\lambda_j t}\, \frac{e^{(\sigma - \lambda_j)\tau}}{(\sigma - \lambda_j)}\Big|_0^t = \frac{1}{(\sigma - \lambda_j)}\left[ e^{\sigma t} - e^{\lambda_j t} \right].$$
Thus the total solution in this case is
$$\sum_j \alpha_j p_j e^{\lambda_j t} + P\left[\operatorname{diag}\left( \frac{1}{(\sigma - \lambda_j)}\left( e^{\sigma t} - e^{\lambda_j t} \right) \right)\right] \tilde{B}_1,$$
where $\tilde{B}_1$ is the first column of $\tilde{B}$. (If $\lambda_j = \sigma$, then $\int_0^t e^{\lambda_j (t - \tau)} e^{\lambda_j \tau}\, d\tau = t\, e^{\lambda_j t}$.)

Observe that $e^{\lambda_j t}$ always multiplies an eigenvector corresponding to $\lambda_j$. Hence if you excite the network with the input $e^{\sigma t}$ and $\sigma$ is not the same as any of the eigenvalues, the total solution is of the form
$$x(t) = \sum_j x_j e^{\lambda_j t} + x_\sigma e^{\sigma t},$$
where the $x_j$ would be eigenvectors corresponding to $\lambda_j$. Usually, the second term is called the 'forced response' (i.e., the input forcing its nature on the output) and the first the 'transient' response (under usual circumstances it becomes insignificant after a short while). If $\sigma = \lambda_j$ for one of the $\lambda_j$'s, then the second term would be $x_\sigma t e^{\sigma t}$. If we wish that the first term reduce to zero (i.e., there is no transient solution), we proceed as follows. We first find the zero state solution. Say this is
$$x_u(t) = \sum_j x_j^u\, e^{\lambda_j t} + x_\sigma e^{\sigma t},$$

where each $x_j^u$ is an eigenvector. Now choose $x(0) = -\sum_j x_j^u$. Then the zero input solution would be obtained from
$$x(0) = P \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}$$
as $\sum_j \alpha_j p_j e^{\lambda_j t}$. But we know $x_j^u = \beta_j p_j$ for some values $\beta_j$. So,
$$x(0) = P \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} = -P \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix}.$$
Since $P$ is invertible,
$$\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} = -\begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix}.$$
So,
$$x_s(t) = -\sum_j x_j^u\, e^{\lambda_j t},$$
and taking $x_s(t)$ this way, it follows that the total solution is
$$-\sum_j x_j^u e^{\lambda_j t} + \sum_j x_j^u e^{\lambda_j t} + x_\sigma e^{\sigma t} = x_\sigma e^{\sigma t}$$
if $x(0) = -\sum_j x_j^u$.

By this means we can cancel the 'transient' solution and make the response look entirely like the input. This idea is of some physical significance: suppose the $\lambda_j$ are all in the range $-1$ to $-2$ whereas $\sigma = -10^6$; the forced response would die out much faster than the transient response, if you have any transient response term at all. Note that in general the eigenvalues will be complex. The input would usually have the form $e^{\sigma_r t} \cos(\sigma_{im} t + \theta)$. This corresponds to a linear combination of the inputs $e^{(\sigma_r + j\sigma_{im}) t}$ and $e^{(\sigma_r - j\sigma_{im}) t}$.
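A compact numerical restatement of this procedure (numpy and scipy assumed; $A$, $b$ and $\sigma$ are arbitrary, with $\sigma$ not an eigenvalue of $A$). For the input $u = e^{\sigma t}$ the zero state solution is $(\sigma I - A)^{-1}(e^{\sigma t} I - e^{At})\, b$, so choosing $x(0) = (\sigma I - A)^{-1} b$ cancels every $e^{\lambda_j t}$ term and leaves the pure forced response $(\sigma I - A)^{-1} b\, e^{\sigma t}$:

```python
# Sketch: cancelling the transient by the choice of x(0).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])          # eigenvalues -1, -2
b = np.array([0.0, 1.0])
sigma = -0.3                                      # not an eigenvalue of A

x0 = np.linalg.solve(sigma * np.eye(2) - A, b)    # (sigma I - A)^{-1} b

tgrid = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(lambda t, x: A @ x + b * np.exp(sigma * t), (0.0, 10.0), x0,
                t_eval=tgrid, rtol=1e-10, atol=1e-12)

forced = np.outer(x0, np.exp(sigma * tgrid))      # x0 * e^{sigma t}
print(np.max(np.abs(sol.y - forced)))             # small: no transient terms remain
```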

(g) Suppose a real function $x(t) = \sum_i x_i(0) e^{\lambda_i t}$. Show that the $\lambda_i$ occur in conjugate pairs and so do the $x_i(0)$.

i. Prove: if the $\lambda_i$ are distinct, $\sum_i \alpha_i e^{\lambda_i t} = 0$ implies each $\alpha_i = 0$. (Use the fact that if $\sum_i \alpha_i e^{\lambda_i t} = 0$ then, applying $\frac{d^k}{dt^k}$, $\sum_i \alpha_i \lambda_i^k e^{\lambda_i t} = 0$ for all $k$; or use induction.)

ii. If $x(t) = \sum_i x_i(0) e^{\lambda_i t} = \sum_i z_i(0) e^{\lambda_i t}$, then $x_i(0) = z_i(0)$ for each $i$.

iii. $x^*(t) = \sum_i (x_i(0))^* e^{\lambda_i^* t}$. But $x^*(t) = x(t)$. So
$$\sum_i (x_i(0))^* e^{\lambda_i^* t} = \sum_i x_i(0) e^{\lambda_i t},$$
i.e., $\sum_i (x_i(0))^* e^{\lambda_i^* t} - \sum_i x_i(0) e^{\lambda_i t} = 0$. From (i) above, this can happen only if the coefficient of each $e^{\lambda_i t}$ is zero. If $\lambda_i^*$ is not among the $\lambda_j$, this cannot happen. We conclude $\lambda_i^* = \lambda_j$ and $(x_i(0))^* - x_j(0) = 0$ for some $j$.

3. For an R, L, C circuit with positive values of R, L, C, show that all the eigenvalues have nonpositive real parts. When the circuit has R, L, C and static devices and the input is zero, the system starts with initial energy

$$\frac{1}{2}\, v_C^T(0)\, C\, v_C(0) + \frac{1}{2}\, i_L^T(0)\, L\, i_L(0).$$
The power absorbed by these devices at any instant is
$$= \frac{d}{dt}\left[ \frac{1}{2}\, v_C^T(t)\, C\, v_C(t) + \frac{1}{2}\, i_L^T(t)\, L\, i_L(t) \right] = v_C^T(t)\, i_C(t) + i_L^T(t)\, v_L(t).$$

This must (by Tellegen's theorem) be the negative of the power absorbed by the remaining (static) devices. If these latter are positive resistors, this power is nonnegative. Hence the stored energy of the inductors and capacitors cannot increase indefinitely as time increases. Therefore, for each $\lambda_k$, $e^{\lambda_k t}$ cannot increase indefinitely. Hence $\lambda_k = \lambda_{kr} + j\lambda_{kim}$ with $\lambda_{kr} \le 0$.
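A quick numerical illustration (numpy assumed; the example is a series R-L-C loop with state $(v_C, i_L)$, one simple instance of the claim): for positive $R$, $L$, $C$ the eigenvalues of the state matrix have nonpositive real parts:

```python
# Illustration (numpy assumed): series R-L-C loop with zero source, state
# x = (v_C, i_L):  C dv_C/dt = i_L,  L di_L/dt = -v_C - R i_L.
import numpy as np

def series_rlc_eigs(R, L, C):
    A = np.array([[0.0, 1.0 / C],
                  [-1.0 / L, -R / L]])
    return np.linalg.eigvals(A)

for R, L, C in [(1.0, 1.0, 1.0), (0.1, 2.0, 0.5), (10.0, 0.3, 2.0)]:
    eigs = series_rlc_eigs(R, L, C)
    print(eigs, "real parts <= 0:", np.all(eigs.real <= 1e-12))
```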

4. Nature of the eigenvalues (RL, RC, LC, RLC cases). Recall the method that we used for writing state equations for RLC circuits (see Figure 1.11). We assumed capacitors + voltage sources do not form loops and inductors + current sources do not form cutsets. (If they do, the above technique has to be generalized to one where three multiports are connected together.) Observe that in the state equations

x˙ = Ax + Bu

the matrix $A$ can be obtained by setting $u = 0$ and then writing the state equations. So, it is to be expected that the eigenvalues and eigenvectors of the network correspond to setting the voltage sources and current sources to zero. Let us begin by considering the situation where all capacitors and inductors are of value 1 in the appropriate units and all sources are zero. In this case we get

$$\frac{d}{dt}\begin{bmatrix} v_C \\ i_L \end{bmatrix} = \begin{bmatrix} i_C \\ v_L \end{bmatrix}.$$

Figure 1.11: Capacitors, inductors and the sources $V$, $J$ connected to a static multiport; the capacitor port carries $(v_1, i_1)$ and the inductor port $(v_2, i_2)$.

Observe that $i_C = i_1$, $i_L = i_2$, $v_C = v_1$, $v_L = v_2$. For the resistive multiport, we can write
$$\begin{bmatrix} i_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} \begin{bmatrix} v_1 \\ i_2 \end{bmatrix}.$$

Let us examine the situation of the hybrid matrix above. If $i_L$ were set to zero (inductors open circuited) we get
$$i_1 = H_{11} v_1,$$
i.e., $H_{11}$ is a conductance matrix for a purely resistive multiport and is symmetric (by Tellegen's theorem) and positive semidefinite, since the power absorbed by the multiport is always nonnegative, which means $v_1^T i_1 = v_1^T H_{11} v_1 \ge 0$ for all $v_1$. By a similar argument, $H_{22}$ is also symmetric positive semidefinite.

To understand the relationship between $H_{12}$ and $H_{21}$, let us use Tellegen's generalized reciprocity theorem, which says

$$(v_1')^T i_1'' + (v_2')^T i_2'' = (v_1'')^T i_1' + (v_2'')^T i_2'$$

for arbitrary 'primed' and 'double primed' excitation conditions of a resistive multiport. We rewrite this as

$$(i_1'')^T v_1' + (v_2')^T i_2'' = (i_1')^T v_1'' + (v_2'')^T i_2'.$$
Substituting the multiport relationship and simplifying, we get
$$(i_2'')^T (H_{12} + H_{21}^T)\, v_1' = (i_2')^T (H_{12} + H_{21}^T)\, v_1''.$$
For arbitrary values of $i_2'$, $v_1''$ (including zero),
$$(i_2'')^T (H_{12} + H_{21}^T)\, v_1' = 0.$$
Since this holds for arbitrary values of $i_2''$, $v_1'$,
$$H_{12} = -H_{21}^T.$$

Problems

Laplace Transform Problems

Prove

1. $\mathcal{L}(\delta(t)) = 1$

2. $\mathcal{L}(1(t)) = \dfrac{1}{s}$

3. $\mathcal{L}(t^n) = \dfrac{n!}{s^{n+1}}$, for $n > 0$

4. $\mathcal{L}(e^{-at}) = \dfrac{1}{s + a}$

5. $\mathcal{L}(t e^{-at}) = \dfrac{1}{(s + a)^2}$

6. $\mathcal{L}\!\left(\dfrac{t^{n-1} e^{-at}}{(n-1)!}\right) = \dfrac{1}{(s + a)^n}$

7. $\mathcal{L}\!\left(\dfrac{1}{b - a}\left(e^{-at} - e^{-bt}\right)\right) = \dfrac{1}{(s + a)(s + b)}$, $a \neq b$

8. $\mathcal{L}\!\left(\dfrac{-1}{b - a}\left(a e^{-at} - b e^{-bt}\right)\right) = \dfrac{s}{(s + a)(s + b)}$, $a \neq b$

9. $\mathcal{L}(\sin \omega t) = \dfrac{\omega}{s^2 + \omega^2}$

10. $\mathcal{L}(\cos \omega t) = \dfrac{s}{s^2 + \omega^2}$

11. $\mathcal{L}(e^{-at} \sin \omega t) = \dfrac{\omega}{(s + a)^2 + \omega^2}$

12. $\mathcal{L}(e^{-at} \cos \omega t) = \dfrac{s + a}{(s + a)^2 + \omega^2}$

13. Find the $\mathcal{L}$-transform of the function in Figure 1.12.
Figure 1.12: Piecewise waveform, with an impulse $\delta$ at $t = 4$ and a segment of the form $k - at^2$.

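Several entries of the table can be checked symbolically (sympy assumed; the selection of entries is arbitrary):

```python
# Symbolic spot-checks of a few table entries (sympy assumed).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b, w = sp.symbols('a b omega', positive=True)

def L(f):
    return sp.laplace_transform(f, t, s, noconds=True)

print(sp.simplify(L(sp.exp(-a*t)) - 1/(s + a)))                                  # 0
print(sp.simplify(L(t*sp.exp(-a*t)) - 1/(s + a)**2))                             # 0
print(sp.simplify(L(sp.sin(w*t)) - w/(s**2 + w**2)))                             # 0
print(sp.simplify(L(sp.exp(-a*t)*sp.cos(w*t)) - (s + a)/((s + a)**2 + w**2)))    # 0
print(sp.simplify(L((sp.exp(-a*t) - sp.exp(-b*t))/(b - a)) - 1/((s + a)*(s + b))))  # 0
```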

Laplace Transform Problems II

Prove

1. $\mathcal{L}\!\left[\dfrac{d^n}{dt^n} f(t)\right] = s^n F(s) - s^{n-1} f(0^-) - s^{n-2} f'(0^-) - \cdots - f^{(n-1)}(0^-)$ (where $f^{(k)}$ denotes the $k$th derivative).

2. $\mathcal{L}\!\left[\displaystyle\int_0^t f(\tau)\, d\tau\right] = \dfrac{F(s)}{s}$

3. $\mathcal{L}\!\left[\displaystyle\int_{-\infty}^t f(\tau)\, d\tau\right] = \dfrac{F(s)}{s} + \dfrac{f^{(-1)}(0^-)}{s}$

4. $\mathcal{L}[t f(t)] = -\dfrac{d}{ds} F(s)$

5. $\mathcal{L}[f(t - T)\, 1(t - T)] = e^{-sT} F(s)$

6. $\mathcal{L}[f(t/a)] = a F(as)$, $a > 0$

7. $\lim_{t \to 0^+} f(t) = \lim_{s \to \infty} s F(s)$, provided the limit exists.

8. $\lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s)$, provided $sF(s)$ is analytic on the $j\omega$ axis and in the right half of the $s$-plane.

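A few of these properties can likewise be spot-checked symbolically for a particular $f$ (sympy assumed; $f(t) = (1 + t)e^{-2t}$ is an arbitrary choice):

```python
# Spot-checks of properties 4, 7 and 8 for one particular f (sympy assumed).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = (1 + t) * sp.exp(-2*t)                        # arbitrary test function
F = sp.laplace_transform(f, t, s, noconds=True)

# Property 4: L[t f(t)] = -dF/ds
print(sp.simplify(sp.laplace_transform(t*f, t, s, noconds=True) + sp.diff(F, s)))  # 0

# Property 7 (initial value): lim_{t->0+} f(t) = lim_{s->oo} s F(s)
print(sp.limit(f, t, 0, '+'), sp.limit(s*F, s, sp.oo))               # 1, 1

# Property 8 (final value): lim_{t->oo} f(t) = lim_{s->0} s F(s)
print(sp.limit(f, t, sp.oo), sp.limit(s*F, s, 0))                    # 0, 0
```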