Lecture 1 : Introduction
Objectives
In this lecture you will learn the following
First we will look into the formal definitions of the terms 'signals' and 'systems', and then introduce some simple examples which may be better understood when seen from a signals and systems perspective.
We will also frame our main objectives in this course.
Introduction
The intent of this introduction is to give the reader an idea about Signals and Systems as a field of study and its applications. But we
must first, at least vaguely, define what signals and systems are.
Signals are functions of one or more variables .
Systems respond to an input signal by producing an output signal .
[Fig (a) and Fig (b): two example systems, each shown with its input signal and the output signal it produces]
As you can see, there is a similarity in the way the input signal is related to the output signal. These similarities will interest us in this
course as we may be able to make inferences common to both these systems from these similarities.
We will develop very general tools and techniques for analyzing systems, independent of the actual context of their use. Our approach in
this course will be to define certain properties of signals and systems (inspired, of course, by the properties of real-life examples we have),
and then link these properties to consequences. These "links" can then be used directly in connection with a large variety of systems:
electrical, mechanical, chemical, biological, knowing only how the input and output signals are related! Thus, our focus when dealing with
signals and systems will be on the relationship between the input and output signal and not really on the internals of the system.
Conclusion:
In this lecture you have learnt:
Signals are functions of one or more independent variables.
Systems are physical models which give out an output signal in response to an input signal.
Trying to identify real-life examples as models of signals and systems will help us understand the subject better.
Notice that more than one element in the domain may correspond to the same element in the co-domain .
A function is also sometimes referred to as a mapping. Thus a signal may also be defined as a mapping from one set to another.
For example a speech signal would be mathematically represented by acoustic pressure as a function of time. Some more examples of
signals are voltage, current or power as functions of time. A monochromatic picture can be described as a signal which is mathematically
represented by brightness as a function of two spatial variables.
As mentioned earlier, there may be more than one independent variable. For example, the independent variable for a photograph is 2-dimensional space (2 space variables). The variables may also be hybrid, say 2 space variables and 1 time variable (e.g. a video signal).
Note: In this course, we shall focus our attention on signals of only one variable. Also, for convenience, we shall generally refer to the
independent variable as time. So don't let the recurring reference to time confuse you. It is symbolic for any independent variable you
care to choose.
Discrete-time signals
Discrete variables are those in which there exists a neighbourhood around each value in which no other value is present.
Intuitively, it means a variable like the natural numbers on the real line - we can isolate each instance of the discrete variable from the
other instances.
Why should we bother about discrete variables?
Discrete variables come up intrinsically in several applications. Take for example, the cost of gold in the market every day. The
dependent variable (cost) is a function of discrete time (incremented once every day). Another example is the marks scored by the
students in class. Here the dependent variable (marks) is a function of the discrete variable roll number. While it is perfectly fine to talk
about marks of 02007005, it makes no sense to talk of marks of roll no 02007011.67 - this system is inherently discrete.
Another point that should be noted here is that some results about signals and systems are common to both: continuous as well as
discrete signals, but can be grasped more intuitively in one case as compared to the other. So, we shall pursue the study of both these
cases simultaneously in this course.
Need the discrete variable be uniform?
No. Though we imagine natural numbers or integers when we think of discrete signals, the points need not be equally spaced. For
example, if the markets remain closed on Sundays, we would not record a price for gold on that day - so the spacing between the
values on this axis changes.
In most common cases, however, the independent variable is uniform - and throughout this course, we shall assume a uniform spacing of
the variable unless otherwise stated explicitly. This assumption makes the analysis more intuitive and also yields several good theorems
for our use, which we shall see as we proceed.
Consider a set of tuples (a, b) such that a and b are both in the range 0 to 5 - how can we index them by integers?
Now let us come to something that is discrete all right, but not very intuitive to index - the rational numbers:
We represent the rational numbers along the fourth quadrant, as y/x. The repeated values (like 2/2, 3/3, 4/2, etc.) are to be neglected,
hence are shown in gray. We then go on indexing them diagonally, as shown. Now, we go ahead another step - how do we
index a full plane?
Note the method: we start in expanding circles from the origin. As soon as a circle cuts integer points, we pause and number the points
clockwise from the positive y axis. This method is by no means unique - but just one indexing is enough for us to call the system
discrete. Here we pause to note that although variables like the integer plane above can be indexed by integers, it is far more
convenient to use tuples of integers to index them. It can mathematically be proved that any finite tuple of integers (a1, a2, a3, ..., an)
can be indexed by a single integer. We leave out the proof here, but the interested reader can find it in books on number theory.
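The diagonal indexing described above is easy to sketch in code. Here is a minimal Python generator (the names and the exact diagonal order are our own choices) that assigns a single integer index to every pair (a, b) of non-negative integers by walking the anti-diagonals:

```python
from itertools import islice

def diagonal_pairs():
    """Enumerate all pairs (a, b) of non-negative integers along
    anti-diagonals, so every pair appears at some finite position."""
    d = 0
    while True:
        for a in range(d + 1):
            yield (a, d - a)   # pairs on the d-th diagonal satisfy a + b == d
        d += 1

# The integer index of a pair is simply its position in the enumeration:
print(list(islice(diagonal_pairs(), 6)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Since every pair eventually receives an index, the set of pairs is discrete in the sense defined above.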
Conclusion:
In this lecture you have learnt:
A signal may also be defined as a mapping from one set (domain) to another (co-domain).
Continuous-time signal means the mapping is defined over a continuum of values of the independent variable.
A discrete variable is one which can ultimately be indexed by integers (may also be in terms of tuples) .
We will enclose discrete variables in brackets [.] as opposed to parenthesis (.) for continuous variables.
In signals and systems terminology, we say; corresponding to every possible input signal, a system produces an output
signal.
In that sense, realize that a system, as a mapping, is one step hierarchically higher than a signal. While the correspondence for a signal is
from one element of one set to a unique element of another, the correspondence for a system is from a whole mapping in one set of
mappings to a unique mapping in another set of mappings!
Examples of systems
Examples of systems are all around us. The speakers that go with your computer can be looked at as systems whose input is voltage
pulses from the CPU and output is music (an audio signal). A spring may be looked at as a system with the input, say, the longitudinal force
on it as a function of time, and the output signal being its elongation as a function of time. The independent variable for the input and output
signal of a system need not even be the same.
Example of CRO
An input voltage signal f(t) is provided to the CRO by using a function generator. The CRO (the system) transforms this input function
into an image that is displayed on the CRO screen. The luminosity of every point on this display (i.e. the value of the signal) depends on
the x and y coordinates. So the output S(x, y) has space as its independent variable, whereas the input independent variable is time.
In fact, it is even possible for the input signal to be continuous-time and the output signal to be discrete-time or vice-versa. For
example, our speech is a continuous-time signal, while a digital recording of it is a discrete-time signal! The system that converts any
one to the other is an example of this class of systems.
As these examples may have made evident, we look at many physical objects/devices as systems, by identifying some variation
associated with them as the input signal and some other variation associated with them as the output signal (the
relationship between these, which essentially defines the system, depends on the laws or rules that govern the system). Thus a
capacitance with voltage (as a function of time) considered as the input signal and current considered as the output signal is not the
same system as a capacitance with, say, charge considered as the input signal and voltage considered as the output signal. Why?
The mappings that define the system are different in these two cases.
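As a rough numerical sketch of this point (our own construction; a unit sample spacing and a finite difference stand in for the derivative d/dt), the two capacitor systems are genuinely different mappings even though the device is the same:

```python
C = 2.0    # capacitance (arbitrary units)
dt = 1.0   # sample spacing (chosen for a clean illustration)

def v_to_i(v):
    """System 1: input = voltage samples, output = current, i = C dv/dt
    (approximated here by a forward difference)."""
    return [C * (v[k + 1] - v[k]) / dt for k in range(len(v) - 1)]

def q_to_v(q):
    """System 2: input = charge samples, output = voltage, v = q / C."""
    return [qk / C for qk in q]

samples = [0.0, 1.0, 3.0]
print(v_to_i(samples))  # a differentiating (memory-bearing) mapping
print(q_to_v(samples))  # a memoryless scaling mapping
```

Feeding the same list of numbers to each function gives different outputs, which is exactly the sense in which the two systems differ.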
A signal maps an element in one set to an element in another. A system, on the other hand maps a whole signal in one set to a signal in
another. That is why a system is called a mapping over mappings. Therefore, the value of the output signal at any instant of time
(remember "time" is merely symbolic) in general depends on the whole input signal. Thus, even if the independent variable for the input
and output signal is the same (say time t), do not assume that the value of the output signal at, say, t = 5 depends only on the value
of the input signal at t = 5.
For example, consider a system such as the running integral y(t) = ∫_{-∞}^{t} x(τ) dτ.
The output at, say, t = 5 depends on the values of the input signal for all t <= 5.
Henceforth, we shall call systems with both input and output signal being continuous-time continuous-time systems, and those
with both input and output signal being discrete-time discrete-time systems. Those that do not fall into either of these classes (i.e.
input discrete-time and output continuous-time, or vice-versa) we shall call hybrid systems. Now that the necessary introductions are
done, we can get on to system properties.
Recap
In the last lecture you have learnt the following:
A system is a mapping across signals, in other words mapping across mappings.
In signals and systems terminology, we say that corresponding to every possible input signal, a system produces an output
signal.
For an explicit description, it is possible to express the output at a point purely in terms of the input signal.
In case the system has an implicit description, we may not be able to calculate the output directly when the input is provided; it
may need some mathematical induction. Such a system also has memory: the output at any instant depends on all past and present inputs.
Linearity:
Now we come to one of the most important and revealing properties systems may have - Linearity. Basically, the principle of linearity is
equivalent to the principle of superposition, i.e. a system can be said to be linear if, for any two input signals, their linear combination
yields as output the same linear combination of the corresponding output signals.
Definition:
(It is not necessary for the input and output signals to have the same independent variable for linearity to make sense. The definition for
systems with input and/or output signal being discrete-time is similar.)
Example of linearity
A capacitor, an inductor, a resistor or any combination of these are all linear systems, if we consider the voltage applied across them as
an input signal, and the current through them as an output signal. This is because these simple passive circuit components follow the
principle of superposition within their ranges of operation.
i.e. the output corresponding to the sum of any two inputs is the sum of the two outputs.
Homogeneity (Scaling)
A system is said to be homogeneous if, for any input signal x(t) and any scalar a, the input a·x(t) produces the output a·y(t), where y(t) is the output corresponding to x(t),
i.e. scaling any input signal scales the output signal by the same factor.
To say a system is linear is equivalent to saying the system obeys both additivity and homogeneity.
a) We shall first prove homogeneity and additivity imply linearity.
This system is not linear.
See for yourself that the system is neither additive nor homogeneous.
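Additivity and homogeneity can also be probed numerically on candidate systems. Here is a minimal Python sketch (the helper name and the two example systems are our own; a failed check proves non-linearity, while a passed check on sample inputs only supports, not proves, linearity):

```python
def satisfies_superposition(system, x1, x2, a=3.0, b=-2.0, tol=1e-9):
    """Check superposition on one pair of inputs: does a*x1 + b*x2
    map to a*system(x1) + b*system(x2)?"""
    combo = [a * u + b * v for u, v in zip(x1, x2)]
    lhs = system(combo)
    rhs = [a * u + b * v for u, v in zip(system(x1), system(x2))]
    return all(abs(p - q) < tol for p, q in zip(lhs, rhs))

double = lambda x: [2 * v for v in x]   # y[n] = 2 x[n]   (linear)
square = lambda x: [v * v for v in x]   # y[n] = x[n]^2   (not linear)

x1, x2 = [1.0, 2.0, 3.0], [0.0, -1.0, 4.0]
print(satisfies_superposition(double, x1, x2))  # True
print(satisfies_superposition(square, x1, x2))  # False
```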
Show for yourself that systems with the following descriptions are linear:
Shift Invariance
This is another important property applicable to systems with the same independent variable for the input and output signal. We shall
first define the property for continuous time systems and the definition for discrete time systems will follow naturally.
Definition: Say, for a system, the input signal x(t) gives rise to an output signal y(t). If the input signal x(t - t0) gives rise to the output
y(t - t0), for every t0 and every possible input signal, we say the system is shift invariant,
i.e. for every permissible x(t) and every t0.
In other words, for a shift invariant system, shifting the input signal shifts the output signal by the same offset.
Note this is not to be expected from every system. x(t) and x(t - t0) are different (related by a shift, but different) input signals, and a
system, which simply maps one set of signals to another, need not at all map x(t) and x(t - t0) to output signals that are also related by a shift of t0.
A system that does not satisfy this property is said to be shift variant.
Examples of Shift Invariance:
Assume y[n] and y(t) are respectively outputs corresponding to input signals x[n] and x(t)
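Whether a given description is shift invariant can be checked numerically. This sketch (the systems, signal, and shift amount are our own examples) compares the response to a shifted input against the shifted response to the original input:

```python
def diff_system(x):
    """y[n] = x[n] - x[n-1]: shift invariant (no explicit n dependence)."""
    return lambda n: x(n) - x(n - 1)

def ramp_system(x):
    """y[n] = n * x[n]: shift variant (explicit dependence on n)."""
    return lambda n: n * x(n)

x = lambda n: 1.0 if 0 <= n <= 2 else 0.0   # a sample input signal
n0 = 4                                       # shift amount
x_shifted = lambda n: x(n - n0)

for name, system in [("diff", diff_system), ("ramp", ramp_system)]:
    y = system(x)
    y_from_shifted = system(x_shifted)
    invariant = all(y_from_shifted(n) == y(n - n0) for n in range(-5, 15))
    print(name, "passes the shift test:", invariant)  # diff: True, ramp: False
```

The ramp system fails because its output depends on the value of n itself, not only on the input signal around n.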
Stability
Let us learn about one more important system property, known as stability. Most of us are familiar with the word stability, which
intuitively means resistance to change or displacement. Broadly speaking, a stable system is one in which small inputs lead to
predictable responses that do not diverge, i.e. are bounded. To get a qualitative idea, let us consider the following physical example.
Example
Consider an ideal mechanical spring (elongation proportional to tension). If we consider tension in the spring as a function of time as the
input signal and elongation as a function of time to be the output signal, it would appear intuitively that the system is stable. A small
tension leads only to a finite elongation.
There are various ideas/notions about stability not all of which are equivalent. We shall now introduce the notion of BIBO Stability, i.e.
BOUNDED INPUT-BOUNDED OUTPUT STABILITY.
Statement:
CONCLUSION
BIBO Stable system : In a BIBO stable system, every bounded input is assured to give a bounded output. An unbounded input can
give us either a bounded or an unbounded output, i.e. nothing can be said for sure.
BIBO Unstable system: In a BIBO unstable system, there exists at least one bounded input for which output is unbounded. Again,
nothing can be said about the system's response to an unbounded input.
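As a numerical illustration (our own example), the accumulator y[n] = Σ_{k<=n} x[k] is BIBO unstable: the bounded input u[n] drives the output without bound as n grows.

```python
def accumulator(x):
    """y[n] = running sum of x[0..n]: a BIBO-unstable system."""
    y, total = [], 0.0
    for v in x:
        total += v
        y.append(total)
    return y

u = [1.0] * 50                   # bounded input: |u[n]| <= 1 for all n
y = accumulator(u)
print(max(abs(v) for v in y))    # grows with the length of the input: 50.0
```

No matter what bound we propose for the output, taking enough samples of this bounded input exceeds it, which is exactly the failure of the BIBO condition.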
Causality
Causality refers to cause and effect relationship (the effect follows the cause). In a causal system, the value of the output signal at any
instant depends only on "past" and "present" values of the input signal (i.e. only on values of the input signal at "instants" less than or
equal to that "instant"). Such a system is often referred to as being non-anticipative, as the system output does not anticipate future
values of the input (remember again the reference to time is merely symbolic). As you might have realized, causality as a property is
relevant only for systems whose input and output signals have the same independent variable. Further, this independent variable
must be ordered (it makes no sense to talk of "past" and "future" when the independent variable is not ordered).
What this means mathematically is that if two inputs to a causal (continuous-time) system are identical up to some time t0, the
corresponding outputs must also be equal up to this same time (we'll define the property for continuous-time systems; the definition for
discrete-time systems will then be obvious).
Definition
Let x1(t) and x2(t) be two input signals to a system and y1(t) and y2(t) be their respective outputs.
The system is said to be causal if and only if:
x1(t) = x2(t) for all t <= t0 implies y1(t) = y2(t) for all t <= t0, for every t0.
This of course is only another way of stating what we said before: for any t0, y(t0) depends only on the values of x(t) for t <= t0.
As an example of the behavior of causal systems, consider the figure below:
The two input signals in the figure above are identical up to the point t = t0, and the system being a causal system, their corresponding
outputs are also identical up to the point t = t0.
Examples of Causal systems
Assume y[n] and y(t) are respectively the outputs corresponding to input signals x[n] and x(t)
1. The system with description y[n] = x[n-1] + x[n] is clearly causal, as the output "at" n depends only on values of the input "at instants"
less than or equal to n (in this case, n and n-1).
2. Similarly, the continuous-time system with description
Theorem:
Statement: If a causal system is either additive or homogeneous, then y(t) cannot be nonzero before x(t) is nonzero.
Proof:
Say x(t) = 0 for all t less than or equal to t0.
We have to show that the system response y(t) = 0 for all t less than or equal to t0.
Since the system is either additive or homogeneous, the response to the zero input signal is the zero output signal (additivity gives
T(0) = T(0 + 0) = T(0) + T(0), so T(0) = 0; homogeneity gives T(0·x) = 0·T(x) = 0). The zero input signal
and x(t) are identical for all t less than or equal to t0.
Hence, from causality, their output signals are identical for all t less than or equal to t0.
We conclude the discussion on system properties by noting that this is not an end, but merely a beginning! Through much of our further
discussions, we will be looking at an important class of systems - Linear Shift-Invariant (LSI) Systems.
Conclusion:
In this lecture you have learnt:
A system is said to be memoryless, if its output for each value of the independent variable is dependent only on the value of the
input signal at that value of the independent variable.
The principle of linearity is equivalent to the principle of superposition, i.e. a system can be said to be linear if, for any two input
signals, their linear combination yields as output the same linear combination of the corresponding output signals.
To say a system is linear is equivalent to saying that the system obeys both additivity and homogeneity.
Say, for a system, the input signal x(t) gives rise to an output signal y(t); the system is said to be shift invariant if the input signal
x(t - t0) gives rise to the output y(t - t0), for every t0 and every possible input signal.
A system in which a bounded input leads to a bounded output is said to be BIBO stable.
In a causal system, the value of the output signal at any instant depends only on the "past" and "present" values of the input
signal and/or "past" values of the output signal.
If a system is additive or homogeneous, then x(t)=0 implies that y(t)=0.
If a causal system is either additive or homogeneous, then y(t) cannot be nonzero before x(t) is nonzero.
The above expression corresponds to the representation of any arbitrary sequence as a linear combination of shifted unit impulses
scaled by x[n]:
x[n] = Σ_{k=-∞}^{∞} x[k] δ[n-k]
Consider for example the unit step function. As shown earlier, it can be represented as u[n] = Σ_{k=0}^{∞} δ[n-k].
Now, if we knew the response of a system to the unit impulse function, we could obtain the response to any arbitrary input. To see why this
is so, we invoke the properties of additivity and homogeneity (together, superposition) and time invariance.
The left hand side can be identified as any arbitrary input, while the right hand side can be identified as the total output to the signal.
The total response of the system is referred to as the CONVOLUTION SUM or superposition sum of the sequences x[n] and h[n]. The
result is more concisely stated as y[n] = x[n] * h[n], where
y[n] = Σ_{k=-∞}^{∞} x[k] h[n-k]
Therefore, as we said earlier a LTI system is completely characterized by its response to a single signal i.e. response to the Unit Impulse
signal.
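The convolution sum can be written out directly in code. This is a minimal sketch (our own function name; for simplicity both finite-length sequences are assumed to start at n = 0):

```python
def convolve(x, h):
    """Convolution sum of two finite sequences, both starting at n = 0:
    y[n] = sum over k of x[k] * h[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):      # h[n-k] is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

# Response of an LTI system with impulse response h to an input x:
print(convolve([1, 2], [1, 1, 1]))  # [1.0, 3.0, 3.0, 2.0]
```

This is exactly the statement above: the output is a sum of shifted impulse responses h[n-k], each weighted by the input sample x[k].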
Example Related to Discrete Time LTI Systems
Now we plot x[k] and h[n-k] as functions of k and not n because of the summation over k. Functions x[k] and h[k] are the same as
x[n] and h[n] but plotted as functions of k. Then, the convolution sum is realized as follows
1. Invert h[k] about k=0 to obtain h[-k].
2. The function h[n-k] is given by h[-k] shifted to the right by n (if n is positive) and to the left (if n is negative). It may appear
contradictory but think a while to verify this (note the sign of the independent variable).
In the figure below n=1
3. Multiply x[k] and h[n-k] at the same coordinates on the k axis and sum the products over k. The value obtained is the response at n, i.e. the value of y[n] at the particular
n chosen in step 2. Now we demonstrate the entire procedure taking n = 0, 1, thereby obtaining the response at n = 0, 1. The input
signal x[n] for this example is taken as:
Remember the horizontal axis has k as the independent variable. Then, taking the product x[k] h[-k] for the same k and summing, we
get the value of the response at n = 0.
Case 1: For n = 0
Let h[-k] = g[k]
y[0] = ... + x[-1]g[-1] + x[0]g[0] + ... = (-2)(1) + (1)(2) = 0
Case 2: For n=1
Let h[1-k] = g[k]
y[1] = ... + x[0]g[0] + x[1]g[1] + ... = (1)(1) + (2)(2) = 5
The values are the same as that obtained previously.
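Reading the sequences off the worked numbers above (so x[-1] = -2, x[0] = 1, x[1] = 2 and h[0] = 2, h[1] = 1 - an inference from the arithmetic, since the original plots did not survive), the same values fall out of a direct evaluation of the convolution sum:

```python
x = {-1: -2, 0: 1, 1: 2}   # input samples inferred from the products above
h = {0: 2, 1: 1}           # impulse response samples inferred likewise

def y(n):
    """y[n] = sum over k of x[k] * h[n-k]; missing samples are zero."""
    return sum(xv * h.get(n - k, 0) for k, xv in x.items())

print(y(0))  # x[-1]h[1] + x[0]h[0] = (-2)(1) + (1)(2) = 0
print(y(1))  # x[0]h[1] + x[1]h[0] = (1)(1) + (2)(2) = 5
```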
The total response referred to as the Convolution sum need not always be found graphically. The formula can directly be applied if the
input and the impulse response are some mathematical functions. We show this by an example next.
Example
We now give an alternative method for calculating the convolution of the given signal x[n] and the response to the unit impulse function.
Let us see how convolution output is the sum of weighted and shifted instances of the impulse response.
Let the given signal x[n] be
Now we break the signal into its components, i.e. express it as a sum of unit impulses, scaled and delayed or advanced appropriately.
Simultaneously we show the output as the sum of responses to unit impulses, scaled by the same multiplying factors and appropriately
delayed or advanced.
Summing the left and the right hand sides of the above figures, we get the input x[n] and the total response on the left and the right
sides respectively. Thus we see the graphical analog of the above formula.
Conclusion:
In this lecture you have learnt:
The two basic properties of LTI systems are linearity and shift-invariance. It is completely characterised by its impulse
response.
Any discrete time signal x[n] can be represented as a linear combination of shifted Unit Impulses scaled by x[n].
The unit step function can be represented as sum of shifted unit impulses.
The total response of the system is referred to as the CONVOLUTION SUM or superposition sum of the sequences x[n] and h[n].
The result is more concisely stated as y[n] = x[n] * h[n].
The convolution sum is realized as follows
1. Invert h[k] about k=0 to obtain h[-k].
2. The function h[n-k] is given by h[-k] shifted to the right by n (if n is positive) and to the left (if n is negative) (note the sign of
the independent variable).
3. Multiply x[k] and h[n-k] at the same coordinates on the k axis and sum the products over k. The value obtained is the response at
n, i.e. the value of y[n] at the particular n chosen in step 2.
Congratulations, you have finished Lecture 5.
Similarly, looking directly at the unit step function, we observe that it can be constructed as a sum of shifted unit impulse functions.
The unit step function can also be expressed as a running sum of the unit impulse function.
We see that the running sum is 0 for n < 0 and equal to 1 for n >= 0, thus defining the unit step function u[n].
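In symbols, the two constructions just described are the standard identities:

```latex
u[n] = \sum_{k=0}^{\infty} \delta[n-k] \qquad \text{(a sum of shifted unit impulses)}

u[n] = \sum_{m=-\infty}^{n} \delta[m] \qquad \text{(a running sum of the unit impulse)}
```

The second form makes the statement immediate: for n < 0 the sum has not yet reached the single impulse at m = 0, so it is 0; for n >= 0 exactly that one impulse contributes, so the sum is 1.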
Sifting property
Consider the product x[n] δ[n]. The delta function is nonzero only at the origin, so it follows that this signal is the same as x[0] δ[n].
More generally,
x[n] δ[n-k] = x[k] δ[n-k]
It is important to understand the above expression. It means the product of a given signal x[n] with the shifted unit impulse function is
equal to the time-shifted unit impulse function multiplied by x[k]. Thus the product signal is 0 at times not equal to k, and at time k its
amplitude is x[k]. So we see that the unit impulse sequence can be used to obtain the value of the signal at any time k. This is called
the Sampling Property of the unit impulse function. This property will be used in the discussion of LTI systems.
Likewise, the product x[n] u[n], i.e. the product of the signal x[n] with u[n], truncates the signal for n < 0, since u[n] = 0 for n < 0.
Similarly, the product x[n] u[n-1] will truncate the signal for n < 1.
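These identities are easy to check numerically. A small sketch (the helper names and the test signal are our own):

```python
def delta(n, k=0):
    """Shifted unit impulse delta[n-k]."""
    return 1 if n == k else 0

def u(n):
    """Unit step u[n]."""
    return 1 if n >= 0 else 0

x = lambda n: n * n + 1   # an arbitrary test signal

ns = range(-3, 4)
# Sifting: x[n]*delta[n-k] equals x[k]*delta[n-k] at every n.
k = 2
assert all(x(n) * delta(n, k) == x(k) * delta(n, k) for n in ns)
# Truncation: x[n]*u[n] is zero for n < 0 and equals x[n] otherwise.
print([x(n) * u(n) for n in ns])  # [0, 0, 0, 1, 2, 5, 10]
```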
Now we move to the Continuous Time domain. We now introduce the Continuous Time Unit Impulse Function and Unit Step
Function.
Continuous time unit step and unit impulse functions
The Continuous Time Unit Step Function: the definition is analogous to its discrete-time counterpart, i.e.
u(t) = 0, t < 0
     = 1, t > 0
In the strict mathematical sense the impulse function is a rather delicate concept. The Impulse function is not an ordinary function. An
ordinary function is defined at all values of t. The impulse function is 0 everywhere except at t = 0 where it is undefined. This difficulty is
resolved by defining the function as a GENERALIZED FUNCTION. A generalized function is one which is defined by its effect on
other functions instead of its value at every instant of time.
Analogy from discrete domain
We will see that the impulse function is defined by its sampling property. We shall develop the theory by drawing an analogy from the
discrete-time domain. Consider the equation
u[n] = Σ_{m=-∞}^{n} δ[m]
The discrete-time unit step function is a running sum of the delta function. The continuous-time unit impulse and unit step function are
then related by
u(t) = ∫_{-∞}^{t} δ(τ) dτ
The continuous-time unit step function is a running integral of the delta function. It follows that the continuous-time unit impulse can be
thought of as the derivative of the continuous-time unit step function.
Now here arises the difficulty: the unit step function is not differentiable at the origin. We take a different approach. Consider a signal
whose value increases from 0 to 1 in a short interval of time, say Δ. The function u(t) can be seen as the limit of this signal as Δ tends
to 0. Given this definition of the unit step function, we look into its derivative. The unit impulse function can be regarded as a
rectangular pulse with a width of Δ and height (1/Δ). As Δ tends to 0, the pulse becomes narrower and higher, eventually a pulse of
infinitesimal width and infinite height. All throughout, the area under the pulse is maintained at unity no matter the value of Δ. In effect,
the delta function has no duration but unit area. Graphically the function is denoted as a spear-like symbol at t = 0, and the "1" next to
the arrow indicates the area of the impulse. After this discussion we have still not cleared the ambiguity regarding the value or the
shape of the unit impulse function at t = 0. We were only able to derive that the effective duration of the pulse approaches zero while
maintaining its area at unity. As we said earlier, an impulse function is a generalized function and is defined by its effect on other
functions and not by its value at every instant of time. Consider the product of an impulse function and a more well-behaved continuous
function x(t). We will take the impulse function as the limiting case of a rectangular pulse of width Δ and height (1/Δ), as earlier. As
evident from the figure, the product function is 0 everywhere except in the small interval. In this interval the value of x(t) can be
assumed to be constant and equal to x(0). Thus the product function is equal to the pulse scaled by a value equal to x(0). Now as Δ
tends to 0, the product tends to x(0) times the impulse function.
∫_{-∞}^{∞} x(t) δ(t) dt = x(0)
i.e. the area under the product of the signal and the unit impulse function is equal to the value of the signal at the point of impulse. This
is called the Sampling Property of the delta function and defines the impulse function in the generalized function approach. As in discrete
time, x(t) δ(t) = x(0) δ(t), or more generally, x(t) δ(t - T) = x(T) δ(t - T).
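The limiting argument can be checked numerically: approximate the impulse by a rectangular pulse of width Δ and height 1/Δ and watch the integral of x(t) against it approach x(0). This sketch (our own discretization; the pulse sits on [0, Δ)) is an illustration, not a proof:

```python
def sample_with_pulse(x, width, dt=1e-5):
    """Approximate the integral of x(t) * delta_w(t), where delta_w is a
    rectangular pulse of height 1/width on the interval [0, width)."""
    n = int(width / dt)
    total = 0.0
    for i in range(n):
        t = i * dt
        total += x(t) * (1.0 / width) * dt   # Riemann sum of x(t)*pulse(t)
    return total

x = lambda t: 3.0 + 2.0 * t   # a smooth test signal with x(0) = 3
for w in (0.1, 0.01, 0.001):
    print(w, sample_with_pulse(x, w))   # approaches x(0) = 3.0 as w shrinks
```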
Conclusion:
In this lecture you have learnt:
The unit impulse function is a generalized function: it is defined by its effect on other functions (its sampling property) rather
than by its value at every instant of time.
Sifting Property: the product of a given signal x[n] with the shifted Unit Impulse Function is equal to the time-shifted Unit
Impulse Function multiplied by x[k].
Note: We are invoking shift invariance of the system here - we have shifted the signal by 4 units.
We can thus use the shifted unit impulse to pick up a certain point from a discrete signal: suppose our signal x[n] is multiplied by
δ[n-k], which is zero at all points except n = k. At this point, the value of x1[k] equals the value x[k].
Now, we can express any discrete signal as a sum of several such terms:
This may seem redundant now, but later we shall find this notation useful when we take a look at convolutions etc. Here, we also want to
introduce a convention for denoting discrete signals. For example, the signal x[n] and its representation are shown below :
The number below the arrow shows the starting point of the time sequence, and the numbers above are the values of the dependent
variable at successive instants from then onwards. We may not use this too much on the web site, but this turns out to be a convenient
notation on paper.
The unit impulse response:
The response of a system to the unit impulse is of importance, for as we shall show below, it characterizes the LSI system completely.
Let us consider the following system and calculate its unit impulse response: y[n] = x[n] - 2x[n-1] + 3x[n-2]. Now, we apply a unit
impulse x[n] = d[n] to the system and calculate the response:

n      : ..., -1,  0,  1,  2,  3, ...
x[n]   : ...,  0,  1,  0,  0,  0, ...
x[n-1] : ...,  0,  0,  1,  0,  0, ...
x[n-2] : ...,  0,  0,  0,  1,  0, ...
y[n]   : ...,  0,  1, -2,  3,  0, ...
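The tabulation of the impulse response of y[n] = x[n] - 2x[n-1] + 3x[n-2] can be reproduced mechanically (a short sketch; the function names are ours):

```python
def system(x):
    """y[n] = x[n] - 2 x[n-1] + 3 x[n-2], with x given as a function of n."""
    return lambda n: x(n) - 2 * x(n - 1) + 3 * x(n - 2)

delta = lambda n: 1 if n == 0 else 0   # unit impulse d[n]
h = system(delta)                      # unit impulse response
print([h(n) for n in range(-1, 4)])    # [0, 1, -2, 3, 0]
```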
Arbitrary input signals:
Now let us consider some other input, say x[0] = 1, x[1] = 1 and x[n] = 0 for n other than 0 and 1. What will be the response of the
above LSI system to this input? We calculate the response in a table as below:

n                    : ..., -1,  0,  1,  2,  3, ...
x[n]                 : ...,  0,  1,  1,  0,  0, ...
y1[n] from d[n]      : ...,  0,  1, -2,  3,  0, ...
y2[n] from d[n-1]    : ...,  0,  0,  1, -2,  3, ...
y[n] = y1[n] + y2[n] : ...,  0,  1, -1,  1,  3, ...
Ah! What we have actually done is applied the additivity (linearity), homogeneity (linearity) and shift invariance properties of the system
to get the output. First, we decomposed the input signal as a sum of known signals: the first being the unit impulse d[n], the second
derived from it by shifting it by 1. Thus, our input signal is as shown in the figure below. Then, we invoke the LSI properties of the
system to get the responses to the individual signals: the first calculation is shown above, while the calculation of the response for
d[n-1] is shown below.
Finally, we add the two responses to get the response y[n] of the system to the input x[n]. The image below shows the final response
with an alternative method of calculating it:
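The superposition bookkeeping above can be verified directly: compute the response to x[0] = x[1] = 1 both from the system description and as the sum of two shifted impulse responses (a sketch; names are ours):

```python
def system(x):
    """y[n] = x[n] - 2 x[n-1] + 3 x[n-2]."""
    return lambda n: x(n) - 2 * x(n - 1) + 3 * x(n - 2)

delta = lambda n: 1 if n == 0 else 0
x = lambda n: 1 if n in (0, 1) else 0    # x[0] = x[1] = 1, zero elsewhere

h = system(delta)                         # unit impulse response
direct = [system(x)(n) for n in range(0, 4)]
by_superposition = [h(n) + h(n - 1) for n in range(0, 4)]

print(direct)            # [1, -1, 1, 3]
print(by_superposition)  # the same values, by LSI reasoning
```

Both routes give the same sequence, which is the whole point of the LSI decomposition.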
This brings us up to the concept of convolutions, covered in detail in a later section.
Conclusion:
In this lecture you have learnt:
Discrete time LSI systems and their importance
The discrete time unit impulse as a building block
Expressing signals as a linear combination of shifted unit impulses
What is the unit impulse response?
Expressing arbitrary responses as a linear combination of shifted unit impulse responses
Continuous-time systems: Continuous-Continuous systems
Discrete-time systems: Discrete-Discrete systems
1. Logic circuits: discrete logic inputs are processed to give discrete logic outputs.
Hybrid systems: Continuous-Discrete systems
Hybrid systems: Discrete-Continuous systems
Properties of systems
In early parts of this course, we shall concern ourselves with mainly the first two classes, viz. Continuous-time and Discrete-time
systems, but later we shall also deal with Hybrid systems as well. So, we find it worthwhile here to take a look at what properties the
systems of various classes can have:
Property                           Continuous-time   Discrete-time   Hybrid
Memory                             Yes               Yes             No
Causality                          Yes               Yes             No
Shift invariance (Time invariance) Yes               Yes             No
Stability                          Yes               Yes             Yes
Linearity                          Yes               Yes             Yes
Note that this is a table of properties which a system in each class can have; they are not necessary properties of any particular
system. Hence, we can find a Continuous-time system that is stable (though there are also Continuous-time systems which are unstable), but
it is impossible to apply the concept of memory to a discrete-continuous system without modifying the concept itself.
Conclusion:
In this lecture you have learnt:
Memory, causality and shift invariance are defined only if the input and output signals are of the same type, i.e. both continuous or
both discrete.
Stability and linearity do not require the input and output signals to be of the same type.
Congratulations, you have finished Lecture 8.
One can arrive at an expression for an arbitrary input, say x(t), by scaling the height of the rectangular impulse by a factor such that its
value at t coincides with the value of x(t) at the mid-point of the width of the rectangular impulse. The entire function is hence divided
into such rectangular impulses, which give a close approximation to the actual function depending upon how small the interval is taken to
be. For example, let x(t) be a signal. It can be approximated as:
The given input x(t) is approximated with such narrow rectangular pulses, each scaled to the appropriate value of x(t) at the
corresponding t (which lies at the midpoint of the base of width d). As the pulse-width (d) approaches zero, the rectangular pulse
becomes finer in width, and the function x(t) can be represented in terms of impulses by the following expression,
since u(t) = 0 for t < 0 and u(t) = 1 for t > 0. In complete analogy with the development of the sampling property of the discrete unit
impulse we have
x(t) = ∫ x(τ) δ(t − τ) dτ  (integral over all τ)
This is known as the Sifting Property of the continuous time impulse. Note that the unit impulse puts unit area into zero width.
by shift invariance,
by homogeneity,
by additivity. (Note: additivity can be applied to infinitely many terms only if the sum/integral converges.)
This is known as the continuous time convolution of x(t) and h(t). This gives the system response y(t) to the input x(t) in terms of unit
impulse response h(t). The convolution of two signals h(t) and x(t) will be represented symbolically as
Hence, by merely knowing the impulse response, one can predict the response to any input x(t) by using the given formula for
convolution.
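The convolution integral can be sanity-checked with a Riemann-sum approximation; the exponential signals below are illustrative choices, not taken from the lecture:

```python
import numpy as np

# Numerical continuous-time convolution y(t) = (x*h)(t), approximated by a
# Riemann sum with step dt.
dt = 0.001
t = np.arange(0, 5, dt)
x = np.exp(-t)                  # example input x(t) = e^{-t} u(t)
h = np.exp(-2 * t)              # example impulse response h(t) = e^{-2t} u(t)
y = np.convolve(x, h)[:len(t)] * dt

# Closed form for this pair: y(t) = e^{-t} - e^{-2t}
y_exact = np.exp(-t) - np.exp(-2 * t)
assert np.max(np.abs(y - y_exact)) < 1e-2
```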
RC System
Consider an RC system consisting of a resistor and a capacitor. We have to find out what the response of this system is to the unit
impulse.
Now, as the pulse width tends to zero, the response of the system approaches the impulse response. Taking the limit, we get h(t).
Hence, if we are given the unit step response, we have been able to calculate the continuous-time impulse response of the system by
differentiating the step response. Next we shall see how we can get the unit step response from the impulse response of the same system.
The unit impulse, when fed into the RC system, gives the corresponding impulse response h(t). But in this case x(t) = u(t), and so the
output signal y(t), the unit step response, will be given by the running integral of the impulse response h(t).
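Both directions can be verified numerically for an RC lowpass, taking RC = 1 as an assumed value (so h(t) = e^{-t} u(t) and the step response is 1 − e^{-t}):

```python
import numpy as np

# RC lowpass with RC = 1 (assumed for illustration):
# impulse response h(t) = e^{-t} u(t), step response s(t) = 1 - e^{-t}.
dt = 1e-4
t = np.arange(0, 5, dt)
h = np.exp(-t)
s = 1 - np.exp(-t)

# Differentiating the step response recovers the impulse response...
assert np.max(np.abs(np.gradient(s, dt) - h)) < 1e-3
# ...and the running integral of the impulse response recovers the step response.
assert np.max(np.abs(np.cumsum(h) * dt - s)) < 1e-3
```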
Convolution Operation
Flash File
We now interpret the convolution (x*h)(t) as the common (shaded) area enclosed under the curves x(v) and h(t-v) as v varies over
the entire real axis.
x(v) is the given input function, with the independent variable now called v. h(t-v) is the impulse response obtained by inverting h(v)
and then shifting it by t units on the v-axis.
As t increases clearly h(t-v) can be considered to be a train moving towards the right, and at each point on the v-axis, the area under
the product x(v) and h(t-v) is the value of y(t) at that t.
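A minimal numerical sketch of this flip-and-slide picture (the rectangular input and exponential h below are illustrative choices):

```python
import numpy as np

# "Flip and slide" picture of convolution: y(t0) is the area under the
# product x(v) * h(t0 - v), as v runs over the real axis.
dt = 0.001
v = np.arange(-5, 5, dt)
x = np.where((v >= 0) & (v < 1), 1.0, 0.0)   # input x(v): unit rectangle

def y_at(t0):
    h_flip = np.where(t0 - v >= 0, np.exp(-(t0 - v)), 0.0)  # h(t0 - v)
    return np.sum(x * h_flip) * dt           # area under the product

# Closed form for this pair: y(t) = 1 - e^{-t} on 0 <= t < 1.
assert abs(y_at(0.5) - (1 - np.exp(-0.5))) < 5e-3
```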
Conclusion:
In this lecture you have learnt:
The given input x(t) is approximated with narrow rectangular pulses, each scaled to the appropriate value of x(t) at the
corresponding t (which lies at the midpoint of the base of width d). This is called the staircase approximation of x(t).
By merely knowing the impulse response one can predict the response of the signal x(t) by using the given formula for
convolution.
If we are given unit-step response, we can calculate unit-impulse response by differentiating the unit-step response .
If we are given unit-impulse response, we can calculate unit-step response by taking running integral of unit-impulse response .
The convolution (x*h)(t) is the common (shaded) area enclosed under the curves x(v) and h(t-v) as v varies over the entire real
axis.
As t increases, h(t-v) can be considered to be a train moving towards the right and at each point on the v -axis the common area
under the product x(v) and h(t-v) is the value of y(t) at that t.
We shall now discuss the important properties of convolution for LTI systems.
1) Commutative property :
By the commutative property, the following equations hold true:
a) Discrete time:
Note :
1. 'n' remains constant during the convolution operation, so 'n' remains constant in the substitution n-k = l even as 'k' and 'l' change.
2. l goes from −∞ to +∞; this would not have been so had 'k' been bounded (e.g. 0 < k < 11 would make n−11 < l < n).
b) Continuous Time:
Proof:
Thus we have proved that convolution is commutative in both discrete and continuous time.
Thus the following two systems give the same output y(t): one with input signal x(t) and impulse response h(t), and the other with input
signal h(t) and impulse response x(t).
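A quick numerical check of commutativity on arbitrary finite sequences (swapping the roles of input and impulse response):

```python
import numpy as np

# Commutativity of discrete convolution: x * h == h * x.
x = np.array([1.0, 2.0, -1.0, 0.5])   # "input"
h = np.array([0.5, 0.25, 0.125])      # "impulse response"
assert np.allclose(np.convolve(x, h), np.convolve(h, x))
```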
2) Distributive Property :
By this property we mean that convolution is distributive over addition.
a) Discrete :
b) Continuous :
A parallel combination of LTI systems can be replaced by an equivalent LTI system which is described by the sum of the individual
impulse responses in the parallel combination.
3) Associative property
a) Discrete time :
Making the substitutions p = k and q = (l - k) and comparing the two equations makes our proof complete.
Note: As k and l go from −∞ to +∞, so do p and q.
b) Continuous time :
Let us substitute
Doing some further algebra helps us see equation (2) transforming into equation (1), i.e. essentially they are the same. The limits are
also the same. Thus the proof is complete.
Implications
This property (Associativity) makes the representation y[n] = x[n]*h[n]*g[n] unambiguous.
From this property, we can conclude that the effective impulse response of a cascaded LTI system is given by the convolution of the
individual impulse responses.
Consequently, the unit impulse response of a cascaded LTI system is independent of the order in which the individual LTI systems are
connected.
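Both the distributive and associative properties can be checked numerically on short sequences (chosen arbitrarily here, and padded to a common length so the responses can be added):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5])
h1 = np.array([1.0, 1.0, 0.0])    # padded to the length of h2
h2 = np.array([0.5, -0.5, 0.25])

# Distributive: a parallel combination is equivalent to one system with h1 + h2.
parallel = np.convolve(x, h1) + np.convolve(x, h2)
assert np.allclose(parallel, np.convolve(x, h1 + h2))

# Associative: the order of systems in a cascade does not matter.
casc12 = np.convolve(np.convolve(x, h1), h2)
casc21 = np.convolve(np.convolve(x, h2), h1)
assert np.allclose(casc12, casc21)
```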
Note: All the above three properties are certainly obeyed by LTI systems but need not hold for non-LTI systems, as seen from the following
example:
Hence
5) Invertibility :
A system is said to be invertible if there exists an inverse system which when connected in series with the original system produces an
output identical to the input.
We know that
6) Causality :
a) Discrete time:
{ By Commutative Property }
In order for a discrete-time LTI system to be causal, y[n] must not depend on x[k] for k > n. For this to be true, the h[n-k] corresponding to the x[k]
for k > n must be zero. This then requires the impulse response of a causal discrete-time LTI system to satisfy the condition h[n] = 0 for n < 0.
Essentially, the system output depends only on the past and the present values of the input.
Proof : ( By contradiction )
Suppose, in particular, that h[k] ≠ 0 for some k < 0.
In order for a continuous-time LTI system to be causal, y(t) must not depend on x(v) for v > t. For this to be true, the h(t-v) corresponding to the x(v) for v > t
must be zero.
This then requires the impulse response of a causal continuous-time LTI system to satisfy the condition h(t) = 0 for t < 0.
As stated before in the discrete time analysis,the system output depends only on the past and the present values of the input.
Proof : ( By contradiction )
Suppose there exists some t < 0 at which h(t) ≠ 0.
Now consider
Since,
7) Stability :
A system is said to be stable if every bounded input produces a bounded output. For an LSI system this reduces to a criterion on the
impulse response:
Theorem: Stability ⟺ Σk |h[k]| < ∞, i.e. the impulse response is absolutely summable (in continuous time, absolutely integrable:
∫ |h(t)| dt < ∞).
Proof of sufficiency: Suppose Σk |h[k]| = S < ∞, and let the input be bounded, |x[n]| ≤ B for all n. Then
|y[n]| = |Σk h[k] x[n−k]| ≤ Σk |h[k]| |x[n−k]| ≤ B·S < ∞,
so the output is bounded.
Proof of necessity: Take any n, and feed in the bounded input x[n−k] = sgn(h[k]) (so |x| ≤ 1). Then y[n] = Σk |h[k]|. If the impulse
response is not absolutely summable, this output is unbounded even though the input is bounded, so the system is not stable.
Hence proved.
Conclusion:
In this lecture you have learnt:
Convolution obeys commutative, distributive (over addition) and associative properties in both continuous and discrete
domains.
Commutativity implies the system with input signal x(t) and impulse response h(t) and the other with input signal h(t) and impulse
response x(t) both give the same output y(t).
Distributivity implies a parallel combination of LTI systems can be replaced by an equivalent LTI system which is described by
the sum of the individual impulse responses in the parallel combination.
Associativity implies the unit impulse response of a cascaded LTI system is independent of the order in which the individual LTI
systems are connected.
A system is memoryless if and only if h[n] = 0 for all non-zero n .
An LTI system is invertible if the convolution of its impulse response with that of its inverse results in the unit impulse.
For a causal discrete time LTI system, h[n] = 0 for all n<0. (similarly for continuous time)
For a stable system, the impulse response must be absolutely summable (absolutely integrable in continuous time).
Congratulations, you have finished Lecture 10.
2. Memory of the System: The system obviously possesses memory, as the derivative operator requires values of the input over an
interval around the instant, not just at the instant itself.
3. Causality of the System: To answer this we must consider the left, right and centre derivatives separately. Clearly the left
derivative is causal, while the centre and right derivatives may or may not be so. However, for a differentiable function all
three derivatives are equal, and the system is indeed causal.
4. Stability of the System: Consider the input signal shown below. Clearly we see that a bounded input does not lead to a
bounded output which becomes obvious at points where the derivative of the input signals tends to infinity. Thus the system is
not stable.
Exercise: Give an example of a bounded input signal whose derivative is not bounded as time tends to infinity.
Consider x(t) = sin(t²) .............................. (bounded)
Then x'(t) = 2t·cos(t²) .......................... (unbounded)
5. Invertibility of the System: Is the derivative operator invertible? No, because when we consider the class of constant inputs,
the output is always zero. Thus the derivative operator is not one-to-one. However, the system is invertible up to an
additive constant.
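The exercise above can be checked numerically:

```python
import numpy as np

t = np.linspace(0, 50, 1_000_000)
x = np.sin(t ** 2)             # bounded input: |x(t)| <= 1 for all t
dx = np.gradient(x, t)         # numerical derivative, ~ 2 t cos(t^2)

print(np.abs(x).max())         # never exceeds 1
print(np.abs(dx).max())        # keeps growing with t (envelope ~ 2t)
```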
Linear Constant Coefficient Differential and Difference Equations
Equations of the form shown below are called linear constant coefficient differential equations:
The above description is in the implicit form. Hence it does not yield a unique interpretation. But we can make the system LSI by
adding the following conditions:
1. Interpret the equation as holding for all time.
2. If we are concerned with only limited interval of time, then impose zero initial conditions.
In order to solve a differential equation we must specify one or more auxiliary conditions. Auxiliary conditions are required to
characterize the system completely. Different choices for the auxiliary conditions can lead to different relationships between the
input and output. We want the system to be LSI, and hence we specify initial rest conditions.
We specify the initial rest conditions as follows:
For t ≤ t0, if x(t) = 0 then we assume that y(t) = 0, and therefore the response for t > t0 can be calculated from the differential
equation with initial conditions
Note: It should be noted that in the initial rest conditions, t0 is not a fixed point in time but rather depends on the input x(t). We
now prove that with the initial rest conditions the system is indeed LSI.
We first prove linearity. Suppose x1(t) and x2(t) are two arbitrary signals such that x1(t) = 0 for t < t1 and x2(t) = 0 for t < t2. Let
y1(t) and y2(t) be the system outputs for x1(t) and x2(t) respectively. Then we have to prove that the system output
for the input x3(t) = a·x1(t) + b·x2(t) is y3(t) = a·y1(t) + b·y2(t).
Without loss of generality we can assume that t1 < t2. Using the initial rest conditions, we see that for y3(t), t0 = t1. Due to the
linearity of the derivative operator, a·y1(t) + b·y2(t) satisfies the differential equation. Also, a·y1(t) + b·y2(t) satisfies the initial
conditions with t0 = t1. But by the Uniqueness Theorem for differential equations, there is a unique solution satisfying these conditions. Hence we have
y3(t) = a·y1(t) + b·y2(t). Thus we have established linearity.
Now we prove shift invariance.
Suppose x1(t) is an arbitrary signal such that x1(t) = 0 for t < t0. Let x2(t) = x1(t-T), and let y1(t) and y2(t) be the system outputs
for x1(t) and x2(t) respectively. Then we have to show that y2(t) = y1(t-T).
We proceed as we had done previously. y1(t-T) satisfies the differential equation because of the shift invariance of the derivative
operator. y2(t) satisfies the initial conditions with t0 replaced by t0+T, and y1(t) satisfies the initial conditions with t0. From this it is
easy to see that y1(t-T) satisfies the initial conditions with t0 replaced by t0+T. Finally, by invoking the uniqueness theorem, we can
conclude that y2(t) = y1(t-T), which is what we sought to prove.
Also note that the above system is causal. This is clear from the following argument:
Consider two inputs p(t) and q(t) such that p(t) = q(t) for t < T. Let r(t) and s(t) be their respective outputs. Now let
x(t) = p(t) − q(t). Thus x(t) = 0 for t < T. From the initial rest conditions we get the output for x(t) as y(t) = 0 for t < T. But from the linearity property we have
y(t) = r(t) − s(t) = 0 for t < T. Thus r(t) = s(t) for t < T, and the system is causal.
Example: Consider the following RC system. If voltage across C is 2V initially, show that the system is not LSI.
If the capacitor has 2V initially across its terminals, then the above system is not initially at rest.
Let x0 and x0 be two copies of an input applied to the system as shown below. Now, if the system were linear, the output voltage across the
capacitor at time t = 0 would have been 2V + 2V = 4V, but the initial voltage across the capacitor will still be only 2V.
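This failure of linearity can be sketched with a crude Euler simulation of the RC equation dy/dt = (x − y)/RC, assuming RC = 1 and a step input (illustrative choices):

```python
import numpy as np

# Euler simulation of dy/dt = (x - y)/RC with RC = 1 (assumed for illustration).
def rc_output(x, y0, dt=0.001):
    y = np.empty(len(x))
    y[0] = y0                        # initial capacitor voltage
    for k in range(1, len(x)):
        y[k] = y[k - 1] + dt * (x[k - 1] - y[k - 1])
    return y

x = np.ones(2000)                    # a step input
y1 = rc_output(x, y0=2.0)            # capacitor charged to 2 V initially
y2 = rc_output(2 * x, y0=2.0)        # doubled input, same initial charge

# Homogeneity fails: doubling the input does not double the output,
# because the initial 2 V does not double along with it.
assert not np.allclose(y2, 2 * y1)
```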
If you convolve x(t) with h(t) then you get the following:
Thus we see that though we cannot interpret the object h(t) as an ordinary function, its behaviour under convolution with x(t),
yielding the derivative, can be understood.
In the above analysis we have come across certain mathematical tools of interest known as singularity functions.
While a function is usually defined at every value of the independent variable, the primary importance of the unit impulse is not
what it is at each value of t, but rather what it does under convolution. So from the point of view of linear system analysis,
we alternatively define the unit impulse as the signal for which
x(t) = x(t) * δ(t), for any signal x(t).
All the properties of the unit impulse that we need can be obtained from this operational definition.
Note:
The above definition of the unit impulse follows from the fact that the impulse response of the identity system is the unit impulse itself,
and the output for any input x(t) is the convolution of x(t) with the unit impulse. But the output of the identity system is the input x(t)
itself, and hence
x(t) = x(t) * δ(t)
Singularity functions are functions which can be defined operationally in terms of their behaviour under convolution. Consider the
derivative system. The impulse response of this system is the derivative of the unit impulse, and it is called the unit doublet. It is
denoted by u1(t). Its working definition is
dx(t)/dt = x(t) * u1(t), for any signal x(t).
Similarly we define u2(t), the second derivative of the unit impulse, by
d²x(t)/dt² = x(t) * u2(t)
Discrete Systems
Note: Realizability implies giving a physical structure to the solution with known elements. We shall see later why integer delay is
not exactly realizable in a continuous system.
Conclusion:
In this lecture you have learnt:
The derivative operator system is LSI.
In case of a derivative operator system, a bounded input does not lead to a bounded output which becomes obvious at
points where the derivative of the input signals tends to infinity. Thus the system is not stable.
The derivative operator system is invertible upto an additive constant.
The systems defined by linear constant coefficient differential equations for continuous variables (and for discrete
variables, the corresponding equations are called the linear constant coefficient difference equations) are causal.
Integer delay is not exactly realizable in a continuous system.
Congratulations, you have finished Lecture 11.
Joseph Fourier
Fourier Transform: Every periodic signal can be written as a summation of sinusoidal functions of frequencies which are multiples of a
constant frequency (known as fundamental frequency). This representation of a periodic signal is called the Fourier Series. An
aperiodic signal can always be treated as a periodic signal with an infinite period. The frequencies of two consecutive terms are
infinitesimally close and summation gets converted to integration. The resulting pattern of this representation of an aperiodic signal is
called the Fourier Transform.
Signals Treated as Vectors
Any vector in N-dimensional space can be fully specified by a set of N numbers (i.e. its components in various directions). Similarly, we
can also treat signals in continuous and discrete time as special cases of vectors with infinite dimensions. Why do we need signals
to be treated as vectors?
The mathematical analysis of vectors is highly advanced compared to that of signals. Treating signals as vectors helps us to attribute many
additional properties to them. Moreover, we are comfortable treating signals as vectors in problems involving a number of signals.
Countable Infinity:
A set is called countably infinite if and only if all its elements have a 1-1 correspondence with the set of natural numbers or any other
countably infinite set. We can easily see that the set of integers satisfies this property. So we can equivalently call a set countably
infinite if its elements have a 1-1 correspondence with the integers (this automatically ensures that the condition is satisfied for the
natural numbers). Every rational number can be taken as a tuple of two integers (numerator and denominator), making the set of rational
numbers also countably infinite.
Exercise: Prove that the set of real numbers is not countably infinite.
Proof: Suppose the set of real numbers is countably infinite. Then every real number is mapped injectively onto the set of natural numbers.
Let r_k, where k ∈ N, be the k-th real number. Now we construct a real number r as follows: the integral part of r is 0, and the k-th
decimal place of r is any digit that is different from the k-th decimal place of r_k. This number r which we have constructed differs from
every r_k at the k-th decimal place. This contradicts our assumption that the set of real numbers is countably infinite.
Note:
A Discrete Signal x[n] can be thought of as a " Vector " with countably infinite dimensions.
A Continuous Signal x(t) can be thought of as a vector with uncountably infinite dimensions.
Dot Product (Inner Product) of Vectors :
In simple language, the Dot product (Inner product or Scalar product) is a binary operation which takes two vectors and returns a scalar
quantity. The Dot product of two vectors X and Y, both of 'N' dimensions is a scalar which does not depend on the choice of the
orthogonal system with N directions. It is the Projection of one vector on the other i.e. component of one vector along another vector. By
its very definition, dot product of a vector with itself is always non-negative and is the square of its magnitude. Take 2 vectors, X = (
x[1], x[2], ... , x[N] ) & Y = ( y[1], y[2], ... , y[N] ). Here X and Y in general can be complex. Then the dot product of X with Y is given
by:
What is the purpose of taking complex conjugates of the components of Y?
The inner product of a vector with itself must be non-negative by definition, as any vector is wholly contained in itself. If X is a complex
vector, this condition requires the complex conjugate of Y to be taken in the above definition.
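A small numerical illustration of why the conjugate is needed:

```python
import numpy as np

z = np.array([1 + 2j, 3 - 1j])

# Without conjugation, the "inner product" of z with itself need not be real:
print(np.sum(z * z))                     # a complex number

# With conjugation (np.vdot conjugates its first argument) it is real and >= 0:
p = np.vdot(z, z)
assert p.imag == 0 and p.real >= 0
assert np.isclose(p.real, np.sum(np.abs(z) ** 2))   # the squared magnitude
```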
Conditions for a function to be an inner product in a vector space:
An operation <X, Y> between two vectors X and Y can be called an inner product if and only if it satisfies the following conditions:
Now let's define the inner product for continuous and discrete time as shown below.
Clearly, each of these definitions satisfies the necessary conditions for it to be described as an inner product.
Continuous Time: Consider X(t) and Y(t) as two signals in continuous time.
Compare this with the definition of dot product for two finite dimensional vectors
We will now introduce two new terms - "Eigenvalue" and "Eigensignal". These concepts will be used later along with the concept of
inner product of signals to introduce the Fourier series.
"Eigen" is a German word meaning "one's own". In the context of Signals & Systems, eigensignals and eigenvalues are described as
follows :Consider a system with impulse response h(t). A signal x(t) applied to this system produces an output y(t) which is same as the input
signal x(t) except for multiplication by a scalar. Then, the signal x(t) is known as an Eigensignal of the system and the multiplication
factor is called the Eigenvalue corresponding to the eigensignal. Mathematically,
Here, x(t) is the eigensignal and A is the eigenvalue corresponding to the eigensignal x(t). (Note that A is a constant.)
Complex Exponential signal as an Eigensignal: Consider an LSI system with impulse response h(t). We will verify that the complex
exponential e^{j2πft} is an eigensignal of the LSI system. The output y(t) of the LSI system, corresponding to the input x(t) = e^{j2πft},
can be obtained by convolving x(t) with the impulse response h(t).
Noting that the exponential factors out of the convolution integral, and that stability of the LSI system guarantees convergence of y(t),
we have:
y(t) = H(f) e^{j2πft}, where H(f) = ∫ h(τ) e^{−j2πfτ} dτ.
Hence, we have shown that complex exponential signals are eigensignals of LSI systems (when they do produce a convergent
output). The constant H(f), for each fixed frequency f, is the eigenvalue corresponding to the exponential signal. Now, the eigenvalue
can also be thought of as the projection of the impulse response h(t) of the system along the signal e^{j2πft}, i.e. the
inner product between the impulse response and the input signal.
This special property of the complex exponential function with respect to LSI systems is one of the inspirations for trying to represent
signals in terms of complex exponentials. We shall see soon the consequences of this property.
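The eigensignal property can be verified numerically for a discrete-time LSI system with an arbitrarily chosen FIR impulse response:

```python
import numpy as np

# A complex exponential is an eigensignal of a discrete LSI system:
# convolving e^{j w n} with h returns H(w) * e^{j w n}, where
# H(w) = sum_k h[k] e^{-j w k} is the eigenvalue.
h = np.array([1.0, 0.5, 0.25])   # illustrative FIR impulse response
w = 0.7                          # arbitrary frequency

def y_at(n):
    # direct evaluation of the convolution sum at index n
    return sum(h[k] * np.exp(1j * w * (n - k)) for k in range(len(h)))

H = sum(h[k] * np.exp(-1j * w * k) for k in range(len(h)))  # eigenvalue

for n in (-5, 0, 3):
    assert np.isclose(y_at(n), H * np.exp(1j * w * n))
```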
Conclusion:
In this lecture you have learnt:
Transforms look at signals from a domain other than the natural domain. Transforms are essential for understanding some
properties of a signal. The Fourier transform is an important transform to begin with.
A Discrete Signal x[n] can be thought of as a vector with countably infinite dimensions.
A Continuous Signal x(t) can be thought of as a vector with uncountably infinite dimensions.
We can define inner products for signals, and thus go on to define eigensignals and eigenvalues for a system:
for continuous signals
for discrete signals
Such a representation of a periodic signal as a combination of complex exponentials of discrete frequencies, which are multiples of the
fundamental frequency of the signal, is known as the Fourier Series Representation of the signal.
Inner product
The set of periodic signals with period T form a vector space.
We shall first show these vectors are mutually orthogonal. In other words, we show that:
Thus, we have shown that this set of complex exponentials forms an orthogonal set in the vector space of all periodic signals with
period T. Indeed, if we restrict ourselves to a certain class of signals in this vector space (those that satisfy the Dirichlet Conditions,
which will be discussed in the next lecture), one can show that the above set of complex exponentials forms a basis for this class. i.e.:
signals in this class can be expressed as a linear combination of these complex exponentials. In other words, such signals permit a
Fourier Series representation.
Assuming the Fourier Series representation of a signal x(t), with period T exists, it is easy to find the Fourier Series coefficients, using
the orthogonality of the basis set of complex exponentials.
Multiplying both sides by a conjugated basis exponential and integrating over one period isolates each coefficient.
Let the summation, with coefficients given by the formula in the previous lecture, be the Fourier series expansion corresponding to the
periodic signal x(t). Then the above summation may or may not converge to the actual signal x(t).
We shall discuss the convergence of the Fourier series representation of a periodic signal in two contexts, namely Pointwise
convergence and Convergence in squared norm. We will first see what each of these terms means and then discuss the conditions
under which each kind of convergence takes place.
For the subsequent discussion let,
Pointwise Convergence
Pointwise convergence means the series converges to the original function at every point, i.e. the Fourier Series representation of a signal
x(t) is said to converge pointwise to the signal x(t) if:
that is to say,
Pointwise convergence implies convergence in squared norm. As convergence in squared norm is a more relaxed condition than pointwise
convergence, convergence in the squared norm sense covers a much larger domain of signals than pointwise convergence.
Finally, we now move on to the conditions for these forms of convergence.
Dirichlet Conditions For Pointwise Convergence
Consider the following 3 conditions that may be imposed on a periodic signal x(t) :
1) x(t) should be absolutely integrable over a period.
A signal that does not satisfy this condition is x(t) = tan(t) as:-
2) x(t) should have only a finite number of discontinuities over one period. Furthermore, each of these discontinuities must be finite.
An example of a function which has an infinite number of discontinuities is illustrated below. The function is shown over one of the periods.
3) The signal x(t) should have only a finite number of maxima and minima in one period. An example of a function which has
infinite number of maxima and minima is: a periodic signal with period 1, defined on (0,1] as:
If the signal satisfies the above conditions, then at all points where the signal is continuous, the Fourier Series converges to the signal.
However, at points where the signal is discontinuous (the Dirichlet conditions allow a finite number of discontinuities in a period), the Fourier
Series converges to the average of the left and the right hand limits of the signal. Mathematically, at a point of discontinuity
In practice, the restrictions imposed on signals by the Dirichlet conditions are not very severe, and most of the signals we will deal with
satisfy these conditions.
Condition for convergence in squared norm sense
If, for a periodic signal x(t) with period T,
converges, then its Fourier Series converges to it in the squared norm sense.
As is expected, this is a far more relaxed constraint than the Dirichlet conditions.
At this point let us define some terms which will be of use to us later in the course:
|x(t)|² is called the instantaneous power or energy density of the signal x(t).
If the integral of |x(t)|² over all time converges, x(t) is said to be a finite energy signal, and the value of the integral is called the
total energy of the signal.
Gibbs Phenomenon
We can approximate a signal having a Fourier Series expansion by taking a finite number of terms of the expansion.
i.e:
is also called a Partial Sum. We would obviously expect that as the number of terms taken is increased, this summation would
become a better and better approximation to x(t), i.e
Indeed this happens in regions of continuity of the original signal. However, at the points of discontinuity in the original signal, an
interesting phenomenon is observed. The partial sum oscillates near the point of discontinuity. We might expect these oscillations to
decrease as the number of terms taken is increased. But surprisingly, as the number of terms taken is increased, although these
oscillations get closer and closer to the point of discontinuity, their amplitude does not decrease to zero but tends to a non-zero limit.
This phenomenon is known as the Gibbs Phenomenon, after the mathematician who accounted for these oscillations.
The illustration below shows the various Fourier approximations of a periodic square wave.
Mathematically, this means if the periodic signal has discontinuities, its Fourier Series does not converge uniformly.
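The persistence of the overshoot can be seen numerically from partial sums of a ±1 square wave of period 1 (its Fourier series contains only odd sine harmonics):

```python
import numpy as np

def partial_sum(t, N):
    """Partial Fourier sum of a +/-1 square wave of period 1 (odd harmonics)."""
    s = np.zeros_like(t)
    for k in range(1, N + 1, 2):
        s += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

t = np.linspace(1e-4, 0.1, 20000)      # just to the right of the jump at t = 0
for N in (9, 99, 999):
    peak = partial_sum(t, N).max()
    print(N, peak)   # the peak does not fall back to 1; it tends to ~1.179
```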
Conclusion:
Although this covers a broad class of functions, it puts a serious restriction on the function: periodicity. So the next question
that naturally pops up in one's mind is, "Can we extend our idea of the Fourier series so as to include non-periodic functions?" This
precisely is our goal for this part, the basic inspiration being that an aperiodic signal may be looked at as a periodic signal with an
infinite period. Note that what follows is not a mathematically rigorous exercise, but it will help develop an intuition for the Fourier
Transform of aperiodic signals.
The Fourier Transform
Let's start with a simple example. Consider the following function, periodic with period T.
Clearly this is our familiar square wave. Let's see what happens to its frequency-domain representation as we let T approach
infinity. We know that the Fourier coefficients of the above function can be written as
where
reduces to :
Now, we know that every value of k in this equation gives us the coefficient corresponding to the k-th multiple of the fundamental
frequency of the signal. Let's plot the frequency-domain representation of the signal, which we shall also call the spectrum of the signal.
Note the horizontal axis represents frequency, although the points marked indicate only scale.
Now let's double the time period. The expression for the Fourier coefficients will become:
As the time period T increases, the distance between consecutive frequencies f0 in the spectrum will reduce, and we'll get the
following plot.
If we reduce the fundamental frequency by another half (that is, increase the time period by a factor of two), we'll get the following
frequency spectrum:
Notice as the period of the periodic signal is increased, the spacing between adjacent frequency components decreases.
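This can be checked numerically for a pulse train of unit width (tau = 1, an illustrative choice): the numerically integrated coefficients match the closed form (tau/T)·sinc(k·tau/T), sampled at frequencies k/T that pack together as T grows.

```python
import numpy as np

# Fourier coefficients of a pulse train (height 1, width tau = 1, period T):
# c_k = (tau/T) * sinc(k * tau / T), where np.sinc(x) = sin(pi x)/(pi x).
# As T grows, the sample frequencies k/T cover the same envelope more densely.
def coeff_numeric(k, T, tau=1.0, npts=100_001):
    t = np.linspace(-tau / 2, tau / 2, npts)
    return np.trapz(np.exp(-2j * np.pi * k * t / T), t) / T

for T in (2.0, 4.0, 8.0):
    for k in (0, 1, 3):
        c = coeff_numeric(k, T)
        assert np.isclose(c, (1.0 / T) * np.sinc(k * 1.0 / T), atol=1e-8)
```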
Flash File
Finally, when the period of the signal tends to infinity, i.e. the signal is aperiodic, the frequency spectrum becomes continuous.
By looking at the plots we can infer that as we increase the time period more and more, we get more and more closely spaced
frequencies in the spectrum, i.e. complex exponentials with closer and closer frequencies are required to represent the signal. Hence if
we let T approach infinity we get frequencies infinitesimally close to each other. This is the same as saying that we get every
possible frequency, and hence the whole continuous frequency axis. Thus our frequency spectrum will no longer be a series, but will be a
continuous function. The representation changes from a summation over discrete frequencies to an integration over the entire
frequency axis. The function which (like the Fourier Series coefficients) gives what is, crudely, the strength of each complex exponential
in the representation is formally called the Fourier Transform of the signal. The representation takes the form:
where X(f) is the Fourier Transform of x(t). Note the similarity of the above equation with the Fourier Series summation in light of
the preceding discussion.
This equation is called the Inverse Fourier Transform equation, x(t) being called the Inverse Fourier Transform of X(f).
Such a representation for an aperiodic signal exists, of course subject to some conditions, but we'll come to those a little later.
Recap:
Under certain conditions, an aperiodic signal x(t) has a Fourier transform X(f) and the two are related by:
(Fourier Transform equation)
Now, lets go on to the conditions for existence of the Fourier Transform. Again notice the similarity of these conditions with the Dirichlet
conditions for periodic signals.
Consider an aperiodic signal x(t). Its Fourier Transform exists (i.e. the Transform integral converges) if:
1) x(t) is absolutely integrable, i.e. ∫ |x(t)| dt converges.
2) x(t) has only a finite number of maxima and minima in any finite interval.
3) x(t) has only a finite number of discontinuities in any finite interval. For example, the so-called Dirichlet function (equal to 1 at rational points and 0 at irrational points) will not satisfy this condition.
Under these conditions, the inverse transform integral will converge to x(t) at all points of continuity of x(t). At points of discontinuity of x(t), this integral converges to the average of the left hand limit and the right hand limit of x(t) at that point.
Conclusion:
In this lecture you have learnt:
An aperiodic signal may be looked at as a periodic signal with an infinite period.
We learnt what the inverse Fourier transform is and derived its equation.
We saw the Dirichlet conditions for convergence of the Fourier Transform.
Let us now gain some additional insight into the Fourier Transform using this system notion.
What would Fourier transformation of Y(t), treated as a time signal, yield? Recall the Inverse Transformation equation above, y(t) = ∫ Y(f) e^(j2πft) df. Renaming the variables and putting −f in place of t:

y(−f) = ∫ Y(t) e^(−j2πft) dt

Therefore, y(−f) is the Fourier transform of Y(t) (where Y(f) is the Fourier transform of y(t))! This remarkable relationship between a signal and its Fourier transform is called the Duality of the Fourier Transform, i.e.:

if y(t) has Fourier transform Y(f), then Y(t) has Fourier transform y(−f).

Duality implies a very remarkable relationship between the Fourier transform and its inverse: the forward transform of a signal evaluated at f equals its inverse transform evaluated at −f.
This gives us a very important insight into the nature of the Fourier transform. We will use it to prove many dual relationships: if some
result holds for the Fourier Transform, a dual result will hold for the Inverse transform as well. We will encounter some examples soon.
2. Linearity
Both the Fourier transform and its inverse are linear systems. Thus the Fourier transform of a linear combination of two signals is the same linear combination of their respective transforms. The same, of course, holds for the Inverse Fourier transform as well.
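Linearity is easy to check numerically. A minimal sketch using numpy's DFT as a discrete stand-in for the continuous transform (the signals and constants below are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
a, b = 2.0, -3.0

# Transform of a linear combination equals the same linear combination of transforms
lhs = np.fft.fft(a * x + b * y)
rhs = a * np.fft.fft(x) + b * np.fft.fft(y)
```

The two arrays agree to machine precision, as linearity of the (I)DFT follows directly from linearity of the defining sums.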
3. Memory
The independent variable for the input and output signals in these systems is not the same, so technically we can't talk of memory with
respect to the Fourier transform and its inverse. But what we can ask is: if one changes a time signal locally, will only some
corresponding local part of the transform change? Not quite.
Introducing a local kink like in the above time-signal causes a large, spread-out distortion of the spectrum. In fact, the more local the
kink, the more spread-out the distortion!
By duality, one can say the same about the inverse Fourier transform. That is, if x(.) has Fourier transform X(.), then using Duality and the above discussion, introducing a local distortion in X(.) will cause a wide-spread distortion in x(−.), and hence (up to time-reversal) in x(.) itself, since x(.) is the inverse Fourier transform of the changed X(.). Thus introducing a local kink in the spectrum of a signal changes the signal drastically.
4. Shift invariance
Again, we can't talk of shift variance/invariance with these systems as the independent variable for the input and output signals is not the
same. But we can examine what happens to the spectrum of a signal on time-shifting it, and vice-versa.
Notice that nowhere has the magnitude of X(f) changed. Only a phase (or argument) change that is linear in frequency has taken
place.
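The time-shift behaviour can be verified numerically with the DFT (a discrete stand-in for the continuous transform; the pulse shape and the shift of 10 samples are arbitrary assumed values):

```python
import numpy as np

N = 128
n = np.arange(N)
x = np.exp(-0.05 * (n - 40) ** 2)   # a smooth pulse
x_shift = np.roll(x, 10)            # circular time-shift by 10 samples

X = np.fft.fft(x)
Xs = np.fft.fft(x_shift)

# The shift leaves |X(f)| untouched and multiplies X(f) by a linear-phase factor
f = np.fft.fftfreq(N)               # frequencies in cycles/sample
expected = X * np.exp(-2j * np.pi * f * 10)
```

The magnitude spectra of the original and shifted pulses are identical; only a phase that is linear in frequency has been introduced, exactly as stated above.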
Let us, using Duality examine the effect of translating the spectrum on the time-signal.
5. Stability
Are our systems BIBO stable? i.e.: Will a bounded input necessarily give rise to a bounded output? No.
The integrals that describe the two systems need not converge for a bounded input signal. e.g.: they don't converge for a non-zero
constant input signal.
Now that we have come to the issue of the Fourier transform and the Inverse Fourier transform not converging for a constant input
signal, let us see what the Transform of the unit impulse is.
Note that the impulse, far from satisfying Dirichlet's conditions, is not even a function. It falls in the class of generalized functions. Thus
what we are doing is extending our idea of the Fourier Transform. Why? Because we will find it useful.
That is, the Fourier transform of the unit impulse is the identity function. Thus, even though the inverse equation does not converge for the identity function, we say that the Fourier Transform of the unit impulse is the identity function.
Why stop here? Consistent with duality, we say that the Fourier Transform of the identity function is the unit impulse:
We will even apply the time-shift and frequency-shift properties we have just proved to make further generalizations:
Now since the Fourier transformation is linear, the above result can be used to obtain the Fourier Transform of the periodic signal x(t):
Therefore,
By putting this transform in inverse Fourier transform equation, one can indeed confirm that one obtains back the Fourier series
representation of x(t).
Thus, the Fourier transform of a periodic signal having Fourier series coefficients c_k, with f_0 being the fundamental frequency, is a train of impulses located at the multiples of the fundamental frequency, the strength of the impulse at k f_0 being c_k.
Basic Properties of Fourier Transform
Consider a signal x(t) with Fourier transform X(f). We'll see what happens to the Fourier transform of x(t) on time-reversal and conjugation.

Time-reversal: Substitute u = −t in the transform integral for x(−t):

∫ x(−t) e^(−j2πft) dt = ∫ x(u) e^(j2πfu) du = X(−f)

Therefore, the Fourier transform of x(−t) is X(−f).

Conjugation: Conjugating the transform integral,

∫ x*(t) e^(−j2πft) dt = ( ∫ x(t) e^(j2πft) dt )* = X*(−f)

Therefore, the Fourier transform of x*(t) is X*(−f).

Applying these results to periodic signals (we have just seen their Fourier transform), you see that if c_k are the Fourier series coefficients of a periodic signal x(t), then c_(−k) are the Fourier series coefficients of x(−t), and c*_(−k) are the Fourier series coefficients of x*(t).

Starting with an even signal, x(−t) = x(t). Thus X(−f) = X(f), and, therefore, the Fourier transform of an even signal is even.
b) What can we say about the Fourier transform of a real signal x(t), with Fourier transform X(f)?
If x(t) is real, x(t) = x*(t), and hence X(f) = X*(−f). A transform satisfying this relation is called Conjugate Symmetric.
Conclusion:
In this lecture you have learnt:
The Fourier transformation is linear.
The Fourier transform of x(−t) is X(−f).
The Fourier transform of the conjugate of x(t) is the conjugate of X(−f).
The Fourier transform of an even signal is even.
The Fourier transform of a real signal is Conjugate Symmetric.
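The conjugate-symmetry property for real signals can be checked numerically. A sketch using numpy's DFT (the test signal is an arbitrary assumed value); for the length-N DFT, X(−f) corresponds to the bin X[N−k]:

```python
import numpy as np

N = 32
rng = np.random.default_rng(1)
x = rng.standard_normal(N)          # a real signal
X = np.fft.fft(x)

# Conjugate symmetry: X(-f) = conj(X(f)); for the DFT, bin -k is bin (N-k) mod N
X_reversed = X[(-np.arange(N)) % N]
```

For any real input, the frequency-reversed spectrum equals the conjugated spectrum, which is exactly the conjugate-symmetry statement above.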
Why such a spectrum? Because it's the simplest possible multi-valued function. Also, it is band-limited (i.e. the spectrum is non-zero in only a finite interval of the frequency axis), having a maximum frequency component f_m. Band-limited signals will be of interest to us later on.
x(t) is amplitude modulated with a carrier signal cos(2πf_c t). Thus if x(t) has spectrum X(f), the modulated signal x(t) cos(2πf_c t) has spectrum (1/2)[X(f − f_c) + X(f + f_c)], since multiplication by a complex exponential in time shifts the spectrum in frequency. If f_c > f_m, the two shifted copies do not overlap. At the receiver, demodulation retains the desired part of the transmitted spectrum and simply chops away the rest of the frequency axis.
Filters :
The simplest ideal filters aim at retaining a portion of the spectrum of the input in some pre-defined region of the frequency axis and
removing the rest.
A LOWPASS FILTER is a filter that passes low frequencies i.e. around f = 0 and rejects the higher ones, i.e: it multiplies the input
spectrum with the following:
A HIGHPASS FILTER passes high frequencies and rejects low ones, i.e. it multiplies the input spectrum by:
A BANDPASS FILTER passes a band of frequencies and rejects both higher and lower than those in the band that is passed, thus
multiplying the input spectrum by:
A BANDSTOP FILTER stops or rejects a band of frequencies and passes the rest of the spectrum, thus multiplying the input spectrum
by:
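These ideal filters are easy to sketch numerically: transform the input, zero out the unwanted part of the spectrum, and transform back. A minimal numpy sketch of an ideal lowpass filter (the DFT stands in for the continuous transform; the sample rate, cutoff and test frequencies are arbitrary assumed values):

```python
import numpy as np

fs = 1000.0                            # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 300 * t)

# Ideal lowpass: keep |f| <= cutoff, zero out the rest of the spectrum
X = np.fft.fft(x)
f = np.fft.fftfreq(len(x), d=1 / fs)
cutoff = 100.0
X[np.abs(f) > cutoff] = 0.0
y = np.fft.ifft(X).real                # only the 50 Hz component survives
```

A highpass, bandpass or bandstop version is obtained by changing only the boolean mask applied to the spectrum.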
How do these filters work? That is, what does multiplication of two signals in the frequency domain imply in the time domain?
If we multiply two Fourier transforms X(f) and H(f), let us see what the Inverse Fourier transform of this product is.
Consider the integral
We can interchange the order of integration, so long as the new double integral converges. We then note that the term inside the bracket is just the inverse Fourier transform of X(f) evaluated at t − τ, i.e. x(t − τ); the whole expression is therefore ∫ h(τ) x(t − τ) dτ = (x * h)(t).
Convolution theorem
It states:
If two signals x(t) and y(t) are Fourier Transformable, and their convolution is also Fourier Transformable, then the Fourier Transform of
their convolution is the product of their Fourier Transforms.
Thus we know:
The Convolution theorem says that convolution in the time domain corresponds to multiplication in the frequency domain:

F{x * y}(f) = X(f) Y(f)

Applying duality to this result, multiplication in the time domain corresponds to convolution in the frequency domain:

F{x y}(f) = (X * Y)(f)
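The convolution theorem can be checked numerically with the DFT, provided the transforms are zero-padded so that circular convolution matches the linear convolution (the signal lengths are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(20)
h = rng.standard_normal(30)

# Linear convolution in the time domain...
conv = np.convolve(x, h)               # length 20 + 30 - 1 = 49

# ...equals the inverse transform of the product of zero-padded transforms
N = len(conv)
product = np.fft.fft(x, N) * np.fft.fft(h, N)
conv_via_fft = np.fft.ifft(product).real
```

This is also how fast convolution is implemented in practice: two FFTs, a pointwise product, and one inverse FFT.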
Parseval's theorem
We now prove another very important theorem using the Convolution Theorem. We first give its statement:

Parseval's theorem states that the inner product between signals is preserved in going from the time to the frequency domain, i.e.

∫ x(t) y*(t) dt = ∫ X(f) Y*(f) df

where X(f), Y(f) are the Fourier Transforms of x(t), y(t) respectively.

If we take x(t) = y(t),

∫ |x(t)|² dt = ∫ |X(f)|² df

This is interpreted physically as: the energy calculated in the time domain is the same as the energy calculated in the frequency domain.
Hence proved.
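The energy form of Parseval's theorem can be checked numerically; note that for the length-N DFT the statement picks up a factor 1/N (the test signal is an arbitrary assumed value):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(256)

energy_time = np.sum(np.abs(x) ** 2)
# DFT version of Parseval: sum |X[k]|^2 must be divided by N
energy_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)
```

The two energies agree to machine precision, mirroring the continuous-time identity above.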
If the convolution between a periodic signal x(t) and some Fourier Transformable aperiodic signal h(t) converges, let's see what the Fourier transform of x*h looks like (assuming it exists). Note x*h is also periodic with the same period as x(t), and its Fourier transform is then also expected to be a train of impulses.
By the convolution theorem, the Fourier Transform of x*h is the product of the impulse-train transform of x(t) and H(f):

F{x * h}(f) = Σ_k c_k H(k/T) δ(f − k/T)

implying the Fourier series coefficients of x*h are c_k H(k/T).
Therefore, assuming a periodic signal x(t) has a Fourier series representation, and an aperiodic signal h(t) is Fourier transformable, if
x*h converges (and has a Fourier series representation), it is periodic with the same period as x(t) and its Fourier series coefficients are
the Fourier series coefficients of x(t) multiplied by the value of H(f) at that multiple of the fundamental frequency.
Conclusion:
In this lecture you have learnt:
Modulation refers to the process of embedding an information-bearing signal into a second, carrier signal.
A highpass filter, a bandpass filter and a bandstop filter were studied.
We saw the proof of the convolution theorem.
We obtained the Dual version of the Convolution Theorem .
Parseval's theorem's physical interpretation is as follows: the energy calculated in the time domain is the same as the energy calculated in the frequency domain.
The periodic convolution of two signals x(t) and h(t), periodic with a common period T, is defined as:

(x ⊛ h)(t) = (1/T) ∫_T x(τ) h(t − τ) dτ

Note the definition holds even if T is not the smallest common period for x(t) and h(t), due to the division by T. Thus we don't need m and n to be the smallest possible integers satisfying T_1 / T_2 = m / n in the process of finding T. Also, notice that the convolution is periodic with period T_1 as well as T_2; more on this later. Also, show for yourself that the periodic convolution is commutative, i.e. x ⊛ h = h ⊛ x.
Fourier Transform of the periodic convolution
Say x(t) is periodic with period T_1 and h(t) is periodic with period T_2, with T_1 / T_2 = m / n (where m and n are integers).
Thus m T_2 = n T_1 = T is a common period for the two.
We can expand x(t) and h(t) into Fourier Series with common fundamental frequency 1/T. If one compares the Fourier coefficients in these expansions with those in the expansions with the original fundamental frequencies, i.e. 1/T_1 and 1/T_2, we find that the coefficient of x(t) at frequency k/T is non-zero only when k is a multiple of n, and that of h(t) only when k is a multiple of m.
Now,
Their product can clearly be non-zero only when k is a multiple of m and n. Thus if p is the LCM (least common multiple) of m and n, we
have:
Parseval's Theorem
We now obtain the result equivalent to the Parseval's theorem we have already seen, in the context of periodic signals.
Let x(t) and y(t) be periodic with a common period T, with Fourier coefficients c_k and d_k respectively. Taking the periodic convolution of x(.) with the conjugated time-reversal of y(.), we get a periodic signal with Fourier coefficients c_k d_k*.
Put t = 0, to get:

(1/T) ∫_T x(t) y*(t) dt = Σ_k c_k d_k*

Compare this equation with the Parseval's theorem we had proved earlier.
If we take x = y, then T becomes the fundamental period of x and:

(1/T) ∫_T |x(t)|² dt = Σ_k |c_k|²

Note the left-hand side of the above equation is the power of x(t).
Note also that the periodic convolution of x(.) with its conjugated time-reversal yields a periodic signal with Fourier coefficients that are the modulus square of the coefficients of x(t). Then the above quantity represents the power of y(t), where T is a period common to x(t) and h(t).
The auto-correlation of a signal x(t) is defined as R_x(τ) = ∫ x(t + τ) x*(t) dt (note that the auto-correlation integral peaks at τ = 0).
Cross Correlation
The cross correlation between two signals x(t) and y(t) is defined as:

R_xy(τ) = ∫ x(t + τ) y*(t) dt

which is the convolution of x(.) with y*(−.). If x(t) = y(t − t_0), then, using the fact that the auto-correlation integral peaks at 0, the cross correlation peaks at τ = t_0.
It may be said that the cross-correlation function gives a measure of resemblance between shifted versions of the signals x(t) and y(t). Hence it is used in radar and sonar applications to measure distances. In these systems, a transmitter transmits a signal which, on reflection from a target, is received by a receiver. Thus the received signal is a time-shifted version of the transmitted signal. By seeing where the cross-correlation of these two signals peaks, one can determine the time shift, and hence the distance of the target.
The Fourier transform of the cross-correlation R_xy(τ) is of course X(f) Y*(f).
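The radar/sonar idea above can be sketched numerically: correlate a received (delayed) pulse against the transmitted one and locate the peak lag (the pulse shape, sample rate and delay are arbitrary assumed values):

```python
import numpy as np

fs = 100.0                                       # assumed sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
delay = 0.7                                      # "unknown" delay to estimate, s
x = np.exp(-((t - 0.5) ** 2) / 0.01)             # transmitted pulse
y = np.exp(-((t - 0.5 - delay) ** 2) / 0.01)     # received, time-shifted pulse

# Cross-correlate received against transmitted; the peak lag gives the delay
corr = np.correlate(y, x, mode="full")
lags = np.arange(-len(x) + 1, len(x)) / fs
estimated = lags[np.argmax(corr)]
```

The estimated lag equals the true delay (here to within one sample), and delay times distance resolution in a real system is set by the sample rate and pulse bandwidth.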
Conclusion:
In this lecture you have learnt:
The periodic convolution (circular convolution) of x(.) with h(.), both periodic with a common period T, is defined as (1/T) ∫_T x(τ) h(t − τ) dτ.
The Fourier coefficients of the periodic convolution are the products of the corresponding Fourier coefficients of x(.) and h(.).
Parseval's theorem in the context of periodic signals relates the power of the signal to its Fourier coefficients: (1/T) ∫_T |x(t)|² dt = Σ_k |c_k|².
Objectives
In this lecture you will learn the following
Behaviour of the Fourier transform and Fourier series under operations on the signal: differentiation and integration, time-shift, scaling of the independent variable, and multiplication by t.
Differentiation/Integration
Hence if x(t) has Fourier transform X(f), then dx/dt has Fourier transform (j2πf) X(f).

Now consider integration. Hence if y(t) = ∫ from −∞ to t of x(τ) dτ, then

Y(f) = X(f) / (j2πf) + (X(0)/2) δ(f)

the second term being an impulse in frequency that accounts for the average (DC) value of y(t). E.g.: let x(t) = δ(t); then y(t) is the unit step, whose transform is 1/(j2πf) + δ(f)/2.
Hence, x(t) and |a|^(1/2) x(at) have the same energy. Therefore such scaling is called energy-normalized scaling of the independent variable.
Time-shift
Recall that if x(t) is periodic then X(f) is a train of impulses at multiples of the fundamental frequency 1/T, where the strength of the impulse at k/T is the Fourier series coefficient c_k. We know the time-shift property of the Fourier transform: a shift by t_0 multiplies the transform by e^(−j2πf t_0). Thus if x(t) is periodic with period T, x(t − t_0) has Fourier series coefficients c_k e^(−j2πk t_0 / T).
Differentiation
If the periodic signal is differentiable, then differentiation multiplies the transform by j2πf. Thus if x(t) is periodic with period T, x'(t) has Fourier Series coefficients (j2πk/T) c_k.
If a > 0, x(at) is periodic with period T/a, and c_k becomes the Fourier coefficient corresponding to frequency ka/T.
If a < 0, x(at) is periodic with period T/(−a), and c_k becomes the Fourier coefficient corresponding to frequency ka/T (the order of the coefficients is reversed, since ka/T is negative for positive k).
Multiplication by t
Multiplication by t of course will not leave a periodic signal periodic. But what we can do is multiply by t in one period, and then consider a periodic extension. I.e., if x(t) is periodic with period T, we look at the Fourier series coefficients of y(t), defined as follows:

y(t) = t x(t) in one period, say [−T/2, T/2), and extended periodically with period T otherwise.
Conclusion:
In this lecture you have learnt:
Properties of the Fourier transform and Fourier series under operations on the signal: differentiation and integration, time-shift, scaling of the independent variable, and multiplication by t.
Objectives:
Scope of this lecture:
Modern communication would not have been possible without the development of sampling theory. Sampling theory provides means and ways of processing Continuous Time (C.T.) data in the digital domain; the sampling theorem thus provides the bridge between CT and DT signals. By sampling we mean taking the instantaneous values of a CT signal at regular intervals of time. The topics covered in this lecture are listed below:
The concept of sampling of a signal .
The notion of apriori information & its use to represent a signal economically .
The most common approach towards economical signal representation.
What is Sampling?
Sampling is a methodology for representing a signal with less data than the signal itself.
We can do better than describing a signal by specifying the value of the dependent variable for each possible value of the independent variable. The concept is explained with the following examples, where x(t) is the dependent variable and t is the independent variable.
Let x(t) = A_0 sin(ωt + φ).

Here x(t) is defined by a sinusoidal relation with a phase constant, amplitude and angular frequency. Now the knowledge of these three parameters suffices to describe x(t) completely. Thus we are able to describe x(t) without specifying its value at every value of the independent variable t.
Consider another example given below:
Here x(t) is a polynomial in t of degree N and can be computed completely if we know the N + 1 coefficients.
Thus we observe that it is the apriori information we had that allowed us to represent these signals. In the first case we knew that x(t) is a pure sinusoid, and in the second case we knew that it was a polynomial of degree N.
Thus, using available apriori information to represent a signal economically is one way of defining sampling.
For three values t_1, t_2 and t_3 of t we get three independent equations. From the observed values of the signal x(t_1), x(t_2) and x(t_3) at t_1, t_2 and t_3, the parameters of the signal A_0, ω and φ can be determined.
Consider another example:
Let x(t) be a polynomial of order N, represented mathematically as before, and sampled at N + 1 distinct points. The samples and coefficients are related by a square (Vandermonde) system of linear equations, where the LHS encodes the 'apriori' information.
Thus we observe that this system can be solved, as the determinant of the square matrix on the LHS is non-zero so long as the sample points are distinct.
Thus, given the 'apriori' information, the entire information about the signal is contained in its value at N + 1 distinct points.
You have seen two examples, where 'apriori' information, and "samples" of a signal at certain values of the independent variable help us
reconstruct the signal completely.
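The polynomial example can be sketched numerically: given the apriori information that x(t) is a polynomial of degree N, its N + 1 samples determine the coefficients through a Vandermonde system (the coefficients and sample points below are arbitrary assumed values):

```python
import numpy as np

# Apriori information: x(t) is a polynomial of degree N = 3
coeffs_true = np.array([1.0, -2.0, 0.5, 3.0])    # a0 + a1*t + a2*t^2 + a3*t^3

# Sample at N + 1 = 4 distinct points
t_samples = np.array([0.0, 1.0, 2.0, 3.0])
x_samples = np.polyval(coeffs_true[::-1], t_samples)   # polyval wants highest power first

# Solve the Vandermonde system V a = x for the coefficients
V = np.vander(t_samples, 4, increasing=True)     # columns: 1, t, t^2, t^3
coeffs_rec = np.linalg.solve(V, x_samples)
```

The recovered coefficients match the originals exactly (the system is non-singular because the sample points are distinct), so the samples plus the apriori information reconstruct the whole signal.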
But if you have no apriori information, you can do no better than to represent the signal as it is.
Even knowing the continuity of a signal is 'apriori' information. Further, we can talk of a relative measure of 'apriori' information, by observing the size of the set in which the signal is known to lie: the larger the set, the less 'apriori' information we have. For example, knowing that the signal is sinusoidal is much stronger 'apriori' information than knowing that it is continuous, as the set of sinusoids is much smaller than the set of continuous functions.
The main challenge in sampling and reconstruction is to make the best use of 'apriori' information in order to represent a signal by its
samples most economically.
In the next lecture, we focus on a special class of signals, those that are band-limited (this is the 'apriori' information we shall have), and see how such signals can be reconstructed from their samples.
Conclusion:
From this lecture you have learnt :
Sampling is a method of using 'apriori' information about a signal to represent it economically.
The most common approach in sampling and reconstruction is to describe the signal by specifying its value at selected points on
the time axis ('t') such that this and the 'apriori' information can be used to reconstruct the signal completely.
The main challenge in sampling & reconstruction is to make the best use of the apriori information available to represent a signal
most economically.
Band-limited signals:
A Band-limited signal is one whose Fourier Transform is non-zero on only a finite interval of the frequency axis.
Specifically, there exists a positive number B such that X(f) is non-zero only in [−B, B].
To start off, let us first make an observation about the class of Band-limited signals.
Let's consider a band-limited signal x(t) having a Fourier Transform X(f).
Let the interval for which X(f) is non-zero be −B ≤ f ≤ B.
Then

x(t) = ∫ from −B to B of X(f) e^(j2πft) df

and this integral converges.
The RHS of the above equation is differentiable with respect to t any number of times, as the integral is performed on a bounded domain and the integrand is differentiable with respect to t. Further, in evaluating the derivative of the RHS, we can take the differentiation inside the integral. In general, the n-th derivative is

x^(n)(t) = ∫ from −B to B of (j2πf)^n X(f) e^(j2πft) df

This implies that band-limited signals are infinitely differentiable, and therefore very smooth.
We now move on to see how a Band-limited signal can be reconstructed from its samples.
Now, recall that the coefficients of the Fourier series for a periodic signal y(t) with period T are given by:

c_k = (1/T) ∫ from −T/2 to T/2 of y(t) e^(−j2πkt/T) dt

Therefore, given that y(t) is time-limited to [−T/2, T/2] and periodic, the entire information about y(t) is contained in just equispaced samples of its Fourier transform! It is the dual of this result that is the basis of sampling and reconstruction of band-limited signals: knowing the Fourier transform is limited to, say, [−B, B], the entire information about the transform (and hence the signal) is contained in just uniform samples of the (time) signal!
This time, the roles of time and frequency are interchanged: the band-limited spectrum X(f), non-zero only in [−B, B], plays the role of the time-limited signal, and the scaled sample (1/2B) x(k/2B) plays the role of the k-th Fourier series coefficient of the periodic extension of X(f).
What is the Fourier inverse of this periodic extension of X(f)? It is the original signal multiplied by a periodic train of impulses.
Thus we see that if we multiply the original band-limited signal with a periodic train of impulses (period 1/2B, with the impulse at the origin of strength 1/2B), we obtain a signal whose Fourier transform is a periodic extension of the original spectrum. So how does one retrieve the original signal from this sampled signal? We need a mechanism that will blank out the spectrum of the sampled signal outside [−B, B], i.e. multiply the spectrum with a function that is constant on [−B, B] and zero elsewhere.
In other words, we need to feed the sampled signal to an LSI system, the Fourier transform of whose impulse response is the above function (recall that an LSI system multiplies the spectrum of its input by the transform of its impulse response).
An LSI system with this type of impulse response is called an Ideal Low Pass Filter.
Is it essential for the sampling rate to be greater than 2B, or is it acceptable to have a sampling rate of exactly 2B?
What will happen if the value of X(f) at -B and B are not zero?
If X(f) is non-zero at −B or B, the periodic extension of the spectrum will have values at B and −B different from those of X(f) (adjacent copies overlap at exactly these points). Thus the transform of the output of the ideal low pass filter will not match that of the original signal at −B and B.
While finite, point mismatches in the transform will not matter; problems arise if X(f) has impulses at B or −B. Then, the output of the ideal low pass filter will be different from the original signal.
For example, consider sin(t). It has a bandwidth B = 1/2π (in Hz), and sampled at rate exactly 2B, i.e. at t = nπ, the signal has value zero at all multiples of π! You can't possibly reconstruct the signal from these samples. What went wrong? Let's look at the Fourier Transform involved: the spectrum of sin(t) consists of impulses at ±1/2π, and the periodic extension (with period 2B) of this spectrum is identically zero, since the overlapping impulses from adjacent copies cancel. Thus an ideal low pass filter cannot retrieve the original signal.
This is why the Sampling theorem says one must use a sampling rate greater than 2B, where B is the bandwidth of the signal. Say we sample at a rate f_s > 2B: the copies of the spectrum in the periodic extension then do not touch, and an appropriate low-pass filter can give us back the original signal!
Conclusion:
In this lecture you have learnt:
Band-limited signals are infinitely differentiable and very smooth.
Given that 'x(t)' is Band-limited with its Fourier transform 'X(f)' being non-zero only in [-B,B] , we can say that
has a
spectrum that is the periodic extension of 'X(f)' with period 2B.
By passing
'x(t)'.
Let's look at the impulse response of this ideal low pass filter, taking its height in [−B, B] to be 1. Using the formula for the inverse Fourier Transform we have:

h(t) = ∫ from −B to B of e^(j2πft) df = sin(2πBt) / (πt)

Thus the impulse response of an ideal low pass filter turns out to be a sinc function, which looks like:
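Reconstruction as a sum of shifted, scaled sinc functions can be sketched numerically (the 3 Hz test sinusoid, the 10 Hz sampling rate and the truncation of the infinite sum are arbitrary assumed values; the truncated sum is only an approximation):

```python
import numpy as np

# A band-limited test signal (3 Hz sinusoid), sampled above Nyquist at fs = 10 Hz
fs = 10.0
Ts = 1 / fs
n = np.arange(-200, 201)               # many samples, so truncation error is small
samples = np.sin(2 * np.pi * 3 * n * Ts)

# Sinc interpolation: x(t) = sum_n x(n*Ts) * sinc((t - n*Ts) / Ts)
t = np.linspace(-1.0, 1.0, 101)
x_rec = np.array([np.sum(samples * np.sinc((ti - n * Ts) / Ts)) for ti in t])
x_true = np.sin(2 * np.pi * 3 * t)
```

The reconstruction matches the original signal closely between the sample points; the small residual error comes only from truncating the (infinite) sum of sincs.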
Let's look at the convolution of the impulse response h(t) of the ideal low-pass filter with the sampled signal, a train of impulses at the sample points, the impulse at each sample point having strength proportional to the sample value there. When the sampled signal is passed through the low pass filter, the output, which is the reconstructed signal, is nothing but the sum of copies of the impulse response, shifted to the sample points and scaled by the sample values. Thus x(t) can be visualized as a sum of shifted, scaled sinc functions. Also observe that h(t) is zero at all sample points (which are integral multiples of 1/2B) other than the origin, so the reconstruction agrees exactly with the samples at the sample instants.

Note that h(t) extends to −∞: the ideal low-pass filter is infinitely non-causal, so exact real-time reconstruction is not possible for an ideal low-pass filter. In other words, unless one knows the entire sampled signal, past and future, one cannot reconstruct exactly. Note if h(t) had been only finitely non-causal (say zero for all t less than some −t_0), reconstruction would have been possible subject to a time-delay (of t_0).
It is unstable:
It can be shown that ∫ |h(t)| dt diverges: |h(t)| falls off only like 1/t, so the integral behaves like the harmonic series, and we know that this series diverges. Hence it is established that the Ideal Low Pass Filter is unstable.
This implies that a bounded input does not imply a bounded output: if we could build an ideal low pass filter, a bounded input might result in an unbounded output.
Conclusion:
In this lecture you have learnt:
The impulse response of an ideal low pass filter turns out to be a sinc function.
When the sampled signal is passed through a low pass filter, the reconstructed output signal is nothing but the sum of copies of the impulse response h(t), shifted by integral multiples of the sampling period and scaled by the sample values.
Problems with the ideal low pass filter :
1. It is infinitely non-causal.
2. It is unstable.
3. It is not a rational system.
But impulses are a mathematical concept and cannot be realized in a real system. In practice, the best we can obtain is a train of pulses. Such pulse trains (e.g. the saw-tooth) are also generally used for creating a time-base for the operation of many electronic devices like the CRO (Cathode Ray Oscilloscope).
Practical Implementation:
Let's see how a train of pulses of the following kind can be multiplied with a signal x(t).
Consider the circuit below.
The two pulse trains p_1(t) and p_2(t) are synchronized so that when one is high the other is low, and vice versa, as shown in the figure below:
In the circuit, x(t) is multiplied by p_1(t): we get the output x(t) when p_1(t) is ON, and zero when p_2(t) is ON.
You have just seen how we can multiply a signal x(t) with the following periodic pulse train p(t) to obtain the sampled signal
Now the train of pulses that we had used is shown below with respect to its amplitude and period.
Simplifying the above term we get the envelope of the coefficients as a sinc function:
Looking at the expression for the coefficients of the Fourier Series Expansion, we observe how the coefficients behave as the pulse width and period change. In the limit in which p(t) tends to the train of impulses we had started our discussion on sampling with, the observations above are consistent: the sinc envelope flattens out, and the Fourier coefficients of the periodic train of impulses are indeed all constant and equal to the reciprocal of the period of the impulse train.
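The sinc envelope can be checked numerically: compute the Fourier series coefficients of one period of a rectangular pulse train by direct summation and compare with the analytical formula c_k = (τ/T) sinc(kτ/T) e^(−jπkτ/T) for a unit-height pulse occupying [0, τ) (the values of T and τ below are arbitrary assumed values):

```python
import numpy as np

T, tau = 1.0, 0.2                      # assumed period and pulse width
N = 10000                              # grid points per period
t = np.arange(N) * T / N
p = (t < tau).astype(float)            # one period of the pulse train

k = np.arange(-5, 6)
# Numerical Fourier series coefficients: c_k = (1/T) * integral of p(t) e^{-j2πkt/T}
c_numeric = np.array([np.mean(p * np.exp(-2j * np.pi * kk * t / T)) for kk in k])
# Analytical sinc envelope (phase factor because the pulse starts at t = 0)
c_analytic = (tau / T) * np.sinc(k * tau / T) * np.exp(-1j * np.pi * k * tau / T)
```

The numerically computed coefficients follow the sinc envelope, vanishing at multiples of T/τ, exactly as the expression above predicts.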
We now see what happens to the spectrum of continuous time signal on multiplication with the train of pulses. Having obtained the
Fourier Series Expansion for the train of periodic pulses the expression for the sampled signal can be written as:
Taking Fourier transform on both sides and using the property of the Fourier transform with respect to translations in the frequency
domain we get:
This is essentially the sum of displaced copies of the original spectrum, scaled by the Fourier series coefficients of the pulse train. If x(t) is band-limited, the displaced copies in the spectrum do not overlap so long as f_s is greater than twice the bandwidth of the signal. Reconstruction is then possible, theoretically, using an ideal low-pass filter as shown below:
Thus the condition for faithful reconstruction of the original continuous time signal is f_s > 2 f_m, where f_s is the sampling frequency and f_m is the highest frequency present in the signal.
Conclusion:
In this lecture you have learnt:
In practice a train of pulses is used for sampling a signal instead of a train of impulses.
The train of pulses p(t) is periodic and obeys Dirichlet's conditions. It can be represented as a Fourier series, which is used in deriving the condition for reconstruction of the original band-limited signal.
Any periodic signal whose Fourier series exists and has a non-zero average, with fundamental frequency greater than twice the bandwidth of the band-limited signal, can be used to sample it, and the original signal can be reconstructed using an ideal low-pass filter.
Example: Let us now also look at a very special example. Consider a rotating disc with a single radial line marked on it, illuminated by a flashing strobe. The strobe acts as a sampling system, since it illuminates the disc for extremely brief intervals at a periodic rate. When the strobe frequency is much higher than the rotational speed of the disc, the speed of rotation is perceived correctly. When the strobe frequency becomes equal to the rotational frequency, the line appears to stay at the same position. When the strobe frequency becomes less than twice the rotational frequency, the rotation appears to be at a lower frequency than is actually the case. Furthermore, due to phase reversal, the disc will appear to rotate in the reverse direction. This phenomenon is known as the stroboscopic effect.
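Undersampling can be demonstrated in a couple of lines: a 9 Hz sinusoid sampled at 10 Hz produces exactly the samples of a 1 Hz sinusoid of reversed phase, just like the disc appearing to rotate backwards (the frequencies below are arbitrary assumed values):

```python
import numpy as np

fs = 10.0                                    # sample rate below 2 * 9 Hz
n = np.arange(50)
x_fast = np.sin(2 * np.pi * 9 * n / fs)      # 9 Hz signal, undersampled
x_alias = np.sin(-2 * np.pi * 1 * n / fs)    # 1 Hz alias with reversed phase
```

The two sample sequences are identical, so no processing of the samples can tell the 9 Hz signal from the 1 Hz alias: the information is already lost at the sampler.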
Advantages of aliasing :
Conclusion:
In this lecture you have learnt:
The original signal cannot be reconstructed from an undersampled signal, because higher frequencies are reflected into lower frequencies in the Fourier transform of the undersampled signal.
The stroboscopic effect helps in understanding undersampling.
Aliasing is not always undesirable . It has some advantages also.
How can we tackle the problem of the low pass filter not being ideal?
Normally we would want an ideal response with zero phase. The next best thing we can do is to allow some linear phase variation, i.e. a constant time delay for all frequencies.
This is what we can do using hold filters, referred to as Zero-Order-Hold sampling. It is a staircase approximation of the analog signal.
How does it work?
In practice, analog signals are sampled using zero-order-hold (ZOH) devices that hold a sample value constant until the next sample is acquired. This is also called flat-top sampling. This operation is equivalent to ideal sampling followed by a system whose impulse response is a pulse of unit height and duration T_s (to stretch the incoming pulses). This is illustrated in the figure below:
Reconstruction of signal in Zero Order Hold Filter
The analog signal (continuous-time signal) is multiplied with a periodic impulse train, referred to as the sampling function. A sampled signal is then obtained, as shown in the figure below.
The ideally sampled signal x_p(t) is the product of the impulse train p(t) and the analog signal x_c(t): x_p(t) = x_c(t) p(t).
The ZOH sampled signal x_ZOH(t) can be regarded as the convolution of h_o(t) and the sampled signal x_p(t): x_ZOH(t) = h_o(t) * x_p(t).
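The "ideal sampling followed by a unit-height pulse of duration T_s" view can be sketched numerically on a fine grid (the sample values and the grid factor are arbitrary assumed values):

```python
import numpy as np

# Sample values to be held, each represented by 10 fine-grid points
samples = np.array([0.0, 1.0, 0.5, -0.3])
hold_factor = 10

# Direct view: each sample value is held constant for one sampling interval
x_zoh = np.repeat(samples, hold_factor)        # staircase approximation

# Equivalent view: ideal sampling (zeros between samples), then convolution
# with a unit-height box h0 of duration one sampling interval
upsampled = np.zeros(len(samples) * hold_factor)
upsampled[::hold_factor] = samples
h0 = np.ones(hold_factor)
x_zoh_conv = np.convolve(upsampled, h0)[: len(x_zoh)]
```

Both constructions give exactly the same staircase, confirming that the ZOH is ideal sampling followed by convolution with the holding pulse h_o(t).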
Distortion in Zero-order-hold sampling :
The transfer function H(f) of the zero-order-hold circuit is a sinc function: H(f) = T_s sinc(f T_s) e^(−jπf T_s).
There are two types of distortion:
a) Aliased Component Distortion: aliased component distortion can be corrected, if required, by cascading another, better lowpass filter.
b) Baseband Spectrum Distortion (Sinc Distortion): baseband spectrum distortion is corrected by an Equalizer. An Equalizer is an LSI system with a Fourier Transformable impulse response which acts like an inverse, 1/H(f), to another LSI system, at least in a certain range of frequencies. Equalizers are also used to correct channel imperfections in communication systems.
The higher the sampling rate f_s, the less the distortion in the spectral image X(f) centered at the origin.
An ideal lowpass filter with unity gain over −0.5 f_s ≤ f ≤ 0.5 f_s recovers the distorted signal.
To recover X(f) with no amplitude distortion, we must use a compensating filter that negates the effect of the sinc distortion, with a concave-shaped magnitude spectrum corresponding to the reciprocal of the sinc function over the principal period |f| ≤ 0.5 f_s.
Conclusion:
In this lecture you have learnt:
Analog filters can NEVER give an exactly linear phase response.
Hence, we can design analog filters as near to an ideal filter as we like in terms of magnitude response, but cannot really make an ideal filter.
Hold filters can be used to get a good staircase approximation provided the signal is sampled well above the Nyquist rate, i.e. f_s >> 2 f_m.
Objectives:
Scope of this Lecture:
In this lecture we introduce concepts regarding Digital Signal Processing.
Definition of digital signal processing .
Advantages of digital signal processing.
To understand how DSP works .
What is DSP ?
Digital Signal Processing is used in a wide variety of applications.
Digital : Operating by the use of discrete signals to represent data in the form of digits.
Signal : A variable parameter by which information is conveyed through an electronic circuit.
Processing : To perform operations on data according to need or instruction.
Hence,
Digital Signal Processing can be defined as:
"Changing or analysing information represented as discrete sequences of numbers."
Two unique features differentiate DSP from ordinary digital processing:
a) the signals come from the real world;
b) the signals are discrete.
Why should we use DSP ?
a) Versatility :
Digital Systems can be reprogrammed.
Digital Systems can be ported to different hardware.
b) Repeatability :
Digital systems can be easily duplicated.
Digital systems do not depend on strict component tolerances.
Digital system responses do not drift with temperature.
c) Simplicity :
Some things can be done more easily digitally than with analogue systems.
Some common features :
They use a lot of maths (multiplying and adding signals).
They deal with signals that come from the real world.
How does DSP work?
A continuous time signal is converted to a discrete time signal, processed, and then converted back to a continuous time signal. This is how the
sampling theorem is used in practice. It forms the link between analog and digital signal processing, and allows us to use digital
techniques to manipulate analog signals.
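The chain described above can be sketched numerically. The following is a minimal illustration (the signal, rates and grid sizes are our own choices, not from the lecture): sample a band-limited signal above its Nyquist rate, then rebuild it with Whittaker-Shannon (sinc) interpolation, the idealized reconstruction the sampling theorem guarantees.

```python
import numpy as np

fm = 5.0            # highest frequency present in the "analog" signal (Hz)
fs = 4 * fm         # sampling rate, comfortably above the Nyquist rate 2*fm
T = 1.0 / fs

# Step 1: sample (continuous -> discrete); samples taken over [-4, 5) s
n = np.arange(int(-4 * fs), int(5 * fs))
x_samples = np.sin(2 * np.pi * fm * n * T)

# Step 2: digital processing would happen here, on x_samples

# Step 3: reconstruct (discrete -> continuous) on a dense grid over [0, 1) s
t = np.linspace(0, 1, 500, endpoint=False)
x_rec = np.array([np.sum(x_samples * np.sinc((tk - n * T) / T)) for tk in t])

x_true = np.sin(2 * np.pi * fm * t)
err = np.max(np.abs(x_rec - x_true))
print("max reconstruction error:", err)
```

Because the signal is band-limited and f_s > 2f_m, the reconstruction error on the interior interval is small, limited only by truncating the (in principle infinite) sinc sum.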
Conclusion:
In this lecture you have learnt:
Digital Signal Processing can be defined as "Changing or analysing information which is represented as discrete sequences of numbers."
DSP is a versatile, repeatable and simple way of processing signals.
The sampling theorem forms the basis of DSP.
In DSP a continuous time signal is converted to a discrete time signal, processed, and then converted back to a continuous time signal.
Congratulations, you have finished Lecture 27.
Objectives:
Scope of this Lecture:
In the previous lecture we defined digital signal processing and understood its features. The general procedure is to convert the
continuous time signal into a discrete time signal, and then to recover the original signal. In this lecture we will study the concepts
of the Discrete Time Fourier Transform and signal representation.
Representation of discrete time periodic signal .
Discrete Time Fourier Transform (DTFT) of an aperiodic discrete time signal .
Another way of representing DTFT of a periodic discrete time signal.
Properties of DTFT
The DTFT of a complex exponential e^{jω₀n} is an impulse train in frequency:
X(e^{jω}) = Σ_{l=−∞}^{∞} 2π δ(ω − ω₀ − 2πl),
i.e. an impulse at ω₀ repeated with period 2π.
Consider a periodic sequence x[n] with period N and with Fourier series representation
x[n] = Σ_{k=⟨N⟩} a_k e^{jk(2π/N)n}.
Since each term is a complex exponential, the discrete time Fourier Transform of a periodic signal x[n] with period N can be written as
X(e^{jω}) = Σ_{k=−∞}^{∞} 2π a_k δ(ω − 2πk/N),
which is a train of impulses at integer multiples of the fundamental frequency 2π/N, weighted by 2π times the Fourier series coefficients.
Properties of DTFT
Periodicity:
The DTFT is periodic in ω with period 2π: X(e^{j(ω+2π)}) = X(e^{jω}).
Linearity:
The DTFT is linear.
If x₁[n] ↔ X₁(e^{jω}) and x₂[n] ↔ X₂(e^{jω}),
then a·x₁[n] + b·x₂[n] ↔ a·X₁(e^{jω}) + b·X₂(e^{jω}).
Stability:
The DTFT, viewed as a system mapping x[n] to X(e^{jω}), is unstable, i.e. a bounded input x[n] can give an unbounded output.
Example :
If x[n] = 1 for all n (a bounded input),
then the defining summation of the DTFT diverges, i.e. the output is unbounded.
Time Shifting and Frequency Shifting:
If x[n] ↔ X(e^{jω}),
then x[n − n₀] ↔ e^{−jωn₀} X(e^{jω}),
and e^{jω₀n} x[n] ↔ X(e^{j(ω−ω₀)}).
Time expansion:
It is very difficult to define x[an] when a is not an integer. If a is an integer other than 1 or −1, the resulting signal is not just a sped-up version of x[n]: since n can take only integer values, the signal consists of samples of x[n] at the instants an.
If k is a positive integer, and we define the signal
x_(k)[n] = x[n/k] if n is a multiple of k, and 0 otherwise,
then x_(k)[n] ↔ X(e^{jkω}).
Convolution Property:
Let h[n] be the impulse response of a discrete time LSI system. Then the frequency response of the LSI system is
H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn}.
Now if x[n] ↔ X(e^{jω}) and h[n] ↔ H(e^{jω}),
and y[n] = x[n] * h[n],
then Y(e^{jω}) = X(e^{jω}) H(e^{jω}).
Proof
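The convolution property is easy to check numerically. Below is a small sketch (the sequences and the helper `dtft` are our own illustrative choices): the DTFT of a convolution is compared against the product of the individual DTFTs on a frequency grid.

```python
import numpy as np

def dtft(x, w):
    """DTFT of a finite sequence x (assumed to start at n = 0) at frequencies w."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

x = np.array([1.0, 2.0, 3.0])          # x[0..2]
h = np.array([0.5, -1.0, 0.25, 2.0])   # h[0..3]
y = np.convolve(x, h)                  # y[n] = (x * h)[n], n = 0..5

w = np.linspace(-np.pi, np.pi, 101)
lhs = dtft(y, w)             # Y(e^{jw})
rhs = dtft(x, w) * dtft(h, w)  # X(e^{jw}) H(e^{jw})

diff = np.max(np.abs(lhs - rhs))
print("max |Y - X*H| on the grid:", diff)
```

For finite-length sequences the identity holds exactly, so the two sides agree to machine precision.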
Symmetry Property:
If x[n] is real and x[n] ↔ X(e^{jω}),
then X(e^{−jω}) = X*(e^{jω}).
Proof
In particular, |X(e^{jω})| is an even function of ω and the phase of X(e^{jω}) is an odd function of ω.
Conclusion:
In this lecture you have learnt:
For a discrete time periodic signal, the DTFT is a train of impulses weighted by 2π times the Fourier series coefficients a_k.
The DTFT is unstable as a system, which means that even for a bounded 'x[n]' it can give an unbounded output.
We saw its time shifting and frequency shifting properties, and also time expansion.
The convolution property for an LSI system: if 'x[n]' is the input to a system with impulse response 'h[n]', then the DTFT of
the output 'y[n]' is the product of the DTFTs of 'x[n]' and 'h[n]'.
We saw symmetry properties and the DTFT of the cross-correlation between 'x[n]' and 'h[n]'.
Objectives:
Scope of this Lecture:
In the previous lectures we built up the concepts of sampling, discrete time signal processing and the Discrete Time Fourier Transform. The next
logical step is to study the inverse Discrete Time Fourier Transform. In this lecture we take up the various inverse-DTFT related concepts.
The equation for Inverse Discrete Time Fourier Transform for a discrete periodic signal .
Inverse DTFT for the Cross-Correlation between 'x[n]' and 'h[n]'.
Parseval's Relation For discrete time periodic signals .
Inverse DTFT :
The DTFT of a discrete periodic signal x[n] with period N is given by
X(e^{jω}) = Σ_{k=−∞}^{∞} 2π a_k δ(ω − 2πk/N),
and X(e^{jω}) is periodic in ω with period 2π.
Now, the inverse DTFT is
x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.
Note that x[−n] is the nth Fourier series coefficient of X(e^{jω}) regarded as a periodic function of ω with period 2π.
Now the inverse DTFT for the cross-correlation between the sequences x[n] and h[n] gives Parseval's relation:
Σ_n x[n] h*[n] = (1/2π) ∫_{2π} X(e^{jω}) H*(e^{jω}) dω,
i.e. the dot product of the sequences x[n] and h[n] equals (up to the 1/2π factor) the dot product of the DTFTs of x[n] and h[n].
In particular, put x[n] = h[n]; then
Σ_n |x[n]|² = (1/2π) ∫_{2π} |X(e^{jω})|² dω.
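Parseval's relation can be verified numerically. In this sketch (random example sequences of our own), the frequency-domain integral is computed with the DFT: for length-N sequences, sampling the DTFT at ω_k = 2πk/N turns the integral into an exact finite sum.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Time-domain side: dot product of the two sequences
lhs = np.sum(x * np.conj(h))

# Frequency-domain side: (1/N) * sum_k X[k] conj(H[k]) via the DFT
X = np.fft.fft(x)
H = np.fft.fft(h)
rhs = np.sum(X * np.conj(H)) / N

print(lhs, rhs.real)   # the two sides agree to machine precision
```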
Conclusion:
In this lecture you have learnt:
The DTFT X(e^{jω}) is periodic in ω with period 2π.
The inverse DTFT is x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.
Parseval's relation: Σ_n x[n] h*[n] = (1/2π) ∫_{2π} X(e^{jω}) H*(e^{jω}) dω.
Introduction
Till now we have been dealing with continuous and discrete domains, and we studied the relationships involved using transform
domains. A system actually operates in its natural domain, but it can often be better understood in a transform domain. The advantage of
transform domains is that a few of the properties which may not be evident in the natural domain become clear in a transform domain. Most
LTI systems act in the time domain, but they are often more clearly described in the frequency domain.
Till now ,we have seen the importance of Fourier analysis in solving many problems involving signals and LTI systems. Now, we shall deal
with signals and systems which do not have a Fourier transform.
But what was so special about Fourier transform in case of LSI systems?
We found that the continuous-time Fourier transform (F.T.) is a tool to represent signals as linear combinations of complex exponentials. The
exponentials are of the form e^{st} with s = jω, and e^{jωt} is an eigenfunction of the LSI system. Also, we note that the Fourier Transform
only exists for signals which are absolutely integrable and have finite energy.
This observation leads to a generalization of the continuous-time Fourier transform, by considering a broader class of signals using the powerful
tool of the "Laplace transform". It is worth noting that the L.T. has a discrete-time counterpart obtained by a relevant
substitution. This leads to a link with the Z-Transform, which is very handy for digital filter realization/design. It is also helpful
to note that the properties of the Laplace Transform and the Z-Transform are quite similar.
With this introduction let us go on to formally defining both Laplace and Z-transform.
Let
H(s) = ∫_{−∞}^{∞} h(t) e^{−st} dt,
where H(s) is known as the Laplace Transform of h(t). We notice that the limits are from −∞ to +∞, and hence this transform is also
referred to as the Bilateral or Double-sided Laplace Transform. There exists a one-to-one correspondence between h(t) and H(s), i.e. between the
original domain and the transformed domain. Therefore the L.T. is a unique transformation and the 'Inverse Laplace Transform' also exists.
Note that e^{st} is an eigenfunction of the LSI system only if H(s) converges. The range of values of s for which
the expression described above is finite is called the Region of Convergence (ROC). In the case considered above, the region of convergence
is Re(s) > 0.
Thus, the Laplace transform has two parts which are , the expression and region of convergence respectively. The region of
convergence of the Laplace transform is essentially determined by Re(s). Here onwards we will consider trivial examples for a better
understanding of the ROC.
Observing the above equation closely, we realize that H(s) converges if and only if Re(s) > 1, which means that the real part of 's' is greater than '1'.
This is what defines the " Region of Convergence " in an S-Complex Plane. The ROC of the Laplace Transform is always determined by
the Re(s). The ROC in general gives us an idea of the stability of a system and is also a representation of the poles-zero plot of a
system. It is essential to note that the ROC never includes poles.
H(s) = 1/(s − 1)
We observe that there is a single pole at s = 1. Since the Region of Convergence cannot contain poles, the ROC starts at Re(s) = 1 and
extends outwards to infinity, i.e. Re(s) > 1.
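The role of the ROC for this example can be seen numerically. Below is a sketch of our own (the probe values of s and the truncation lengths T are arbitrary choices): for h(t) = e^t u(t), the truncated Laplace integral settles down inside the ROC Re(s) > 1 and blows up outside it.

```python
import numpy as np

def laplace_trunc(s, T, dt=1e-3):
    """Truncated Laplace integral of h(t) = e^t u(t) over [0, T]."""
    t = np.arange(0.0, T, dt)
    return np.sum(np.exp(t) * np.exp(-s * t)) * dt

# Inside the ROC (s = 2 > 1): the integral converges to 1/(s - 1) = 1
inside = [laplace_trunc(2.0, T) for T in (10.0, 20.0, 40.0)]
print("s = 2.0:", inside)

# Outside the ROC (s = 0.5 < 1): the integral grows without bound with T
outside = [laplace_trunc(0.5, T) for T in (10.0, 20.0, 40.0)]
print("s = 0.5:", outside)
```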
e^{st} in physical systems:
We consider the real part of e^{st}, where s = σ + jω, i.e. e^{σt} cos(ωt): an exponentially weighted sinusoid.
Such a response is visible in RLC (Resistance-Inductance and Capacitance) systems. It is not only visible in the electrical field but also in
other disciplines like mechanical field. In such cases the above expression is multiplied by a polynomial or a combination of such
expressions.
What is the need to consider region of convergence while determining the Laplace transform?
If we consider the signals e^{−at} u(t) and −e^{−at} u(−t), we note that although the signals differ, their Laplace Transforms are identical,
namely 1/(s + a). Thus we conclude that to distinguish Laplace Transforms uniquely, their ROCs must be specified. Further, from the ROC we can
draw many important conclusions about the signal and the system.
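The non-uniqueness above can be confirmed numerically. This sketch (our own choice of a and of one probe point s inside each ROC) evaluates the Laplace integral of each signal over its support and compares both against the shared formula 1/(s + a):

```python
import numpy as np

a = 1.0
dt = 1e-4
expected = lambda s: 1.0 / (s + a)

# Right-sided signal e^{-at} u(t), ROC Re(s) > -a: probe at s = 1
t = np.arange(0.0, 50.0, dt)
lt_right = np.sum(np.exp(-a * t) * np.exp(-1.0 * t)) * dt
print(lt_right, "vs", expected(1.0))    # both ~ 0.5

# Left-sided signal -e^{-at} u(-t), ROC Re(s) < -a: probe at s = -3
t = np.arange(-50.0, 0.0, dt)
lt_left = np.sum(-np.exp(-a * t) * np.exp(3.0 * t)) * dt
print(lt_left, "vs", expected(-3.0))    # both ~ -0.5
```

Each integral matches 1/(s + a) at a point of its own ROC; the same algebraic expression thus describes two different signals, and only the ROC tells them apart.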
A few important properties of the ROC are listed below:
The ROC of H(s) consists of strips parallel to the jω-axis.
We know that the ROC depends only on the real part of 's', which is σ; hence the property.
If h(t) is a time limited signal and is Laplace Transformable, then its ROC will be the entire S-plane.
For example, the ROC for a finite-duration pulse such as h(t) = u(t) − u(t − T) is the entire s-plane.
The region of convergence is always between two vertical lines in s-plane. These vertical lines need not be in finite
region. But note that the ROC is always simply-connected but not multiply-connected in the s-plane.
This fact can be explained by the following illustration:
Let H1(s) and H2(s) be the respective Laplace transforms of the first and second terms. H1(s) and H2(s) converge in the regions
Re(s) > 1 and Re(s) < 1 respectively. But h(t) does not have a Laplace transform, since there is no common ROC where both H1(s) and
H2(s) converge.
[ Thus the ROC of H2(s) is given by Re(s) < 1;provided Re(1-s) > 0]
Thus, two different functions may have the same expression but correspond to different ROCs.
Conclusion:
In lecture you have learnt:
H(s) = ∫_{−∞}^{∞} h(t) e^{−st} dt is called the Laplace Transform of h(t), and the Region of Convergence (ROC) of the Laplace transform is
essentially determined by the real part of the complex number 's', denoted Re(s).
Two different functions may have the same Laplace Transform so the only way to uniquely describe them is by the means of ROC.
The ROC consists of strips parallel to the jω-axis.
Congratulations, you have finished Lecture 30.
Z-transform
The response of a linear time-invariant system with impulse response h[n] to a complex exponential input of the form z^n can be
represented in the following way:
y[n] = H(z) z^n,
where
H(z) = Σ_{n=−∞}^{∞} h[n] z^{−n}.
In the complex z-plane, we take the circle with unit radius centered at the origin; on this circle z = e^{jω} and H(z) reduces to the DTFT.
(Recall the corresponding Laplace-transform facts: for a left-sided x(t) the ROC is of the form Re(s) < σ₀, referred to as a left-half plane; when x(t) is two-sided, i.e. of infinite extent in both directions, and both one-sided parts have finite transforms, the ROC turns out to be a vertical strip in the s-plane.)
Z-transform:
For a right-sided sequence, the ROC of X(z) is the exterior of a circle, |z| > r₁; for a left-sided sequence it is the interior of a circle, |z| < r₂.
If x[n] is two-sided, the ROC will consist of a ring in the z-plane centered about the origin,
with both an inner and an outer boundary: r₁ < |z| < r₂.
Conclusion:
1) Linearity
For the Laplace transform:
If x₁(t) ↔ X₁(s) with ROC = R₁ and x₂(t) ↔ X₂(s) with ROC = R₂,
then a·x₁(t) + b·x₂(t) ↔ a·X₁(s) + b·X₂(s), with ROC containing R₁ ∩ R₂.
The ROC of X(s) is at least the intersection of R₁ and R₂, which could be empty, in which case x(t) has no Laplace transform.
For the z-transform:
If x₁[n] ↔ X₁(z) with ROC = R₁ and x₂[n] ↔ X₂(z) with ROC = R₂,
then a·x₁[n] + b·x₂[n] ↔ a·X₁(z) + b·X₂(z), with ROC containing R₁ ∩ R₂.
2) Differentiation in time
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then dx(t)/dt ↔ sX(s), with ROC containing R.
The ROC of sX(s) includes the ROC of X(s) and may be larger (multiplication by s can cancel a pole at s = 0).
This property holds for the z-transform as well.
3) Time Shift
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then x(t − t₀) ↔ e^{−st₀} X(s), with ROC = R.
For the z-transform:
If x[n] ↔ X(z) with ROC = R,
then x[n − n₀] ↔ z^{−n₀} X(z), with ROC = R except for the possible addition or deletion of the origin or infinity.
Because of the multiplication by z^{−n₀}, for n₀ > 0 poles will be introduced at z = 0, which may cancel corresponding zeroes of X(z) at z = 0. In this case the new ROC
equals the ROC of X(z) but with the origin deleted. Similarly, if n₀ < 0, zeroes are introduced at z = 0 and poles at infinity, so infinity may be deleted from the ROC.
4) Time Scaling
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then x(at) ↔ (1/|a|) X(s/a), with ROC R₁ = aR,
i.e. s lies in R₁ exactly when s/a lies in R.
For z transform:
The continuous-time concept of time scaling does not directly extend to discrete time. However, the discrete time concept of time
expansion i.e. of inserting a number of zeroes between successive values of a discrete time sequence can be defined. The new sequence
can be defined as
x_(k)[n] = x[n/k] if n is a multiple of k
         = 0      if n is not a multiple of k
x_(k)[n] has k − 1 zeroes inserted between successive values of the original sequence. This is known as upsampling by k. If
x[n] ↔ X(z) with ROC = R,
then x_(k)[n] ↔ X(z^k), with ROC = R^{1/k}, i.e. z lies in the new ROC exactly when z^k lies in R, since
X(z^k) = Σ_{n=−∞}^{∞} x[n] (z^k)^{−n} = Σ_{n=−∞}^{∞} x[n] z^{−nk}.
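The time-expansion property can be checked on the unit circle, where the z-transform becomes the DTFT. The sketch below (example data and helper of our own) inserts k − 1 zeros between samples and compares the DTFT of the expanded sequence against X evaluated at kω:

```python
import numpy as np

def dtft(x, w):
    """DTFT of a finite sequence x (starting at n = 0) at frequencies w."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

x = np.array([1.0, -2.0, 0.5, 3.0])
k = 3

# Upsample by k: insert k - 1 zeros between successive samples
xk = np.zeros(k * len(x))
xk[::k] = x

w = np.linspace(-np.pi, np.pi, 101)
lhs = dtft(xk, w)      # DTFT of the expanded sequence at w
rhs = dtft(x, k * w)   # original DTFT evaluated at k*w

diff = np.max(np.abs(lhs - rhs))
print("max difference:", diff)
```

The two spectra agree exactly, since Σ_m x[m] e^{−jω(km)} = X(e^{jkω}).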
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then e^{s₀t} x(t) ↔ X(s − s₀), where s lies in the new ROC exactly when Re(s − s₀) lies in ROC(X(·)), i.e. the ROC is R shifted by Re(s₀).
For the z-transform:
If x[n] ↔ X(z) with ROC = R,
then z₀ⁿ x[n] ↔ X(z/z₀), where z lies in the new ROC exactly when z/z₀ lies in ROC(X(·)), i.e. the ROC is R scaled by |z₀|.
Conclusion:
In this lecture you have learnt:
1. If x(t) ↔ X(s) with ROC = R, then x(t − t₀) ↔ e^{−st₀} X(s) with ROC = R, and e^{s₀t} x(t) ↔ X(s − s₀) with Re(s − s₀) in ROC(X(·)). For the z-transform, x[n − n₀] ↔ z^{−n₀} X(z) with ROC = R except for the possible addition or deletion of the origin or infinity.
2. The continuous-time concept of time scaling does not directly extend to discrete time.Read upsampling for the reason.
3. Other properties of z-transform are similar to that of Laplace transform.
Now 'C' is any vertical line in the s-plane that is parallel to the imaginary axis and lies within the ROC. But here we consider
'C' to be the vertical line at σ = 0, i.e. the imaginary axis itself.
From the above discussion it is clear that the LT reduces to the FT when the complex variable consists only of its imaginary part. Thus the LT
reduces to the FT along the jω-axis (imaginary axis).
Review:
We saw that if the imaginary axis lies in the Region of Convergence of 'X(s)' and the Laplace Transform is evaluated along it,
the result is the Fourier Transform of 'x(t)'.
Relationship between inverse Laplace Transform and inverse Fourier Transform
Similarly, while evaluating the Inverse Laplace Transform of 'X(s)', if we take the line 'C' to be the imaginary axis (provided it lies in
the Region of Convergence), the integral reduces to the Inverse Fourier Transform of 'X(f)', as expected.
This tells us that there is a close relationship between the Laplace Transform and the Fourier Transform: the Laplace Transform is
a generalization of the Fourier Transform, that is, the Fourier Transform is a special case of the Laplace Transform. The Laplace
Transform not only provides us with additional tools and insights for signals and systems which can be analyzed using the Fourier
Transform, but can also be applied in many important contexts in which the Fourier Transform is not applicable. For example, the
Laplace Transform can be applied to unstable signals, such as exponentials growing with time, to which the Fourier Transform
cannot be applied because they do not have finite energy.
Inverse Z - Transform
We know that there is a one to one correspondence between a sequence x[n] and its ZT which is X[z].
Obtaining the sequence 'x[n]' when 'X[z]' is known is called Inverse Z - Transform.
For a ready reference , the ZT and IZT pair is given below.
X[z] = Z { x[n] } Forward Z - Transform
x[n] = Z -1 { X[z] } Inverse Z - Transform
For a discrete variable signal x[n], if its z - Transform is X(z), then the inverse z - Transform of X(z) is given by
where ' C ' is any closed contour which encircles the origin and lies ENTIRELY in the Region of Convergence.
Similarly, on making the same substitution (z = e^{jω}) in the inverse z-Transform of X(z), we recover the inverse DTFT, provided the substitution is valid, that is, |z| = 1 lies in
the ROC.
Hence we conclude that the z -Transform is just an extension of the Discrete Time Fourier Transform. It can be
applied to a broader class of signals than the DTFT, that is, there are many discrete variable signals for which the
DTFT does not converge but the z-Transform does so we can study their properties using the z -Transform.
Examples:
Also we observe that the DTFT of the sequence does not exist, since the summation Σ_n |x[n]|
diverges. This example confirms that in some cases the z-Transform may exist even though the DTFT does not.
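A concrete instance of this (our own example, not one worked in the lecture) is x[n] = 2ⁿ u[n]: its z-transform 1/(1 − 2z⁻¹) converges for |z| > 2, but on the unit circle, where the DTFT lives, the defining sum diverges. The sketch below compares the partial sums at a point inside the ROC with those on |z| = 1:

```python
import numpy as np

n = np.arange(0, 60)
x = 2.0 ** n          # x[n] = 2^n u[n]

# Inside the ROC, e.g. z = 3: partial sums converge to 1/(1 - 2/3) = 3
z = 3.0
partial = np.cumsum(x * z ** (-n.astype(float)))
print("partial sums at z = 3:", partial[[4, 9, 59]])

# On the unit circle (z = 1, i.e. the DTFT at w = 0): partial sums diverge
partial_unit = np.cumsum(x * 1.0 ** (-n))
print("partial sums at z = 1:", partial_unit[[4, 9, 59]])
```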
Conclusion:
In this lecture you have learnt:
If the Laplace Transform of 'x(t)' is 'X(s)', then the Inverse Laplace Transform of X(s) is given by
x(t) = (1/2πj) ∫_C X(s) e^{st} ds,
where 'C' is any vertical line in the s-plane parallel to the imaginary axis and lying within the ROC.
Fourier Transform of 'x(t)' = Laplace Transform of 'x(t)' when s=jw i.e. if the imaginary axis lies in the Region of Convergence of
'X(s)' and the Laplace Transform is evaluated along it , then the result is the Fourier Transform of 'x(t)'.
Congratulations, you have finished Lecture 33.
Introduction
In theory one can find the Inverse Laplace and the Inverse z - Transform using the integral formula given previously but this procedure
will generally involve integration of complex functions which may become very difficult at times so we normally find the inverse transform
using our 'experience' or observation. That is, we try to split the given function (whose transform we have to calculate) into those
functions whose Inverse Transform we know beforehand; just as we do while integrating a function of real variables.
We will proceed in a step-by-step manner towards our goal of finding the Inverse Laplace and z-Transform of a given function. We shall
focus in depth only on the class of systems which have a rational system function. Let us first define what a rational system
function is.
A rational system function is a system function which is a rational function, i.e. a rational system function corresponding to a system can
be expressed as a ratio of two polynomials in s ( respectively z ).
A continuous variable (respectively discrete variable) LSI system with rational system function H(s) (respectively H(z) ) is called a
RATIONAL SYSTEM.
Rational systems are the most studied and used. Also, these are the best known realizable systems (whether in continuous or discrete
variable), that is, various techniques have been developed to implement rational systems in practical situations, but this is not the case
with other systems, i.e. those which don't have a rational system function. This is the main reason why we focus on systems having
rational system functions only.
Now we proceed to define the terms 'poles' and 'zeroes' of a rational system function.
Example:
Quite obviously, the solutions of N(s) = 0 make H(s) also equal to zero. Hence the term 'zero'.
Now we show one of the important properties of the Transforms, namely differentiation of Laplace and z - Transform with respect to s
and z respectively.This property comes in very useful while calculating the inverse Laplace and inverse z - Transform of a given function.
Differentiating the Laplace transform under the integral sign gives
dX(s)/ds = −∫_{−∞}^{∞} t x(t) e^{−st} dt,
provided it exists. This can be thought of as taking the Laplace Transform of t·x(t) (up to a sign).
Hence we observe that, if x(t) ↔ X(s), then t·x(t) ↔ −dX(s)/ds.
Similarly, by repeated differentiation, tⁿ x(t) ↔ (−1)ⁿ dⁿX(s)/dsⁿ.
Differentiation of z - Transform
Now for the discrete variable case, the z-Transform of x[n] is given by
X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}.
On differentiating both sides w.r.t. z and then multiplying both sides by −z, we get
−z dX(z)/dz = Σ_{n=−∞}^{∞} n x[n] z^{−n},
i.e. n·x[n] ↔ −z dX(z)/dz.
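The differentiation property can be sanity-checked numerically. For the standard pair x[n] = aⁿ u[n] ↔ X(z) = z/(z − a), the property predicts that n·x[n] transforms to −z dX/dz = az/(z − a)². The sketch below (our own choice of a and evaluation point z inside the ROC) compares the series sum against that formula:

```python
import numpy as np

a, z = 0.5, 2.0          # |z| = 2 > |a|, so z is inside the ROC |z| > a
n = np.arange(0, 200)    # 200 terms are plenty: the tail decays like (a/z)^n

# Left side: direct series  sum_n  n * x[n] * z^{-n}
series = np.sum(n * a ** n * z ** (-n.astype(float)))

# Right side: -z dX/dz evaluated in closed form, with X(z) = z/(z - a)
formula = a * z / (z - a) ** 2

print(series, formula)   # both ~ 4/9 = 0.444...
```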
Example:
In the first case x(t) is a right sided signal, and the Region of Convergence lies to the right side of a vertical line in the s - plane and in
the second case, x(t) is a left sided signal and the Region of Convergence lies to the left side of a vertical line in the s - plane.
As a general rule, we can say that for a right sided continuous variable signal, the ROC will be on the right side of a vertical line in the s
plane and similarly on the left side of a vertical line in the s plane for a left sided continuous variable signal.
In the first case, x[n] is a right sided signal and the Region of Convergence is the region in the z plane outside a circle and extending
up to infinity. In the second case, x[n] is a left sided signal and the Region of Convergence is the region lying inside a circle and including
z = 0.
As a general rule, we can say that for a right sided discrete variable signal, the ROC will be to the exterior of some circle in the z plane
and for a left sided discrete variable signal, it will be to the interior of some circle in the z plane.
The above example will be very useful while finding the inverse transform ( Laplace or z ) of a given rational function.
Hence , we get
Now we proceed to solve inverse Laplace and z-transforms with multiple poles, after having looked at the simple-pole cases. We
shall make extensive use of the differentiation properties of the Laplace and z-transforms for solving multiple-pole cases. We
start by deriving the following relations.
Solution
Proof ( By Mathematical Induction )
Part ii) Consider that the above result holds true for M = K;
Solution:
Proof ( By Mathematical Induction )
Part ii) Consider that the above result holds true for M = k;
Now,
Taking z - inverse on both the sides, and using the above stated property we get
Conclusion:
In this lecture you have learnt:
A rational system function is a system function which can be expressed as a ratio of two polynomials in s ( respectively z )
A continuous variable (respectively discrete variable) LSI system with rational system function H(s) (respectively H(z) ) is called a
RATIONAL SYSTEM.
If we think of H(s) as
the solutions of N(s) = 0 are called the zeroes of H(s) ,and the solutions of D(s) = 0 are called the poles of H(s).
Examples:
1) Let us consider the function in s:
i.e.
As the ROC has not been specified, there are several different ROCs and, correspondingly, several different impulse responses.
Possible ROCs for the system with poles at s = −1 and s = 2 and a zero at s = 1 are shown in Figs. (a)-(c); each ROC corresponds to a different impulse response for the same system function.
Conclusions:
Properties of a certain class of systems can be explained simply in terms of the locations of the poles. In particular, consider a causal LTI
system with a rational system function H(s). Since the system is causal, the ROC is to the right of the rightmost pole. Consequently, for
this system to be stable (i.e. for the ROC to include the jω-axis), the rightmost pole of H(s) must be to the left of the jω-axis, i.e. all poles must satisfy Re(s) < 0.
Inverse Z - transform:
Consider an arbitrary rational z-transform:
Examples:
Example 1:
Consider the z transform
Example :
Consider the z transform
There are two poles, one at z = 1/4 and one at z = 1/3. The partial fraction expansion, expressed in polynomials in 1/z, is
X(z) = 1/(1 − (1/4)z⁻¹) + 2/(1 − (1/3)z⁻¹).
Thus, x[n] is the sum of 2 terms, one with z-transform 1/(1 − (1/4)z⁻¹) and the other with z-transform 2/(1 − (1/3)z⁻¹).
As the ROC is not mentioned, we get different inverses for different possible ROCs. We do not discuss causality and stability as this may
not be a system function. One possible inverse is worked out, the other two left as an exercise to the reader.
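One inverse can be verified computationally. Assuming the right-sided ROC |z| > 1/3, each first-order term inverts to a causal exponential, giving x[n] = (1/4)ⁿ u[n] + 2·(1/3)ⁿ u[n]. The sketch below (our own code, not from the lecture) recovers the power-series coefficients of the combined rational function by long division and compares them with that closed form:

```python
import numpy as np

# X(z) = (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2): the two partial fractions
# 1/(1 - (1/4)z^-1) and 2/(1 - (1/3)z^-1) combined over a common denominator.
b = [3.0, -(1.0 / 3.0 + 0.5)]                # numerator: 3 - (5/6) z^-1
a = [1.0, -(0.25 + 1.0 / 3.0), 0.25 / 3.0]   # denominator coefficients

N = 10
x = np.zeros(N)
for n in range(N):
    # Long division recursion: x[n] = b[n] - sum_k a[k] * x[n-k]
    acc = b[n] if n < len(b) else 0.0
    for k in range(1, len(a)):
        if n - k >= 0:
            acc -= a[k] * x[n - k]
    x[n] = acc

closed = 0.25 ** np.arange(N) + 2.0 * (1.0 / 3.0) ** np.arange(N)
err = np.max(np.abs(x - closed))
print("max mismatch:", err)   # ~ machine precision
```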
fig e:
fig f:
Conclusion:
In this lecture you have learnt:
If the system is causal, then the ROC extends from the rightmost pole to infinity.
A system is stable if the ROC includes the imaginary axis; therefore, for a causal stable system, the rightmost pole of 'H(s)' must be to the left of the
imaginary axis.
A causal system with a rational system function 'H(s)' is stable if and only if all poles of H(s) lie in the left half of the s-plane;
analogously, a causal discrete system with rational 'H(z)' is stable if and only if its ROC includes the unit circle in the z-plane.
where H(s) is the system function (assuming the system has a system function). For a causal system, h(t) = 0 for t < 0, i.e. h(t) can be non-zero only for t > 0; thus h(t) is right-sided, the ROC of H(s) is a right-half plane, and H(s) remains finite as Re(s) → ∞.
DISCRETE RATIONAL SYSTEM:
A causal discrete time LSI system has an impulse response h[n] which is zero for n < 0 and is thus right-sided. Since h[n] = 0 for all
n < 0 (causality), the ROC of the system function H(z) is the exterior of a circle and includes z = ∞.
Example:
Let h[n] = aⁿ u[n]; then h[n] = 0 for n < 0, so the corresponding system is causal. But consider h[n] = aⁿ⁺¹ u[n + 1]: it is right-sided (its ROC is still the exterior of a circle), yet h[−1] ≠ 0, so the system is not causal.
Exploring the convergence of the Laplace transform of the impulse response of a stable LSI system, we find that ∫|h(t)| dt < ∞ (stability) implies that H(s) converges for Re(s) = 0, i.e. the imaginary axis lies in the ROC.
In general, Re{s} = 0 lying in the ROC is not a sufficient condition to imply stability. But for rational systems, Re{s} = 0 lying in the ROC does imply that the system is stable.
Now, we will prove the above result .
Proof of sufficiency: For any system, poles cannot lie in the ROC. Thus, if Re(s) = 0 lies in the ROC, there are no poles on the imaginary axis Re(s) = 0.
Suppose α and β are poles of the system function H(s) with Re(α) < 0 and Re(β) > 0.
Now consider the inverse transform of H(s): choosing the ROC to contain Re(s) = 0 fixes which inverse we take as the impulse response of the system.
Thus, in a rational system whose system-function ROC includes Re(s) = 0, the poles to the left of the imaginary axis contribute right-sided exponentially decaying terms, and the poles to the right of the imaginary axis contribute left-sided exponentially decaying terms.
A pole β to the right of the imaginary axis contributes −P(t) e^{βt} u(−t), where P(t) is a polynomial of degree k − 1,
k = order of the pole at β
in H(s).
The integral ∫|h(t)| dt is bounded by the sum of the absolute integrals of these terms (a finite number, because the system function is
rational).
Therefore, the system is stable.
Later, we shall prove the theorem that, irrespective of the polynomial p(t), the integral ∫₀^∞ |p(t)| e^{−at} dt (a > 0) converges, in
order to justify the convergence of each absolute integral.
We can represent the system function graphically by showing all poles (zeros of D(s)) and zeros (zeros of N(s)) and its ROC in the s-plane.
eg:
Representation of poles and zeros
Consider representation of the system function
--pole of order 2
-- pole of order 3
Recall that in a rational system with ROC including Re(s) = 0, the poles to the left of the imaginary axis contribute
right-sided exponentially decaying terms and the poles to the right of the imaginary axis contribute left-sided exponentially decaying terms.
Thus, as we have seen earlier, each left-half-plane pole contributes a right-handed decaying exponential and each right-half-plane pole contributes a left-handed decaying exponential,
and the contributions of repeated factors in the denominator are these exponentials multiplied by polynomials in t.
Theorem
For any polynomial p(t) and any a > 0, the integral ∫₀^∞ |p(t)| e^{−at} dt converges.
Proof by induction on the degree of the polynomial:
Base case: for a polynomial of degree 0, ∫₀^∞ e^{−at} dt = 1/a converges.
Inductive step: suppose the statement holds for polynomials of degree k − 1. For a polynomial of degree k, integration by parts reduces the integral to one involving a polynomial of degree k − 1, which converges by the induction hypothesis; thereby the original integral also converges.
Hence, proved.
Theorem 2
For a discrete rational system stability implies and is implied by the unit circle in the z plane belonging to the ROC of the system function.
Proof: (a) Stability implies that |z| = 1 belongs to the ROC:
If the discrete rational system is stable, then Σ|h[n]| < ∞, and hence
the z-transform of the impulse response (i.e. the system function) converges for |z| = 1.
(b) For a stability to be implied by | z | =1 (the unit circle ) belonging to the ROC of the system function
A pole cannot lie on the unit circle | z | = 1 in a stable system.
Suppose α is a pole of H(z) with |α| < 1 and β is a pole with |β| > 1 (each of order 1, say).
The contribution of α to the impulse response is αⁿ u[n], a right-sided exponentially decaying term (since |α| < 1),
possibly multiplied by a polynomial in n if the order of the pole is > 1.
The contribution of β is −(βⁿ u[−n − 1]), a left-sided exponentially decaying term (since |β| > 1),
possibly multiplied by a polynomial in n if the order of the pole is > 1.
Increasing the number of poles would not make any difference to the proof .
Now we know that the contributions are decaying exponentials, possibly multiplied by polynomials in n.
A finite number of such terms is absolutely summable, and hence the impulse response is absolutely summable.
Therefore, the system is stable.
Theorem 4
The absolute summability of the one-sided terms above: we prove the summability of p(n) αⁿ u[n], where |α| < 1 and p(n) is a polynomial.
Proof by induction on the degree of the polynomial:
Base case (k = 1): Σₙ |α|ⁿ converges, since |α| < 1.
Inductive step: the summability of a term with a degree-k polynomial depends on the summability of a term with a degree-(k − 1) polynomial, which converges by our assumption.
THEOREM:
A necessary and sufficient condition for a continuous rational system to be causal and stable is that all the poles lie in the left half
plane, i.e. Re(s) < 0.
THEOREM:
A necessary and sufficient condition for a discrete rational system to be causal and stable is that all the poles lie inside the unit
circle, i.e. |z| < 1.
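The two theorems reduce to simple pole-location checks. Below is a small sketch applying them (the function names and the example pole sets are our own, chosen only for illustration):

```python
def causal_stable_continuous(poles):
    """Causal + stable iff every pole satisfies Re(s) < 0."""
    return all(complex(p).real < 0 for p in poles)

def causal_stable_discrete(poles):
    """Causal + stable iff every pole satisfies |z| < 1."""
    return all(abs(p) < 1 for p in poles)

# e.g. H(s) = 1/((s+1)(s+2)): poles at -1 and -2, causal and stable
print(causal_stable_continuous([-1.0, -2.0]))   # True
# A pole at s = 2 (right half plane): cannot be both causal and stable
print(causal_stable_continuous([-1.0, 2.0]))    # False
# Poles at 1/4 and 1/3, inside the unit circle: causal and stable
print(causal_stable_discrete([0.25, 1 / 3]))    # True
# A pole outside the unit circle fails the test
print(causal_stable_discrete([0.5, 1.25]))      # False
```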
System definition of a causal rational system and the linear constant coefficient difference equation
(a) Continuous system: The system function can be written as a ratio of two polynomials.
It is always possible to write the system function this way for a causal rational discrete system.
Taking the inverse z-transform of the above equation, we obtain the corresponding linear constant coefficient difference equation.
Conclusion :
In this lecture you have learnt:
Necessary and sufficient condition for causality in a continuous rational system: the region of convergence must be a right-half plane extending to the right of the rightmost pole (including Re(s) → ∞).
Necessary and sufficient condition for causality in a discrete rational system: the region of convergence must be the exterior of the outermost pole and include z = ∞.
In general, Re{s} = 0 lying in the ROC is not a sufficient condition to imply stability; but for rational
systems, Re{s} = 0 lying in the ROC does imply that the system is stable.
In a rational system with the ROC of the system function including Re(s) = 0, the poles to the left of the
imaginary axis contribute right-sided exponentially decaying terms, and the poles to the right of the
imaginary axis contribute left-sided exponentially decaying terms.
For a discrete rational system stability implies and is implied by the unit circle in the z plane
belonging to the ROC of the system function.
Congratulations, you have finished Lecture 36.