
Module 1 : Signals In Natural Domain

Lecture 1 : Introduction
Objectives
In this lecture you will learn the following
First, we will look at formal definitions of the terms 'signals' and 'systems', and then introduce some simple examples which may be better understood when seen from a signals and systems perspective.
We will also frame our main objectives for this course.

Introduction
The intent of this introduction is to give the reader an idea about Signals and Systems as a field of study and its applications. But we
must first, at least vaguely define what signals and systems are.
Signals are functions of one or more variables .
Systems respond to an input signal by producing an output signal .

Examples of signals include:

1. A voltage signal: voltage across two points varying as a function of time.
2. A force pattern: force varying as a function of 2-dimensional space.
3. A photograph: color and intensity as a function of 2-dimensional space.
4. A video signal: color and intensity as a function of 2-dimensional space and time.

Examples of systems include:


1. An oscilloscope: takes in a voltage signal, outputs a 2-dimensional image characteristic of the voltage signal.
2. A computer monitor: inputs voltage pulses from the CPU and outputs a time varying display.
3. An accelerating mass: force as a function of time may be looked at as the input signal, and velocity as a function of time as the output signal.
4. A capacitance: terminal voltage signal may be looked at as the input, current signal as the output.
Examples of mechanical and electrical systems
You are surely familiar with many of these signals and systems and have probably analyzed them as well, but in isolation. For instance, you must have studied accelerating masses in a mechanics course (see Fig (a)) and capacitances in an electrostatics course (see Fig (b)), separately.

Fig (a)

Fig (b)

As you can see, there is a similarity in the way the input signal is related to the output signal. These similarities will interest us in this course, as they may allow us to make inferences common to both these systems.
We will develop very general tools and techniques for analyzing systems, independent of the actual context of their use. Our approach in this course will be to define certain properties of signals and systems (inspired, of course, by the properties of the real-life examples we have), and then link these properties to consequences. These "links" can then be used directly in connection with a large variety of systems - electrical, mechanical, chemical, biological - knowing only how the input and output signals are related! Thus, our focus when dealing with signals and systems will be on the relationship between the input and output signals, and not really on the internals of the system.

Issues that will concern us in signals and systems include


1. Characterization (description of behavior) of systems and signals.
2. Design of systems with certain desired properties.
3. Modification of existing systems to our advantage.
With this introduction, let us go on to formally defining signals and systems.

Conclusion:
In this lecture you have learnt:
Signals are functions of one or more independent variables.
Systems are models which produce an output signal in response to an input signal.
Trying to identify real-life examples as models of signals and systems will help us understand the subject better.

Congratulations, you have finished Lecture 1.

Module 1 : Signals In Natural Domain


Lecture 2 : Description of Signals
Objectives
In this lecture you will learn the following
We will look at another definition of a signal, in the context of sets.
We will classify signals as continuous and discrete and then understand them with the help of some examples.
Discrete signals being a new topic for a beginner, we will try to understand them in depth and answer questions like:
why we need discrete signals, whether the discrete variable need be uniform, how we order discrete tuples, and how we represent discrete signals.
What is a signal?
A signal, as stated before, is a function of one or more independent variables. Recall that a function defines a correspondence between two sets, i.e. corresponding to each element of one set (called the domain), there exists a unique element of another set (called the co-domain).

Notice that more than one element in the domain may correspond to the same element in the co-domain.
A function is also sometimes referred to as a mapping. Thus a signal may also be defined as a mapping from one set to another.
For example a speech signal would be mathematically represented by acoustic pressure as a function of time. Some more examples of
signals are voltage, current or power as functions of time. A monochromatic picture can be described as a signal which is mathematically
represented by brightness as a function of two spatial variables.
As mentioned earlier, there may be more than one independent variable. For example, the independent variable for a photograph is 2-dimensional space (2 space variables). The variables may also be hybrid, say 2 space variables and 1 time variable (e.g. a video signal).
Note: In this course, we shall focus our attention on signals of only one variable. Also, for convenience, we shall generally refer to the
independent variable as time. So don't let the recurring reference to time confuse you. It is symbolic for any independent variable you
care to choose.
Discrete-time signals
Discrete variables are those in which there exists a neighbourhood around each value in which no other value is present.
Intuitively, it means a variable like the natural numbers on the real line - we can isolate each instance of the discrete variable from the other instances.
Why should we bother about discrete variables?
Discrete variables come up intrinsically in several applications. Take for example, the cost of gold in the market every day. The
dependent variable (cost) is a function of discrete time (incremented once every day). Another example is the marks scored by the
students in class. Here the dependent variable (marks) is a function of the discrete variable roll number. While it is perfectly fine to talk
about marks of 02007005, it makes no sense to talk of marks of roll no 02007011.67 - this system is inherently discrete.
Another point that should be noted here is that some results about signals and systems are common to both: continuous as well as
discrete signals, but can be grasped more intuitively in one case as compared to the other. So, we shall pursue the study of both these
cases simultaneously in this course.
Need the discrete variable be uniform?
No. Though we imagine natural numbers or integers when we think of discrete signals, the points need not be equally spaced. For example, if the markets remained closed on Sundays, we would not record a price for gold on that day - so the spacing between the variables on this axis changes.
In most common cases, however, the independent variable is uniform - and throughout this course, we shall assume a uniform spacing of
the variable unless otherwise stated explicitly. This assumption makes the analysis more intuitive and also yields several good theorems
for our use, which we shall see as we proceed.

Do discrete signals necessarily come from continuous signals?
Although intuition may suggest so, this is not necessarily the case. In one of the examples above, we considered the daily rate of gold. Here, time is intrinsically a continuous variable, and we made a discrete variable by taking measurements at certain intervals. However, the marks as a function of roll numbers intrinsically form a discrete system - there is no continuous axis of roll numbers.
Then how do we define the neighbourhood?
Okay, by now it may seem that we are hiding some details here - we defined a discrete variable as one in which no other value exists in a certain neighbourhood of each value. Now for roll numbers, a neighbourhood does not make sense. How do we formally define a discrete signal?
A discrete variable is one which can ultimately be indexed by integers.

Examples of discrete variables


Now that we seem to have an intuitive understanding of what a discrete variable is, let us take some examples of discrete variables:
First, the simplest and most intuitive discrete set is the integer axis itself:

Then we can consider a set of tuples (a,b) such that a and b are both in the range 0 to 5 - how can we index them by integers?

Now let's come to something that is discrete all right, but not very intuitive about how we can index it - the rational numbers:

We represent the rational numbers along the fourth quadrant, as y/x. The repeated areas (like 2/2, 3/3, 4/2 etc) are to be neglected,
hence are in gray. Then we go on indexing them diagonally as shown by the animation. Now, we go ahead another step - how do we
index a full plane?
[Animation: indexing the points of the integer plane by expanding circles from the origin]

Note the method: we start in expanding circles from the origin. As soon as a circle cuts integer points, we pause and number the points clockwise from the positive y axis. This method is by no means unique - but just one valid indexing is enough for us to call the system discrete. Here we pause to note that although variables like the integer plane above can be indexed by integers, it is far more convenient to use tuples of integers to index them. It can be proved mathematically that any finite tuple of integers (a1, a2, a3, ..., an) can be indexed by a single integer variable. We leave out the proof here, but the interested reader can find it in books on number theory.
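To make this indexing idea concrete, here is a small Python sketch (an illustrative example, not part of the lecture) that enumerates all pairs of non-negative integers along anti-diagonals, in the same spirit as the diagonal indexing of the rationals above:

    # A minimal sketch: index pairs (a, b) of non-negative integers by a
    # single integer, walking the anti-diagonals a + b = 0, 1, 2, ...
    # This is one valid indexing; as noted above, it is by no means unique.
    def diagonal_pairs():
        s = 0
        while True:
            for a in range(s + 1):
                yield (a, s - a)      # all pairs with a + b == s
            s += 1

    gen = diagonal_pairs()
    for index in range(6):
        print(index, next(gen))
    # 0 (0, 0)
    # 1 (0, 1)
    # 2 (1, 0)
    # 3 (0, 2)
    # 4 (1, 1)
    # 5 (2, 0)

Every pair is reached after finitely many steps, which is exactly what "indexable by integers" demands.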

Representation of discrete variables


Let us decide some conventions for use with discrete variables:
We shall mostly deal with time as the discrete variable, and shall denote it by n and keep t for continuous time.
We will enclose discrete variables in brackets [.] as opposed to parenthesis (.) for continuous variables.
A discrete signal is also called a sequence - the word coming from the familiar usage in mathematics.
We shall next discuss systems.

Conclusion:
In this lecture you have learnt:
A signal may also be defined as a mapping from one set (domain) to another (co-domain).
A continuous-time signal means the mapping is defined over a continuum of values of the independent variable.
A discrete variable is one which can ultimately be indexed by integers (possibly in terms of tuples).
We will enclose discrete variables in brackets [.] as opposed to parenthesis (.) for continuous variables.

Congratulations, you have finished Lecture 2.

Module 1 : Signals in Natural Domain


Lecture 3 : Description of Systems
Objectives
In this lecture you will learn the following
We will try to understand "what are systems?" with the help of some examples.
We will understand the meaning of the term 'system description' and look at implicit and explicit descriptions of systems.
We will also look at the mapping involved in systems.
What is a system?
A signal was defined as a mapping from a set of the independent variable (domain) to the set of the dependent variable (co-domain). A
system is also a mapping, but across signals, or across mappings . That is, the domain set and the co-domain set for a system
are both sets of signals, and corresponding to each signal in the domain set, there exists a unique signal in the co-domain set.

In signals and systems terminology, we say: corresponding to every possible input signal, a system produces an output signal.
In that sense, realize that a system, as a mapping, is one step hierarchically higher than a signal. While the correspondence for a signal is from one element of one set to a unique element of another, the correspondence for a system is from one whole mapping in a set of mappings to a unique mapping in another set of mappings!
Examples of systems
Examples of systems are all around us. The speakers that go with your computer can be looked at as systems whose input is voltage pulses from the CPU and whose output is music (an audio signal). A spring may be looked at as a system with the input being, say, the longitudinal force on it as a function of time, and the output signal being its elongation as a function of time. The independent variable for the input and output signals of a system need not even be the same.
Example of CRO

An input voltage signal f(t) is provided to the CRO by using a function generator. The CRO (the system) transforms this input function into an image that is displayed on the CRO screen. The luminosity of every point on this display (i.e. the value of the signal) depends on the x and y coordinates. So, the output S(x, y) has space as its independent variable, whereas the input independent variable is time.
In fact, it is even possible for the input signal to be continuous-time and the output signal to be discrete-time or vice-versa. For
example, our speech is a continuous-time signal, while a digital recording of it is a discrete-time signal! The system that converts any
one to the other is an example of this class of systems.
As these examples may have made evident, we look at many physical objects/devices as systems, by identifying some variation
associated with them as the input signal and some other variation associated with them as the output signal (the
relationship between these, which essentially defines the system, depends on the laws or rules that govern the system). Thus a capacitance with voltage (as a function of time) considered as the input signal and current considered as the output signal is not the same system as a capacitance with, say, charge considered as the input signal and voltage considered as the output signal. Why?


The mappings that define the system are different in these two cases.

We shall next discuss what system description means.


System description
The system description specifies the transformation of the input signal to the output signal. In certain cases, a system has a closed form
description. E.g. the continuous-time system with description y(t) = x(t) + x(t-1); where x(t) is the input signal and y(t) is the output
signal. Not all systems have such a closed form description. Just as certain "pathological" functions can only be specified by tabulating
the value of the dependent variable against all values of the independent variable; some systems can only be described by tabulating the
output signal against all possible input signals.
Explicit and Implicit Description
When a closed form system description is provided, it may either be classified as an explicit description or an implicit one.
For an explicit description, it is possible to express the output at a point purely in terms of the input signal. Hence, when the input is known, it is easy to find the output of the system when the system description is explicit. In the case of an explicit description, the relationship between the input and the output is clear to see, e.g. y(t) = { x(t) }^2 + x(t-5).
In case the system has an implicit description, it is harder to see the input-output relationship. An example of an implicit description is y(t) - y(t-1) x(t) = 1. So when the input is provided, we are not directly able to calculate the output at that instant (since the output at 't-1' also needs to be known). Although in this case too there are methods to obtain the output based solely on the input, or to convert this implicit description into an explicit one, the description by itself is in the implicit form.
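To make the distinction concrete, here is a short Python sketch (an illustration, using discrete-time analogues of the two descriptions above): the explicit description is evaluated directly from the input, while the implicit one must be iterated forward from an assumed initial condition.

    # Explicit description (discrete analogue of y(t) = {x(t)}^2 + x(t-5)):
    # the output at n is written purely in terms of the input.
    def explicit_output(x, n):
        return x(n) ** 2 + x(n - 5)

    # Implicit description (discrete analogue of y(t) - y(t-1) x(t) = 1):
    # rearranged as y[n] = 1 + y[n-1] x[n], so past outputs are needed;
    # we iterate from an assumed initial condition y[-1] = 0.
    def implicit_output(x, n_max, y_init=0.0):
        y = y_init
        for n in range(n_max + 1):
            y = 1 + y * x(n)
        return y

    x = lambda n: 0.5 if n >= 0 else 0.0    # a sample input signal
    print(explicit_output(x, 10))           # 0.75, read off directly
    print(implicit_output(x, 10))           # needs the whole past history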
The mapping involved in systems
We shall next discuss the idea of mapping in a system in a little more depth.

A signal maps an element in one set to an element in another. A system, on the other hand maps a whole signal in one set to a signal in
another. That is why a system is called a mapping over mappings. Therefore, the value of the output signal at any instant of time
(remember "time" is merely symbolic) in general depends on the whole input signal. Thus, even if the independent variable for the input and output signals is the same (say time t), do not assume that the value of the output signal at, say, t = 5 depends only on the value of the input signal at t = 5.
For example, consider the system with description:

y(t) = ∫_{-∞}^{t} x(τ) dτ

The output at, say, t = 5 depends on the values of the input signal for all t <= 5.
Henceforth, we shall call systems with both input and output signals continuous-time as continuous-time systems, and those with both input and output signals discrete-time as discrete-time systems. Those that do not fall into either of these classes (i.e. input discrete-time and output continuous-time, or vice-versa) we shall call hybrid systems. Now that the necessary introductions are done, we can get on to system properties.

Conclusion:
In this lecture you have learnt:
A system is a mapping across signals, in other words mapping across mappings.
In signals and systems terminology, we say that corresponding to every possible input signal, a system produces an output
signal.
For an explicit description, it is possible to express the output at a point, purely in terms of the input signal.
In case the system has an implicit description, when the input is provided we may not be able to calculate the output directly; some further mathematical manipulation may be needed.

Congratulations, you have finished Lecture 3.

Module 1 : Signals in Natural Domain


Lecture 4 : Properties of Systems
Objectives
In this lecture you will learn the following
We shall look at different system properties, namely:
memory
linearity
shift-invariance
stability
causality
We shall also deduce some theorems based on the above properties.
Memory:
Memory is a property relevant only to systems whose input and output signals have the same independent variable. A system is said
to be memoryless if its output for each value of the independent variable is dependent only on the input signal at that
value of the independent variable. For example, the system with description y(t) = 5x(t) (where y(t) is the output signal corresponding to input signal x(t)) is memoryless. In the physical world, a resistor can be considered a memoryless system (with voltage considered to be the input signal, and current the output signal).
By definition, a system that does not have this property is said to have memory.
How can we identify if a system has memory?
For a memoryless system, changing the input at any instant can change the output only at that instant. If, in some case, a change in
input signal at some instant changes the output at some other instant, we can be sure that the system has memory.
Note: Consider a system whose output Y(t) depends on input X(t) as: Y(t) = X(t-5) + { X(t) - X(t-5) }
While at first glance the system might appear to have memory, it does not: the description simplifies to Y(t) = X(t). This brings us to the idea that a given description of a system need not be the most economical one. The same system may have more than one description.
Examples:
Assume y[n] and y(t) are respectively outputs corresponding to input signals x[n] and x(t)
1. The identity system y(t) = x(t) is of course memoryless.
2. The system with description y[n] = x[n-5] has memory. The output at any "instant" depends on the input 5 "instants" earlier.
3. The system with description

y[n] = Σ_{k=-∞}^{n} x[k]

also has memory. The output at any instant depends on all past and present inputs.
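A quick numerical check (an illustrative sketch, not from the original notes): for a memoryless system, changing the input at one instant changes the output only at that instant; for y[n] = x[n-5], the change shows up 5 instants later.

    import numpy as np

    def memoryless_sys(x):
        return 5 * x                      # y[n] = 5 x[n]

    def delay_sys(x):
        y = np.zeros_like(x)
        y[5:] = x[:-5]                    # y[n] = x[n-5]
        return y

    x1 = np.zeros(20)
    x2 = x1.copy()
    x2[3] = 1.0                           # change the input only at n = 3

    # The memoryless system's output changes only at n = 3 ...
    print(np.nonzero(memoryless_sys(x2) - memoryless_sys(x1))[0])   # [3]
    # ... while the delay system's output changes at n = 8 instead.
    print(np.nonzero(delay_sys(x2) - delay_sys(x1))[0])             # [8]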

Linearity:
Now we come to one of the most important and revealing properties systems may have - Linearity. Basically, the principle of linearity is
equivalent to the principle of superposition, i.e. a system can be said to be linear if, for any two input signals, their linear combination
yields as output the same linear combination of the corresponding output signals.
Definition:
A system is said to be linear if, for any two input signals x1(t) and x2(t) with corresponding outputs y1(t) and y2(t), and any two constants a and b, the input a.x1(t) + b.x2(t) produces the output a.y1(t) + b.y2(t).
(It is not necessary for the input and output signals to have the same independent variable for linearity to make sense. The definition for systems with input and/or output signal being discrete-time is similar.)
Example of linearity
A capacitor, an inductor, a resistor or any combination of these are all linear systems, if we consider the voltage applied across them as
an input signal, and the current through them as an output signal. This is because these simple passive circuit components follow the
principle of superposition within their ranges of operation.

Additivity and Homogeneity:


Linearity can be thought of as consisting of two properties:
Additivity
A system is said to be additive if, for any two input signals x1(t) and x2(t) with outputs y1(t) and y2(t),

the response to x1(t) + x2(t) is y1(t) + y2(t)

i.e. the output corresponding to the sum of any two inputs is the sum of the two outputs.
Homogeneity (Scaling)
A system is said to be homogeneous if, for any input signal x(t) with output y(t) and any constant a,

the response to a.x(t) is a.y(t)

i.e. scaling any input signal scales the output signal by the same factor.
To say a system is linear is equivalent to saying the system obeys both additivity and homogeneity.
a) We shall first prove that homogeneity and additivity imply linearity.
Consider any two inputs x1(t) and x2(t) with outputs y1(t) and y2(t), and constants a and b. By homogeneity, the response to a.x1(t) is a.y1(t), and the response to b.x2(t) is b.y2(t). By additivity, the response to a.x1(t) + b.x2(t) is a.y1(t) + b.y2(t), which is exactly the linearity condition.
b) To prove linearity implies homogeneity and additivity:
This is easy; put both constants equal to 1 in the definition to get additivity, and one of them equal to 0 to get homogeneity.
Additivity and homogeneity are independent properties.
We can prove this by finding examples of systems which are additive but not homogeneous, and vice versa.
Again, y(t) is the response of the system to the input x(t).
Example of a system which is additive but not homogeneous: y(t) = Re{ x(t) }
[It is homogeneous for real constants but not complex ones - consider a complex constant a: the response to a.x(t) is Re{ a.x(t) }, which is not a.Re{ x(t) } in general.]
Example of a system which is homogeneous but not additive: y(t) = x(t)^2 / x(t-1) (with y(t) taken as 0 wherever x(t-1) = 0)
[From this example can you generalize to a class of such systems?]
Examples of Linearity:
Assume y[n] and y(t) are respectively outputs corresponding to input signals x[n] and x(t)
1) System with description y(t) = t . x(t) is linear.
Consider any two input signals x1(t) and x2(t), with corresponding outputs y1(t) = t.x1(t) and y2(t) = t.x2(t). Let a and b be arbitrary constants. The output corresponding to a.x1(t) + b.x2(t) is
t.( a.x1(t) + b.x2(t) )
= a.( t.x1(t) ) + b.( t.x2(t) ) = a.y1(t) + b.y2(t), which is the same linear combination of y1(t) and y2(t).
Hence proved.
2) The system with description

is not linear.

See for yourself that the system is neither additive nor homogeneous.
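As a numerical illustration (an illustrative sketch, not from the original notes), we can test additivity and homogeneity of y(t) = t.x(t) on sampled signals, and compare with a squaring system, which fails the test:

    import numpy as np

    t = np.linspace(-1.0, 1.0, 201)

    def sys_linear(x):
        return t * x                       # y(t) = t . x(t)

    def sys_square(x):
        return x ** 2                      # y(t) = x(t)^2, not linear

    x1, x2 = np.sin(3 * t), np.cos(5 * t)  # two sampled input signals
    a, b = 2.0, -0.7                       # arbitrary constants

    for sys in (sys_linear, sys_square):
        lhs = sys(a * x1 + b * x2)         # response to the combination
        rhs = a * sys(x1) + b * sys(x2)    # combination of the responses
        print(sys.__name__, np.allclose(lhs, rhs))
    # sys_linear True
    # sys_square False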
Show for yourself that systems with the following descriptions are linear:

Shift Invariance
This is another important property applicable to systems with the same independent variable for the input and output signal. We shall
first define the property for continuous time systems and the definition for discrete time systems will follow naturally.
Definition: Say, for a system, the input signal x(t) gives rise to an output signal y(t). If the input signal x(t - t0) gives rise to the output y(t - t0), for every t0 and every possible input signal, we say the system is shift invariant.
i.e. x(t - t0) produces y(t - t0), for every permissible x(t) and every t0.
In other words, for a shift invariant system, shifting the input signal shifts the output signal by the same offset.

Note this is not to be expected from every system. x(t) and x(t - t0) are different (related by a shift, but different) input signals, and a system, which simply maps one set of signals to another, need not map x(t) and x(t - t0) to output signals that are also related by a shift of t0.
A system that does not satisfy this property is said to be shift variant.
Examples of Shift Invariance:
Assume y[n] and y(t) are respectively outputs corresponding to input signals x[n] and x(t)
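As an illustration (an illustrative sketch, not from the original notes): the first-difference system y[n] = x[n] - x[n-1] is shift invariant, while y[n] = n.x[n] is shift variant, as a direct numerical test shows (we assume the signal is zero before the start of the window):

    import numpy as np

    n = np.arange(16)

    def first_difference(x):
        return x - np.concatenate(([0.0], x[:-1]))      # y[n] = x[n] - x[n-1]

    def ramp_gain(x):
        return n * x                                    # y[n] = n . x[n]

    def shift(x, n0):
        return np.concatenate((np.zeros(n0), x[:-n0]))  # x[n - n0] on this window

    x = np.exp(-0.3 * n)
    n0 = 4
    for sys in (first_difference, ramp_gain):
        invariant = np.allclose(sys(shift(x, n0)), shift(sys(x), n0))
        print(sys.__name__, invariant)
    # first_difference True   (shifted input gives shifted output)
    # ramp_gain False         (the gain n does not move with the signal)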

Stability
Let us learn about one more important system property known as stability. Most of us are familiar with the word stability, which
intuitively means resistance to change or displacement. Broadly speaking, a stable system is one in which small inputs lead to predictable responses that do not diverge, i.e. are bounded. To get a qualitative idea, let us consider the following physical example.
Example
Consider an ideal mechanical spring (elongation proportional to tension). If we consider tension in the spring as a function of time as the
input signal and elongation as a function of time to be the output signal, it would appear intuitively that the system is stable. A small
tension leads only to a finite elongation.
There are various ideas/notions about stability not all of which are equivalent. We shall now introduce the notion of BIBO Stability, i.e.
BOUNDED INPUT-BOUNDED OUTPUT STABILITY.
Statement:
A system is BIBO stable if every bounded input produces a bounded output: whenever there exists an M such that |x(t)| <= M for all t, there exists an N such that |y(t)| <= N for all t.
Note: This should be true for all bounded inputs x(t).


It is not necessary for the input and output signal to have the same independent variable for this property to make sense. It is valid for
continuous time, discrete time and hybrid systems.
Examples
Consider systems with the following descriptions. y(t) is the output signal corresponding to the input signal x(t).

CONCLUSION
BIBO Stable system : In a BIBO stable system, every bounded input is assured to give a bounded output. An unbounded input can
give us either a bounded or an unbounded output, i.e. nothing can be said for sure.
BIBO Unstable system: In a BIBO unstable system, there exists at least one bounded input for which output is unbounded. Again,
nothing can be said about the system's response to an unbounded input.
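For instance (an illustrative sketch, not from the original notes), the accumulator y[n] = Σ_{k<=n} x[k] is not BIBO stable: the bounded input u[n] drives its output without bound, while a pure delay keeps every bounded input bounded.

    import numpy as np

    x = np.ones(1000)                           # u[n]: a bounded input, |x[n]| <= 1

    y_delay = np.concatenate(([0.0], x[:-1]))   # y[n] = x[n-1]
    y_accum = np.cumsum(x)                      # y[n] = sum of x[k] for k <= n

    print(y_delay.max())    # 1.0    -> stays bounded: consistent with BIBO stability
    print(y_accum.max())    # 1000.0 -> grows linearly with n: not BIBO stable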
Causality
Causality refers to cause and effect relationship (the effect follows the cause). In a causal system, the value of the output signal at any
instant depends only on "past" and "present" values of the input signal (i.e. only on values of the input signal at "instants" less than or
equal to that "instant"). Such a system is often referred to as being non-anticipative, as the system output does not anticipate future
values of the input (remember again the reference to time is merely symbolic). As you might have realized, causality as a property is
relevant only for systems whose input and output signals have the same independent variable. Further, this independent variable
must be ordered (it makes no sense to talk of "past" and "future" when the independent variable is not ordered).
What this means mathematically is that if two inputs to a causal (continuous-time) system are identical up to some time t0, the corresponding outputs must also be identical up to this same time (we'll define the property for continuous-time systems; the definition for discrete-time systems will then be obvious).
Definition
Let x1(t) and x2(t) be two input signals to a system and y1(t) and y2(t) be their respective outputs.
The system is said to be causal if and only if:

x1(t) = x2(t) for all t <= t0 implies y1(t) = y2(t) for all t <= t0, for every t0

This of course is only another way of stating what we said before: for any t0, y(t0) depends only on the values of x(t) for t <= t0.
As an example of the behavior of causal systems, consider the figure below:


The two input signals in the figure above are identical up to the point t = t0, and the system being causal, their corresponding outputs are also identical up to the point t = t0.
Examples of Causal systems
Assume y[n] and y(t) are respectively the outputs corresponding to input signals x[n] and x(t)
1. The system with description y[n] = x[n-1] + x[n] is clearly causal, as the output "at" n depends only on values of the input "at instants" less than or equal to n (in this case n and n-1).
2. Similarly, the continuous-time system with description

y(t) = ∫_{-∞}^{t} x(τ) dτ

is causal, as the value of the output at any time t0 depends only on values of the input at t0 and before.


3. But the system with description y[n] = x[n+1] is not causal, as the output at n depends on the input one instant later.
Note:
If you think the idea of non-causal systems is counter-intuitive, i.e. if you think no system can "anticipate the future", remember the independent variable need not be time. Visualizing non-causal systems with, say, one-dimensional space as the independent variable is not difficult at all! Even if the independent variable is time, we need not always be dealing with real time, i.e. with the time axes of the input and output signals synchronized. The input signal may be a recorded audio signal and the output may be the same signal played backwards. This is clearly not causal!
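As a small illustration (an illustrative sketch, not from the original notes): reversing a recorded signal is non-causal in exactly this sense - the output at an early index depends on late samples of the input.

    import numpy as np

    x1 = np.array([1.0, 2.0, 3.0, 0.0, 0.0])
    x2 = np.array([1.0, 2.0, 3.0, 5.0, 7.0])   # identical to x1 for n <= 2

    causal = lambda x: np.concatenate(([0.0], x[:-1]))   # y[n] = x[n-1]
    reverse = lambda x: x[::-1]                          # play the record backwards

    # Causal system: the outputs agree wherever the inputs agree (n <= 2).
    print(causal(x1)[:3], causal(x2)[:3])   # [0. 1. 2.] [0. 1. 2.]
    # Reversal: the outputs differ already at n = 0 - it anticipates the future.
    print(reverse(x1)[0], reverse(x2)[0])   # 0.0 7.0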

Deductions from System Properties


Now that we have defined a few system properties, let us see how powerful inferences can be drawn about systems having one or more
of these properties.
Theorem Statement: If a system is additive or homogeneous, then x(t) = 0 implies y(t) = 0.
Proof:
If the system is homogeneous, note that the zero signal can be written as 0.x(t) for any input x(t); by homogeneity, its response is 0.y(t) = 0.
If the system is additive, note that 0 = 0 + 0, so the response y0(t) to the zero input satisfies y0(t) = y0(t) + y0(t), which forces y0(t) = 0.
This completes the proof.

Theorem:
Statement: If a causal system is either additive or homogeneous, then y(t) cannot be non-zero before x(t) is non-zero.
Proof:
Say x(t) = 0 for all t less than or equal to t0.
We have to show that the system response y(t) = 0 for all t less than or equal to t0.
Since the system is either additive or homogeneous, the response to the zero input signal is the zero output signal (by the previous theorem). The zero input signal and x(t) are identical for all t less than or equal to t0.
Hence, from causality, their output signals are identical for all t less than or equal to t0, i.e. y(t) = 0 there.
We conclude the discussion on system properties by noting that this is not an end, but merely a beginning! Through much of our further
discussions, we will be looking at an important class of systems - Linear Shift-Invariant (LSI) Systems.

Conclusion:
In this lecture you have learnt:
A system is said to be memoryless, if its output for each value of the independent variable is dependent only on the value of the
input signal at that value of the independent variable.
The principle of linearity is equivalent to the principle of superposition, i.e. a system can be said to be linear if, for any two input
signals, their linear combination yields as output the same linear combination of the corresponding output signals.
To say a system is linear is equivalent to saying that the system obeys both additivity and homogeneity.
A system is said to be shift invariant if, whenever the input signal x(t) gives rise to the output signal y(t), the input signal x(t - t0) gives rise to the output y(t - t0), for every t0 and every possible input signal.
A system in which a bounded input leads to a bounded output is said to be BIBO stable.
In a causal system, the value of the output signal at any instant depends only on the "past" and "present" values of the input
signal and/or "past" values of the output signal.
If a system is additive or homogeneous, then x(t)=0 implies that y(t)=0.
If a causal system is either additive or homogeneous, then y(t) cannot be non-zero before x(t) is non-zero.

Congratulations, you have finished Lecture 4.

Module 1 : Signals in Natural Domain


Lecture 5 : Discrete-Time Convolution
Objectives
In this lecture you will learn the following
We shall look into the properties of systems satisfying both linearity and shift invariance i.e. LSI (Linear shift invariant) systems.
We shall define the term "Impulse response" in context to LSI systems.
We shall learn convolution, an operation which helps us find the output of an LTI system given the impulse response and the input signal.
[NOTE: In the following sections we will use LSI and LTI interchangeably. LTI is in fact a special case of LSI, in which the shifts are shifts in time.]
Discrete time convolution
As the name suggests, the two basic properties of an LTI system are:
1) Linearity
A linear system (continuous or discrete time) is a system that possesses the property of SUPERPOSITION. The principle of superposition states that the response to a sum of two or more weighted inputs is the sum of the weighted responses to each of the signals.
Mathematically, if the input is x[n] = Σ_k a_k x_k[n], then the output is
y[n] = Σ_k a_k y_k[n] = a1.y1[n] + a2.y2[n] + ...
Superposition combines in itself the properties of ADDITIVITY and HOMOGENEITY. This is a powerful property and allows us to evaluate the response to an arbitrary input, if it can be expressed as a sum of functions whose responses are known.
2) Time Invariance
It allows us to find the response to a function which is delayed or advanced in time; but similar in shape to a function whose response is
known.
Given the response of a system to a particular input, these two properties enable us to find the response to all its delays or advances
and their linear combination.
Discrete Time LTI Systems
Consider any discrete time signal x[n]. It is intuitive to see how the signal x[n] can be represented as a sum of many delayed/advanced and scaled unit impulse signals.
[Animation: representing a discrete time signal as a sum of scaled, shifted unit impulses]

Mathematically, the function above can be written as such a sum of scaled, shifted unit impulses. More generally, any discrete time signal x[n] can be represented as

x[n] = Σ_{k=-∞}^{∞} x[k] δ[n-k]

The above expression corresponds to the representation of any arbitrary sequence as a linear combination of shifted unit impulses δ[n-k], which are scaled by x[k]. Consider for example the unit step function. As shown earlier, it can be represented as

u[n] = Σ_{k=0}^{∞} δ[n-k]
Now if we know the response of a system to the unit impulse function, we can obtain the response to any arbitrary input. To see why this is so, we invoke the properties of linearity (additivity and homogeneity, i.e. superposition) and time invariance. If h[n] is the response to δ[n], then by time invariance the response to δ[n-k] is h[n-k], and by superposition

x[n] = Σ_{k=-∞}^{∞} x[k] δ[n-k]   gives the output   y[n] = Σ_{k=-∞}^{∞} x[k] h[n-k]

The left hand side can be identified as any arbitrary input, while the right hand side is the corresponding total output. The total response of the system is referred to as the CONVOLUTION SUM or superposition sum of the sequences x[n] and h[n]. The result is more concisely stated as y[n] = x[n] * h[n], where

y[n] = x[n] * h[n] = Σ_{k=-∞}^{∞} x[k] h[n-k]
Therefore, as we said earlier, an LTI system is completely characterized by its response to a single signal, i.e. its response to the unit impulse signal.
Example Related to Discrete Time LTI Systems
[Animation: graphical evaluation of the convolution sum]

Recall that the convolution sum is given by

y[n] = Σ_{k=-∞}^{∞} x[k] h[n-k]
Now we plot x[k] and h[n-k] as functions of k and not n because of the summation over k. Functions x[k] and h[k] are the same as
x[n] and h[n] but plotted as functions of k. Then, the convolution sum is realized as follows
1. Invert h[k] about k=0 to obtain h[-k].

2. The function h[n-k] is given by h[-k] shifted to the right by n (if n is positive) and to the left (if n is negative). It may appear
contradictory but think a while to verify this (note the sign of the independent variable).
In the figure below n=1

3. Multiply x[k] and h[n-k] at the same coordinates on the k axis and sum the products. The value obtained is the response at n, i.e. the value of y[n] at the particular n chosen in step 2. Now we demonstrate the entire procedure taking n = 0 and n = 1, thereby obtaining the response at n = 0, 1. The input signal x[n] and impulse response h[n] for this example are taken as x[-1] = -2, x[0] = 1, x[1] = 2 and h[0] = 2, h[1] = 1 (zero elsewhere):

Case 1: For n=0

Remember the horizontal axis has k as the independent variable. Then, taking the product x[k] h[-k] at each k and summing, we get the value of the response at n = 0.
Let h[-k] = g[k]
y[0] = ... + x[-1]g[-1] + x[0]g[0] + ... = (-2)(1) + (1)(2) = 0
Case 2: For n=1

Let h[1-k] = g[k]
y[1] = ... + x[0]g[0] + x[1]g[1] + ... = (1)(1) + (2)(2) = 5
The values are the same as those obtained previously.
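The same numbers can be checked directly with numpy's built-in convolution (an illustrative sketch; the index bookkeeping assumes x starts at n = -1 and h at n = 0):

    import numpy as np

    x = np.array([-2.0, 1.0, 2.0])   # x[-1] = -2, x[0] = 1, x[1] = 2
    h = np.array([2.0, 1.0])         # h[0] = 2, h[1] = 1

    y = np.convolve(x, h)            # the full convolution sum
    start = -1 + 0                   # first index of y = first index of x + first of h
    for n, val in enumerate(y, start=start):
        print(f"y[{n}] = {val}")
    # y[-1] = -4.0
    # y[0] = 0.0    <- matches the graphical result above
    # y[1] = 5.0    <- matches the graphical result above
    # y[2] = 2.0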
The total response, referred to as the convolution sum, need not always be found graphically. The formula can be applied directly if the input and the impulse response are given as mathematical functions. We show this with an example next.


Example

Find the total response when the input function is

. And the impulse response is given by

Applying the convolution formula we get

We now give an alternative method for calculating the convolution of the given signal x[n] and the unit impulse response. Let us see how the convolution output is the sum of weighted and shifted instances of the impulse response.
Let the given signal x[n] be

Let the Impulse Response be

Now we break the signal into its components, i.e. express it as a sum of unit impulses scaled and delayed or advanced appropriately. Simultaneously we show the output as the sum of the responses to those unit impulses, scaled by the same multiplying factors and appropriately delayed or advanced.

Summing the left and the right hand sides of the above figures, we get the input x[n] and the total response on the left and the right sides respectively. Thus we see the graphical analogue of the above formula.



Conclusion:
In this lecture you have learnt:
The two basic properties of LTI systems are linearity and shift-invariance. An LTI system is completely characterised by its impulse response.
Any discrete time signal x[n] can be represented as a linear combination of shifted unit impulses δ[n-k], scaled by x[k].
The unit step function can be represented as sum of shifted unit impulses.
The total response of the system is referred to as the CONVOLUTION SUM or superposition sum of the sequences x[n] and h[n].
The result is more concisely stated as y[n] = x[n] * h[n].
The convolution sum is realized as follows
1. Invert h[k] about k=0 to obtain h[-k].
2. The function h[n-k] is given by h[-k] shifted to the right by n (if n is positive) and to the left (if n is negative) (note the sign of
the independent variable).
3. Multiply x[k] and h[n-k] at the same coordinates on the k axis and sum; the value obtained is the response at the particular n chosen in step 2.
Congratulations, you have finished Lecture 5.

Module 1 : Signals in Natural Domain


Lecture 6 : Basic Signals in Detail
Objectives
In this lecture you will learn the following
We shall look at some of the basic signals, namely:
Unit impulse function
Unit step function
Their relation in both continuous and discrete domain
We shall also look at the sifting property of the unit impulse function.
Basic Signals in detail
We now introduce formally some of the basic signals namely
1) The Unit Impulse function.
2) The Unit Step function
These signals are of considerable importance in signals and systems analysis. Later in the course we will see these signals as the building
blocks for representation of other signals. We will cover both signals in continuous and discrete time. However, these concepts are easily
comprehended in Discrete Time domain, so we begin with Discrete Time Unit Impulse and Unit Step function.
The Discrete Time Unit Impulse Function: This is the simplest discrete time signal and is defined as

δ[n] = 1 for n = 0
δ[n] = 0 for n ≠ 0

The Discrete Time Unit Step Function u[n]: It is defined as

u[n] = 1 for n >= 0
u[n] = 0 for n < 0
Unit step in terms of unit impulse function


Having studied the basic signal operations, namely time shifting, time scaling and time inversion, it is easy to see that

δ[n] = u[n] - u[n-1]

similarly,

δ[n-k] = u[n-k] - u[n-k-1]

Summing over k = 0, 1, 2, ..., the right hand side telescopes and we get

u[n] = Σ_{k=0}^{∞} δ[n-k]

Looking directly at the unit step function, we observe that it can indeed be constructed as such a sum of shifted unit impulse functions.
The unit step function can also be expressed as a running sum of the unit impulse function:

u[n] = Σ_{m=-∞}^{n} δ[m]

We see that the running sum is 0 for n < 0 and equal to 1 for n >= 0, thus defining the unit step function u[n].
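These relations are easy to verify numerically over a finite window; here is a short illustrative sketch:

    import numpy as np

    n = np.arange(-5, 6)
    delta = (n == 0).astype(float)               # unit impulse δ[n]
    u = (n >= 0).astype(float)                   # unit step u[n]

    # δ[n] = u[n] - u[n-1]: the first difference of the step
    u_prev = np.concatenate(([0.0], u[:-1]))     # u[n-1] on this window
    print(np.array_equal(delta, u - u_prev))     # True

    # u[n] is the running sum of δ[m] for m <= n
    print(np.array_equal(u, np.cumsum(delta)))   # True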
Sifting property
Consider the product x[n] δ[n]. The delta function is non-zero only at the origin, so it follows that the signal is the same as

x[n] δ[n] = x[0] δ[n]

More generally,

x[n] δ[n-k] = x[k] δ[n-k]

It is important to understand the above expression. It means that the product of a given signal x[n] with the shifted unit impulse function is equal to the shifted unit impulse function multiplied by x[k]. Thus the product is 0 at times not equal to k, and at time k its amplitude is x[k]. So we see that the unit impulse sequence can be used to obtain the value of the signal at any time k. This is called the Sampling Property of the unit impulse function. This property will be used in the discussion of LTI systems. For example, the product x[n] δ[n-2] gives x[2] δ[n-2].
Likewise, the product x[n] u[n], i.e. the product of the signal x[n] with u[n], truncates the signal for n < 0, since u[n] = 0 for n < 0.

Similarly, the product x[n] u[n-1] will truncate the signal for n < 1.
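A quick numerical illustration of the sampling property (an illustrative sketch):

    import numpy as np

    n = np.arange(-5, 6)
    x = n ** 2 + 1.0                         # an arbitrary signal; x[2] = 5

    delta_shifted = (n == 2).astype(float)   # δ[n-2]
    product = x * delta_shifted

    # x[n] δ[n-2] = x[2] δ[n-2]: zero everywhere except n = 2,
    # where it picks out the single value x[2] = 5.
    print(np.array_equal(product, 5.0 * delta_shifted))   # True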

Now we move to the Continuous Time domain. We now introduce the Continuous Time Unit Impulse Function and Unit Step
Function.
Continuous time unit step and unit impulse functions
The Continuous Time Unit Step Function: The definition is analogous to its discrete time counterpart, i.e.

u(t) = 0 for t < 0
u(t) = 1 for t >= 0

The unit step function is discontinuous at the origin.


The Continuous Time Unit Impulse Function: The unit impulse function, also known as the Dirac Delta Function, was first defined by Dirac as

δ(t) = 0 for t ≠ 0, with ∫_{-∞}^{∞} δ(t) dt = 1
In the strict mathematical sense the impulse function is a rather delicate concept. The Impulse function is not an ordinary function. An
ordinary function is defined at all values of t. The impulse function is 0 everywhere except at t = 0 where it is undefined. This difficulty is
resolved by defining the function as a GENERALIZED FUNCTION. A generalized function is one which is defined by its effect on
other functions instead of its value at every instant of time.
Analogy from discrete domain
We will see that the impulse function is defined by its sampling property. We shall develop the theory by drawing an analogy from the discrete time domain. Consider the equation

u[n] = Σ_{m=-∞}^{n} δ[m]

The discrete time unit step function is a running sum of the delta function. The continuous time unit impulse and unit step function are then related by

u(t) = ∫_{-∞}^{t} δ(τ) dτ

The continuous time unit step function is a running integral of the delta function. It follows that the continuous time unit impulse can be thought of as the derivative of the continuous time unit step function:

δ(t) = du(t)/dt

Now here arises a difficulty: the unit step function is not differentiable at the origin. We take a different approach. Consider a signal whose value increases from 0 to 1 in a short interval of time, say of length Δ. The function u(t) can be seen as the limit of this signal as Δ tends to 0. Given this definition of the unit step function, we look at its derivative: it can be regarded as a rectangular pulse of width Δ and height 1/Δ. As Δ tends to 0, the pulse becomes narrower and higher, eventually becoming a pulse of infinitesimal width and infinite height; all throughout, the area under the pulse is maintained at unity, no matter the value of Δ. In effect, the delta function has no duration but unit area. Graphically, the function is denoted by a spear-like symbol at t = 0, and the "1" next to the arrow indicates the area of the impulse.
After this discussion we have still not cleared the ambiguity regarding the value or the shape of the unit impulse function at t = 0. We were only able to derive that the effective duration of the pulse approaches zero while maintaining its area at unity. As we said earlier, an impulse function is a generalized function and is defined by its effect on other functions and not by its value at every instant of time. Consider the product of an impulse function and a more well-behaved continuous function x(t). We take the impulse function as the limiting case of a rectangular pulse of width Δ and height 1/Δ, as earlier. As evident from the figure, the product function is 0 everywhere except in the small interval of width Δ. In this interval the value of x(t) can be assumed to be constant and equal to x(0). Thus the product function is equal to the pulse scaled by a value equal to x(0). Now as Δ tends to 0, the product tends to x(0) times the impulse function.

∫_{-∞}^{∞} x(t) δ(t) dt = x(0)

i.e. the area under the product of the signal and the unit impulse function is equal to the value of the signal at the point of impulse. This is called the Sampling Property of the delta function and defines the impulse function in the generalized function approach. As in discrete time,

x(t) δ(t) = x(0) δ(t)

Or more generally,

x(t) δ(t - T) = x(T) δ(t - T)
Also the product x(t)u(t) truncates the signal for t < 0.
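The limiting behaviour can also be checked numerically. Here is an illustrative sketch approximating δ(t) by a rectangular pulse of width Δ and height 1/Δ; the integral of x(t) δ_Δ(t) then reduces to the average of x over [0, Δ], which tends to x(0):

    import numpy as np

    x = lambda t: np.cos(t) + t ** 2        # a well-behaved test function; x(0) = 1

    for width in (1.0, 0.1, 0.01):          # the pulse width Δ
        t = np.linspace(0.0, width, 10001)
        # The pulse has height 1/Δ on [0, Δ], so the integral of
        # x(t) . (1/Δ) over [0, Δ] is just the average of x there.
        print(width, np.mean(x(t)))
    # 1.0  1.1747...
    # 0.1  1.0016...
    # 0.01 1.00003...   -> tends to x(0) = 1 as Δ -> 0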


Conclusion:
In this lecture you have learnt:
The unit impulse function is defined as: δ[n] = 1 for n = 0, and δ[n] = 0 otherwise.
The unit step function is defined as: u[n] = 1 for n >= 0, and u[n] = 0 otherwise.
Sifting Property: the product of a given signal x[n] with the shifted unit impulse function is equal to the shifted unit impulse function multiplied by x[k]: x[n] δ[n-k] = x[k] δ[n-k].

Remember generalized functions.


Congratulations, you have finished Lecture 6.

Module 1 : Signals in Natural Domain


Lecture 7 : Linear Shift Invariant Systems
Objectives
In this lecture you will learn the following
Linear Shift-Invariant systems, and their importance
The discrete time unit impulse
Signals as a linear combination of shifted unit impulses
The unit impulse response
Obtaining an arbitrary response from the unit impulse response for LSI systems
Linear Shift-Invariant systems:
Linear Shift-Invariant systems, called LSI systems for short, form a very important class of practical systems, and hence are of interest
to us. They are also referred to as Linear Time-Invariant systems, in case the independent variable for the input and output signals is
time. Remember that linearity means that if y1(t) and y2(t) are the responses of the system to signals x1(t) and x2(t) respectively, then the response to a.x1(t) + b.x2(t) is a.y1(t) + b.y2(t).
Shift invariance implies that the response of the system to x1(t - t0) is given by y1(t - t0), for all values of t and t0. Linear systems are of interest to us for primarily two reasons: first, several real-life systems can be well approximated by linear systems. Second, linear systems come with several properties which make their analysis simple. Similarly, shift-invariant systems allow us to use simpler math to analyse the system. As we proceed with our analysis, we will point out cases where some results (which are rather intuitive) are valid only for LSI systems.
The unit impulse (discrete time):
How do we go about studying the responses of systems to various signals? It would be great if we could study the response of the system to one (or a few) signal(s) and predict the responses to all signals. It turns out that LSI systems can in fact be treated in such a manner. The signal whose response we study is the unit impulse signal. If we know the response of the system to the unit impulse (called, for obvious reasons, the unit impulse response), then the system is completely characterized - we can find the response of the system to all possible inputs. This follows rather intuitively for discrete signals, so let us begin our analysis with discrete signals. In discrete time, the unit impulse is a signal which has zero value everywhere except at one point, where its value is 1. Typically, this point is taken to be the origin (n=0).

The unit impulse is denoted by the Greek letter delta: δ[n]. A shifted impulse, for example one centred at n = 4, is denoted δ[n-4].

Note: we are close to invoking the shift invariance of the system here - we have shifted the signal δ[n] by 4 units to get δ[n-4].

We can thus use δ[n-k] to pick up a certain point from a discrete signal: suppose our signal x[n] is multiplied by δ[n-k]; then the resulting signal x1[n] = x[n] δ[n-k] is zero at all points except n = k. At this point, the value x1[k] equals the value x[k].

Now, we can express any discrete signal as a sum of several such terms:

x[n] = Σ_{k=-∞}^{∞} x[k] δ[n-k]
This may seem redundant now, but later we shall find this notation useful when we take a look at convolutions etc. Here, we also want to
introduce a convention for denoting discrete signals. For example, the signal x[n] and its representation are shown below :

The number below the arrow shows the starting point of the time sequence, and the numbers above are the values of the dependent
variable at successive instants from then onwards. We may not use this too much on the web site, but this turns out to be a convenient
notation on paper.
The unit impulse response:
The response of a system to the unit impulse is of importance, for as we shall show below, it characterizes the LSI system completely.
Let us consider the following system and calculate its unit impulse response: y[n] = x[n] - 2x[n-1] + 3x[n-2]. Now, we apply a unit impulse x[n] = δ[n] to the system and calculate the response:

n          x[n] = δ[n]    x[n-1]    x[n-2]    y[n]
0          1              0         0         1
1          0              1         0         -2
2          0              0         1         3
other n    0              0         0         0

So the unit impulse response h[n] is the sequence ..., 0, 1, -2, 3, 0, ... starting with h[0] = 1.
The graphical calculation and the response are as follows:


Arbitrary input signals:
Now let us consider some other input, say x[0] = 1, x[1] = 1 and x[n] = 0 for n other than 0 and 1. What will be the response of the above LSI system to this input? We calculate the response in a table as below, using y[n] = x[n] - 2x[n-1] + 3x[n-2] and the decomposition x[n] = δ[n] + δ[n-1]:

n    y1[n] from δ[n]    y2[n] from δ[n-1]    y[n] = y1[n] + y2[n]
0    1                  0                    1
1    -2                 1                    -1
2    3                  -2                   1
3    0                  3                    3

Ah! What we have actually done is applied the additivity (linearity), homogeneity (linearity) and shift invariance properties of the system to get the output. First, we decomposed the input signal as a sum of known signals: the first being the unit impulse δ[n], and the second being derived from it by shifting it by 1, i.e. δ[n-1]. Thus, our input signal is as shown in the figure below. Then, we invoke the LSI properties of the system to get the responses to the individual signals: the first calculation is shown above, while the calculation of the response to δ[n-1] is shown below.

Finally, we add the two responses to get the response y[n] of the system to the input x[n]. The image below shows the final response with an alternative method of calculating it:
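The whole table can be verified in a few lines of Python (an illustrative sketch):

    import numpy as np

    h = np.array([1.0, -2.0, 3.0])   # impulse response of y[n] = x[n] - 2x[n-1] + 3x[n-2]
    x = np.array([1.0, 1.0])         # the input x[0] = 1, x[1] = 1

    # Superposition: the response is h (from δ[n]) plus h shifted by 1 (from δ[n-1])
    y1 = np.concatenate((h, [0.0]))
    y2 = np.concatenate(([0.0], h))
    print(y1 + y2)                   # [ 1. -1.  1.  3.]

    # The same answer via the convolution sum
    print(np.convolve(x, h))         # [ 1. -1.  1.  3.]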


This brings us up to the concept of convolutions, covered in detail in a later section.

Conclusion:
In this lecture you have learnt:
Discrete time LSI systems and their importance
The discrete time unit impulse as a building block
Expressing signals as a linear combination of shifted unit impulses
What is the unit impulse response?
Expressing arbitrary responses as a linear combination of shifted unit impulse responses

Congratulations, you have finished Lecture 7.

Module 1 : Signals in Natural Domain


Lecture 8 : Classification of Systems
Objectives
In this lecture you will learn the following
We shall classify systems under the following categories and tabulate their system properties
Continuous-time systems
Discrete-time systems
Hybrid : Continuous-Discrete systems
Hybrid : Discrete-Continuous systems

Properties of discrete variable systems


We have classified systems into three classes - Continuous-time systems, Discrete-time systems and Hybrid systems. Now that we have
introduced some system properties, let us see what properties are relevant to which classes of systems.
Let us first consider examples of different classes of systems.

Continuous-time systems (continuous input - continuous output):
1. A tree swaying in the wind: the wind - described by its speed and direction - is a continuous-time input, and the movement of the branches is a continuous-time output signal.

Discrete-time systems (discrete input - discrete output):
1. Logic circuits: discrete logic inputs are processed to give discrete logic outputs.

Hybrid systems (continuous input - discrete output):
1. The eye: it sees a continuous image, but sends a discrete map to the brain.
2. A computer microphone: the sampler converts a continuous time signal into a discrete time signal. (The sampler forms an important system in today's digital world - we shall look at it in great detail later in the course.)

Hybrid systems (discrete input - continuous output):
1. The brain: it gets a discrete map from the eye, and completes a smooth, continuous picture.
2. A computer speaker and sound card: the digital music output given by the computer is smoothed out and played as a continuous waveform.

Properties of systems
In the early parts of this course, we shall concern ourselves mainly with the first two classes, viz. continuous-time and discrete-time systems, but later we shall deal with hybrid systems as well. So, we find it worthwhile here to take a look at what properties the systems of the various classes can have:

Property: Memory
- Continuous input, continuous output: Yes, if the input and output are of the same type.
- Discrete input, discrete output: Yes, if the input and output are of the same type.
- Hybrid (continuous-discrete / discrete-continuous): No. However, we can define a restricted version of memory if there is a correspondence between the input and output variables (e.g. continuous and discrete time).

Property: Causality
- Continuous input, continuous output: Yes, if the input and output are of the same type.
- Discrete input, discrete output: Yes, if the input and output are of the same type.
- Hybrid: No. A restricted version of causality can be defined: if the inputs are the same up to an instant corresponding to a discrete variable, then the outputs of a causal system are the same.

Property: Shift invariance (time invariance)
- Continuous input, continuous output: Yes, if the input and output are of the same type.
- Discrete input, discrete output: Yes, if the input and output are of the same type.
- Hybrid: No. We can define shift invariance in cases where the inputs are shifted by certain quanta corresponding to the spacing of the discrete variable.

Property: Stability
- Yes for all classes: continuous-continuous, discrete-discrete, and hybrid.

Property: Linearity
- Yes for all classes: continuous-continuous, discrete-discrete, and hybrid.

Note that this is a summary of properties which systems of each class can have; they are not necessary properties of any particular system. Hence, we can find a continuous-time system that is stable (though there may be continuous-time systems which are unstable), but it is impossible to apply the concept of memory to a discrete-continuous system without modifying the concept itself.

Conclusion:
In this lecture you have learnt:
Memory, causality and shift invariance are defined only if the input and output signals are of the same type, i.e. both continuous or both discrete.
Stability and linearity do not require the input and output signals to be of the same type.
Congratulations, you have finished Lecture 8.

Module 1 : Signals in Natural Domain


Lecture 9 : Continuous LTI Systems
Objectives
In this lecture you will learn the following
We shall derive the response of an LTI system to any arbitrary continuous input x(t) by expressing x(t) in terms of impulses.
We shall mathematically calculate the 'impulse response' of the RC (resistive and capacitive) system.
We shall understand the use of generalized functions
We shall find the response for an arbitrary continuous time signal as the superposition of scaled and shifted pulses using
convolution integral.
We shall also look into convolution done in graphical manner.

Continuous Time LTI Systems


In this section our goal is to derive the response of an LTI system to any arbitrary continuous input x(t). In complete analogy with the discussion on discrete time analysis, we begin by expressing x(t) in terms of impulses. In discrete time we represented a signal in terms of scaled and shifted unit impulses. In continuous time, however, the unit impulse function is not an ordinary function (i.e. it is not defined at all points, and we prefer to call the unit impulse function a "mathematical object"); it is a generalized function (it is defined by its effect on other signals).
Recall the previous discussion on the development of the unit impulse function. It can be regarded as the idealization of a pulse of width Δ and height 1/Δ.

One can arrive at an expression for an arbitrary input, say x(t) by scaling the height of the rectangular impulse by a factor such that it's
value at t coincides with the value of x(t) at the mid-point of the width of the rectangular impulse. The entire function is hence divided
into such rectangular impulses which give a close approximation to the actual function depending upon how small the interval is taken to
be. For example let x(t) be a signal. It can be approximated as :

The given input x(t) is approximated with such narrow rectangular pulses, each scaled to the appropriate value of x(t) at the corresponding t (which lies at the midpoint of the base of width Δ). This is called the staircase approximation of x(t). In the limit, as the pulse-width (Δ) approaches zero, the rectangular pulse becomes finer in width and the function x(t) can be represented in terms of impulses by the following expression,

This summation is an approximation. As Δ approaches zero, the approximation increases in accuracy, and when Δ becomes infinitesimally small, the error becomes zero and the above summation is converted into the following integral expression.

For example, take x(t) = u(t)

since u(t) = 0 for t < 0 and u(t) = 1 for t > 0. In complete analogy with the development of the sampling property of the discrete unit impulse, we have,

This is known as the Sifting Property of the continuous time impulse. Note that the unit impulse puts unit area into zero width.
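As a quick numerical illustration, the sifting property can be checked by integrating a test signal against a unit-area rectangular pulse of shrinking width. The following is a minimal Python/NumPy sketch; the test signal and pulse widths are our own choices, not from the text.

    import numpy as np

    def sift(x, delta, t0=0.0):
        # Riemann-sum approximation of the integral of x(t) times a unit-area
        # rectangular pulse of width delta centred at t0 (an approximation to
        # the unit impulse delta(t - t0)).
        t = np.linspace(t0 - delta / 2, t0 + delta / 2, 10001)
        dt = t[1] - t[0]
        return np.sum(x(t) / delta) * dt

    x = lambda t: np.cos(t) + t ** 2          # any smooth test signal
    for delta in (1.0, 0.1, 0.01):
        print(delta, sift(x, delta, t0=0.5))  # approaches x(0.5) ~ 1.1276

As the pulse narrows, the integral approaches x(t0), which is exactly the sifting property.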

The Convolution Integral


We now want to find the response for an arbitrary continuous time signal as the superposition of scaled and shifted pulses just as we did
for discrete time signal. For a continuous LSI system, let h(t) be the response to the unit impulse signal. Then,

by shift invariance,

by homogeneity,

by additivity (Note: we can apply additivity to infinitely many terms only if the sum/integral converges),

This is known as the continuous time convolution of x(t) and h(t). This gives the system response y(t) to the input x(t) in terms of unit
impulse response h(t). The convolution of two signals h(t) and x(t) will be represented symbolically as

where as previously seen,

To explain this graphically,


Consider the following input, which (as explained above) can be considered to be an approximation by a series of rectangular impulses. It can be represented using the convolution sum as

Hence, by merely knowing the impulse response one can predict the response of the signal x(t) by using the given formula for
convolution.
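This recipe translates directly into computation: sample x(t) and h(t) on a fine grid of spacing dt, and the convolution integral becomes a scaled discrete convolution. A minimal NumPy sketch follows; the grid, input and impulse response here are illustrative assumptions, not the figures of the text.

    import numpy as np

    dt = 1e-3                               # grid spacing (our choice)
    t = np.arange(0, 5, dt)
    x = np.where(t < 1, 1.0, 0.0)           # illustrative input: rectangular pulse
    h = np.exp(-t)                          # illustrative impulse response
    y = np.convolve(x, h)[:len(t)] * dt     # Riemann-sum approximation of (x*h)(t)

The factor dt turns the discrete sum into an approximation of the integral, in exact analogy with the staircase argument above.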
RC System
Consider an RC system consisting of a resistor and a capacitor. We have to find out the response of this system to the unit impulse.

First let us understand the response to


If the input is the unit step function u(t) then the output of the system will be
S(t). Hence we can say that the response to

will be given as follows:

Now as

the response of

will be equal to h(t)

. Let us call this output of the system

Taking limit as

on both sides and using

we get

By the sifting property we get

Hence, given the unit step response S(t), we have been able to calculate the continuous impulse response of the system. Next we shall see how we can get the unit step response from the impulse response of the same system.

Impulse response of RC system


We have seen how we can calculate the impulse response from the unit step response of a system. Now we shall calculate the unit step response, or in general the response to any input signal, given the impulse response. We shall use convolution to obtain the required result.
The unit impulse

, when fed into the RC system gives the corresponding impulse response h(t), which in this case is given by

We shall now find the response to the input signal u(t).


The convolution of an input signal x(t) and the impulse response of a system h(t) is given by the formula:

But in this case x(t) = u(t), and so the output signal y(t) will be given by :

Now we have

if and only if .

In all other cases

Hence the given equation for y(t) will now simplify to :

Solving this, we get:

which is the response to the unit step function.


Hence we have now shown how to calculate the impulse response given the unit step response, and also any response given the impulse response. Moreover, we can now say that given either the unit step response or the impulse response, we can calculate the response to any other input signal.
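The RC result above can be cross-checked numerically. The sketch below assumes the standard first-order RC lowpass, whose impulse response is h(t) = (1/RC) e^(-t/RC) u(t); the component values are our own choices. Convolving h with the unit step reproduces the closed-form step response 1 - e^(-t/RC).

    import numpy as np

    R, C = 1e3, 1e-6                      # illustrative component values
    tau = R * C
    dt = tau / 1000
    t = np.arange(0, 8 * tau, dt)

    h = (1 / tau) * np.exp(-t / tau)      # assumed RC impulse response
    u = np.ones_like(t)                   # unit step input for t >= 0

    s = np.convolve(u, h)[:len(t)] * dt   # step response via convolution
    s_closed = 1 - np.exp(-t / tau)       # closed-form step response
    print(np.max(np.abs(s - s_closed)))   # small discretisation error (~1e-3)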

Convolution Operation

We now interpret the convolution (x*h)(t) as the common (shaded) area enclosed under the curves x(v) and h(t-v) as v varies over
the entire real axis.
x(v) is the given input function, with the independent variable now called v. h(t-v) is the impulse response obtained by inverting h(v)
and then shifting it by t units on the v-axis.
As t increases, h(t-v) can be considered to be a train moving towards the right, and for each t, the area under the product of x(v) and h(t-v) is the value of y(t) at that t.
Conclusion:
In this lecture you have learnt:
The given input x(t) is approximated with narrow rectangular pulses, each scaled to the appropriate value of x(t) at the corresponding t (which lies at the midpoint of the base of width Δ). This is called the staircase approximation of x(t).
By merely knowing the impulse response one can predict the response of the signal x(t) by using the given formula for
convolution.
If we are given unit-step response, we can calculate unit-impulse response by differentiating the unit-step response .
If we are given unit-impulse response, we can calculate unit-step response by taking running integral of unit-impulse response .
The convolution (x*h)(t) is the common (shaded) area enclosed under the curves x(v) and h(t-v)as v varies over the entire real
axis.
As t increases, h(t-v) can be considered to be a train moving towards the right and at each point on the v -axis the common area
under the product x(v) and h(t-v) is the value of y(t) at that t.

Congratulations, you have finished Lecture 9.

Module 1 : Signals in Natural Domain


Lecture 10 : Properties of LTI Systems
Objectives
In this lecture you will learn the following
We shall look into the properties of convolution (as shown below) in both continuous and discrete domain
Associative
Commutative
Distributive properties
As an LTI system is completely specified by its impulse response, we look into the conditions on the impulse response for the LTI system to obey properties like memory, stability, invertibility, and causality.
Properties of LTI System
In the preceding chapters, we have already derived expressions for discrete as well as continuous time convolution operations.
Discrete :

Continuous :

We shall now discuss the important properties of convolution for LTI systems.
1) Commutative property :
By the commutative property, the following equations hold true:
a) Discrete time:

Proof : We know that

Hence we make the following substitution (n - k = l )


The above expression can be written as

So it is clear from the derived expression that

Note :
1. 'n' remains constant during the convolution operation so 'n' remains constant in the substitution n-k = l even as 'k' and 'l' change.
2. l goes from −∞ to +∞; this would not have been so had 'k' been bounded (e.g. 0 < k < 11 would make n - 11 < l < n).

b) Continuous Time:

Proof:

Thus we have proved that convolution is commutative in both discrete and continuous time.
It follows that the following two systems, one with input signal x(t) and impulse response h(t), and the other with input signal h(t) and impulse response x(t), both give the same output y(t).
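For finite-length discrete signals, commutativity is a one-line numerical check (a sketch; the two sequences are arbitrary):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    h = np.array([0.5, -1.0, 0.25, 4.0])
    print(np.allclose(np.convolve(x, h), np.convolve(h, x)))   # True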

2) Distributive Property :
By this property we mean that convolution is distributive over addition.
a) Discrete :
b) Continuous :
A parallel combination of LTI systems can be replaced by an equivalent LTI system which is described by the sum of the individual
impulse responses in the parallel combination.

3) Associative property
a) Discrete time :

Proof : We know that

Making the substitutions: p = k ; q = (l - k) and comparing the two equations makes our proof complete.
Note: As k and l go from −∞ to +∞ independently of each other, so do p and q; however, p depends on k, and q depends on l and k.


b) Continuous time :

Let's substitute

The Jacobian for the above transformation is

Doing some further algebra helps us see equation (2) transforming into equation (1), i.e. essentially they are the same. The limits are also the same. Thus the proof is complete.
Implications
This property (Associativity) makes the representation y[n] = x[n]*h[n]*g[n] unambiguous.
From this property, we can conclude that the effective impulse response of a cascaded LTI system is given by the convolution of the individual impulse responses.

Consequently the unit impulse response of a cascaded LTI system is independent of the order in which the individual LTI systems are
connected.
Note: All the above three properties are certainly obeyed by LTI systems, but they do not in general hold for non-LTI systems, as seen from the following example:

4) LTI systems and Memory


Recall that a system is memoryless if its output depends on the current input only. From the expression :

It is easily seen that y[n] depends only on x[n] if and only if

Hence


5) Invertibility :
A system is said to be invertible if there exists an inverse system which when connected in series with the original system produces an
output identical to the input.
We know that

6) Causality :
a) Discrete time:
{ By Commutative Property }

In order for a discrete LTI system to be causal, y[n] must not depend on x[k] for k > n. For this to be true h[n-k]'s corresponding to the x[k]'s
for k > n must be zero. This then requires the impulse response of a causal discrete time LTI system satisfy the following conditions :

Essentially the system output depends only on the past and the present values of the input.
Proof : ( By contradiction )
Suppose, in particular, that h[k] is non-zero for some k < 0.
For the system to be causal we need: whenever x[n] = 0 for all n <= 0, we must have y[0] = 0.
Now we take a signal defined as x[n] = h*[-n] for n > 0; this signal is zero elsewhere. Therefore we get the following result (refer to the convolution sum above):
y[0] = sum over k < 0 of h[k] x[-k] = sum over k < 0 of |h[k]|^2, which is non-zero under the above assumption.
Thus y[0] depends on x[k] for k > 0 unless h[k] = 0 for all k < 0; our assumption stands void. So we conclude that y[n] cannot be independent of future inputs unless h[k] = 0 for all k < 0.

Note : Here we ensured a non-zero summation by choosing x[n-k]'s as conjugate of h[k]'s.


b) Continuous time :

In order for a continuous LTI system to be causal, y(t) must not depend on x(v) for v > t . For this to be true h(t-v)s corresponding to the x(v)s for v > t
must be zero.
This then requires the impulse response of a causal continuous time LTI system satisfy the following conditions :

As stated before in the discrete time analysis,the system output depends only on the past and the present values of the input.
Proof : ( By contradiction )
Suppose there exists some t0 > 0 such that h(-t0) is non-zero.

Now consider
Since,

the system is not causal, a contradiction. Hence,

7) Stability :
A system is said to be stable if its impulse response satisfies the following criterion :

Theorem:
Stability

, in the Discrete domain, OR

Stability

, in the Continuous domain.

Proof of sufficiency:
Suppose

We have

If x[n] is bounded i.e.

, then:

But as
Proof of Necessity:
Take any n.

Choose the input as follows: wherever h[k] is non-zero, set x[n-k] = h*[k] / |h[k]|; wherever h[k] = 0, set x[n-k] = 0. This input is bounded with bound 1.

Then,

Hence

But since the system is stable

Hence if y[n] is bounded then the condition

, which in turn implies that


must hold.

Hence Proved

A similar proof follows in continuous time, with the summation replaced by an integral.
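In discrete time, the criterion amounts to absolute summability of h[n], which can be probed numerically on a truncated impulse response. This is only a sketch: the truncation length and threshold are our own choices, and for a genuinely infinite response the sum must be examined analytically.

    import numpy as np

    def looks_bibo_stable(h, threshold=1e12):
        # BIBO criterion for discrete LTI systems: sum over k of |h[k]| is finite.
        return np.sum(np.abs(h)) < threshold

    n = np.arange(500)
    print(looks_bibo_stable(0.9 ** n))   # True: geometrically decaying h
    print(looks_bibo_stable(1.1 ** n))   # False: growing h fails the criterion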

Conclusion:
In this lecture you have learnt:
Convolution obeys commutative, distributive (over addition) and associative properties in both continuous and discrete
domains.
Commutativity implies the system with input signal x(t) and impulse response h(t) and the other with input signal h(t) and impulse
response x(t) both give the same output y(t).
Distributivity implies a parallel combination of LTI systems can be replaced by an equivalent LTI system which is described by
the sum of the individual impulse responses in the parallel combination.
Associativity implies the unit impulse response of a cascaded LTI system is independent of the order in which the individual LTI
systems are connected.
A system is memoryless if and only if h[n] = 0 for all non-zero n.
An LTI system is invertible if the convolution of its impulse response with that of the inverse system results in the unit impulse.
For a causal discrete time LTI system, h[n] = 0 for all n < 0 (similarly for continuous time).
For a stable system, the impulse response must be absolutely summable (in discrete time) or absolutely integrable (in continuous time).
Congratulations, you have finished Lecture 10.

Module 1 : Signals in Natural Domain


Lecture 11 : Differential and Difference Equations
Objectives

In this lecture you will learn the following


We shall look at the systems described by differential and difference equations.
We shall look at the properties of the Derivative Operator system
We shall look at the systems defined by linear constant coefficient differential equations for continuous variables (and for
discrete variables, the corresponding equations are called the linear constant coefficient difference equations).
We shall even look at the impulse response of a differentiator.
We shall also look at systems defined by integer derivative and integer delay very briefly.
Differential and Difference Equations
We look at a special class of LSI systems that are frequently encountered in real life applications, namely those that are
described by differential and difference equations. We first consider the derivative operator as a system:

The above system is an LSI system.


Proof:
The given system is obviously linear due to the linearity of the derivative operator. Also shift invariance can be easily shown as
below:
Let h(t) = t-t 0 and g(t) = x(t-t 0 ) = x(h(t)). Let the output of the system to x(t) be y(t). ( y(t) = d x(t) / dt )
Now g'(t) = x'(h(t)).h'(t) = x'(t-t 0 )
Thus input x(t-t 0 ) gives an output y(t-t 0 ).
Hence the system is LSI.
Note : Also it is now clearly seen that if the input to an LSI system is differentiated, then the output of that system is also
differentiated. This property may be proved by taking the limit of the expression: {x(t+h)-x(t)}/h as 'h' tends to zero and using
the linearity and shift-invariance of LSI systems.
Properties of the Derivative Operator system:
1. Cascade of systems: Suppose we give the output of the derivative operator system as input to another LSI system 'A'. Let
y(t) be the output of the combined system for some given input x(t). Now suppose we give x(t) as input to the system 'A' first
and then pass its output to the derivative operator system. Let the final output now be z(t). Then from the property that
cascading of LSI systems is independent of the order of cascading, we get y(t) = z(t).

2. Memory of the System: The system obviously possesses memory, as the derivative operator requires the input over an interval around the point, not just the current value, to be defined.
3. Causality of the System: To answer this we must consider the left, right and center derivatives separately. Clearly the left derivative is causal, while the center and right derivatives may or may not be so. However, for a differentiable function all three derivatives are equal, and the system is indeed causal.
4. Stability of the System: Consider the input signal shown below. Clearly we see that a bounded input does not lead to a
bounded output which becomes obvious at points where the derivative of the input signals tends to infinity. Thus the system is
not stable.

x(t)

Exercise: Give an example of a bounded input signal such that its derivative is not bounded as time tends to infinity.
Consider x(t) = sin(t^2) (bounded).
Then x'(t) = 2t cos(t^2) (unbounded). (A numerical check of this example appears after this list.)
5. Invertibility of the System: Is the derivative operator invertible? No, because when we consider the class of constants as input
then the output is always zero. Thus the derivative operator is not one-to-one. However, the system is invertible up to an additive constant.
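Here is the numerical check promised in the exercise above, a minimal sketch (the time grid and range are our own choices): the bounded signal sin(t^2) has a derivative whose peaks grow without bound as t increases.

    import numpy as np

    t = np.linspace(0, 20, 400_001)   # fine grid (our choice)
    x = np.sin(t ** 2)                # bounded input: |x(t)| <= 1
    dx = np.gradient(x, t)            # numerical derivative, ~ 2 t cos(t^2)

    print(np.abs(x).max())            # ~ 1.0
    print(np.abs(dx).max())           # ~ 2 * 20 = 40, growing with the range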
Linear Constant Coefficient Differential and Difference Equations
Equations of the form shown below are called linear constant coefficient differential equations:

The above description is in the implicit form. Hence it does not yield a unique interpretation. But we can make the system LSI by
adding the following conditions:
1. Interpret the equation as holding for all time.
2. If we are concerned with only limited interval of time, then impose zero initial conditions.
In order to solve a differential equation we must specify one or more auxiliary conditions. Auxiliary conditions are required to characterize the system completely. Different choices for the auxiliary conditions can lead to different relationships between the input and output. We want the system to be LSI, and hence we specify initial rest conditions.
We specify the initial rest conditions as follows:
For t <= t0, if x(t) = 0 then we assume that y(t) = 0, and therefore the response for t > t0 can be calculated from the differential equation with initial conditions

Note: It should be noted that in the initial rest conditions, t0 is not a fixed point in time but rather depends on the input x(t). We now prove that under the initial rest conditions the system is indeed LSI.
We first prove linearity. Suppose x1(t) and x2(t) are two arbitrary signals such that x1(t) = 0 for t < t1 and x2(t) = 0 for t < t2. Let y1(t) and y2(t) be the system outputs for x1(t) and x2(t) respectively. Then we have to prove that the system output for the input x3(t) = a.x1(t) + b.x2(t) is y3(t) = a.y1(t) + b.y2(t).
Without loss of generality we can assume that t1 < t2. Using the initial rest conditions we see that for y3(t), t0 = t1. Due to the linearity of the derivative operator, a.y1(t) + b.y2(t) satisfies the differential equation. Also, a.y1(t) + b.y2(t) satisfies the initial conditions with t0 = t1. But by the Uniqueness Theorem for differential equations, the equation has a unique solution. Hence we have y3(t) = a.y1(t) + b.y2(t). Thus we have established linearity.
Now we prove shift invariance.
Suppose x1(t) is an arbitrary signal such that x1(t) = 0 for t < t0. Let x2(t) = x1(t - T), and let y1(t) and y2(t) be the system outputs for x1(t) and x2(t) respectively. Then we have to show that y2(t) = y1(t - T).
We proceed as we did previously. y1(t - T) satisfies the differential equation because of the shift invariance of the derivative operator. y2(t) satisfies the initial conditions with t0 replaced by t0 + T, and y1(t) satisfies the initial conditions with t0 itself. From this it is easy to see that y1(t - T) satisfies the initial conditions with t0 replaced by t0 + T. Finally, by invoking the uniqueness theorem we can conclude that y2(t) = y1(t - T), which is what we sought to prove.
Also note that the above system is causal. This is clear from the following argument:
Consider two inputs p(t) and q(t) such that p(t) = q(t) for t < T. Let r(t) and s(t) be their respective outputs. Now let x(t) = p(t) - q(t). Thus x(t) = 0 for t < T. From the initial rest conditions we get the output for x(t) as y(t) = 0 for t < T. But from the linearity property we have y(t) = r(t) - s(t) = 0 for t < T. Thus r(t) = s(t) for t < T, and the system is causal.
Example: Consider the following RC system. If voltage across C is 2V initially, show that the system is not LSI.

Take the following RC System:

If the capacitor has 2V initially across its terminals, then the above system is not initially at rest.
Let the input x0 be applied, and then the input x0 + x0 (the sum of two copies of x0), as shown below. Now if the system were linear, the output voltage across the capacitor at time t = 0 for the doubled input would have been 2V + 2V = 4V, but the initial voltage across the capacitor will still be only 2V.

Hence the system is not LTI.


For discrete variables, the corresponding equation is called the linear constant coefficient difference equation. Instead of
derivatives we have delays as shown below.

The above system is causal too.
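As a concrete sketch of a linear constant-coefficient difference equation, consider the first-order recursion y[n] - a.y[n-1] = b.x[n] computed under initial rest (the coefficients are illustrative choices):

    import numpy as np

    def first_order(x, a=0.5, b=1.0):
        # y[n] - a*y[n-1] = b*x[n], solved recursively with initial rest (y[-1] = 0)
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = a * (y[n - 1] if n > 0 else 0.0) + b * x[n]
        return y

    x = np.zeros(10); x[0] = 1.0       # unit impulse input
    print(first_order(x))              # h[n] = 0.5**n, the impulse response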


Impulse response of a differentiator:

If you convolve x(t) with h(t) then you get the following:

Thus we see that though we cannot interpret the object h(t) pointwise, its behavior under convolution with x(t), which yields the derivative, can be understood.

In the above analysis we have come across certain mathematical tools of interest known as singularity functions.
While a function is usually defined at every value of the independent variable, the primary importance of the unit impulse is not what it is at each value of t, but rather what it does under convolution. So, from the point of view of linear system analysis, we alternatively define the unit impulse as a signal for which
x(t) = x(t) *

, for any x(t)

All the properties of the unit impulse that we need can be obtained from the operational definition of the unit impulse.
Note:
The above definition of the unit impulse follows from the fact that the impulse response of the identity system is the unit impulse itself, and the output for any input x(t) is the convolution of x(t) with the unit impulse. But the output of the identity system is the input x(t) itself, and hence
x(t) = x(t) *

Singularity functions are functions which can be defined operationally in terms of their behavior under convolution. Consider the derivative system. The impulse response of this system is the derivative of the unit impulse, and it is called the unit doublet. It is denoted by u1(t). Its working definition is
dx(t)/dt = x(t) * u1(t), for any signal x(t).

Similarly we define u2(t), the second derivative of the unit impulse, by
d2x(t)/dt2 = x(t) * u2(t)

It is easy to see that u2(t) = u1(t) * u1(t).


In general uk(t), for k in N, is defined as
uk(t) = u1(t) * ... * u1(t), k times.
Using the above notation, we denote the unit impulse itself by u0(t).

We denote the running integral of the unit impulse by u-1(t), which is the unit step function. Similarly:

u-2(t) = running integral of u-1(t) = t . u(t)

and in general
u-n(t) = ( t^(n-1) / (n-1)! ) . u(t)
All u-n are well defined for n in N.

Comparison between Continuous and Discrete Systems


Let us first understand what is meant by integer derivative and integer delay.
1. Integer Derivative: Given x(t) we find dx(t)/dt.
2. Integer Delay: Given x[n] we find x[n - N], where N is an integer.
Continuous Systems:
  Integer derivative is exactly realizable.
  Integer delay is not exactly realizable.

Discrete Systems:
  Integer derivative is not realizable.
  Integer delay is realizable.

Note: Realizability implies giving a physical structure to the solution with known elements. We shall see later why integer delay is
not exactly realizable in a continuous system.
Conclusion:
In this lecture you have learnt:
The derivative operator system is LSI.
In case of a derivative operator system, a bounded input does not lead to a bounded output which becomes obvious at
points where the derivative of the input signals tends to infinity. Thus the system is not stable.
The derivative operator system is invertible upto an additive constant.
The systems defined by linear constant coefficient differential equations for continuous variables (and for discrete
variables, the corresponding equations are called the linear constant coefficient difference equations) are causal.
Integer delay is not exactly realizable in a continuous system.
Congratulations, you have finished Lecture 11.

Module 2 : Signals in Frequency Domain


Lecture 12 : Introduction to Transformations
Objectives
In this lecture you will learn the following
Why do we need transforms?
What is the Fourier transform?
Signals treated as vectors.
Vectors of countable and uncountable infinite dimensions
Transforms as an inner product.
Eigensignals and Eigenvalues of a system
Introduction : A very basic concept in Signal and System analysis is Transformation of signals. It involves a whole new paradigm of
viewing signals in a context different from the natural domain of their occurrence. For example, the transformation of a signal from the
time domain into a representation of the frequency components and phases is known as Fourier analysis. Why do we need
transformations?
We can't analyse all the signals that we want to, in their existing domain. Transforming a signal means looking at a signal from a
different angle so as to gain new insight into many properties of the signal that may not be very evident in their natural domain.
Transformation is usually implemented on an independent variable.
Examples:
1. A doctor examines a patient's heart beat, which is a function of time in real world but is represented as a function of space for easier
diagnosis of the problem. The job is done by ECG ( Electro-Cardio-Gram ) which shows the variations in the pulse rate in spatial
coordinates.
2. For a musician, all the ragas played are actually in time domain but frequency is more important for him than time. Why frequency has
more value must be somewhat intuitive as the variations in sound are due to change in frequency.
3. For a circuit, the input and output signals are functions of time. If we need to study or monitor these signals, we use an Oscilloscope to
display these signals using spatial coordinates.

Joseph Fourier

Fourier Transform: Every periodic signal can be written as a summation of sinusoidal functions of frequencies which are multiples of a
constant frequency (known as fundamental frequency). This representation of a periodic signal is called the Fourier Series. An
aperiodic signal can always be treated as a periodic signal with an infinite period. The frequencies of two consecutive terms are
infinitesimally close and summation gets converted to integration. The resulting pattern of this representation of an aperiodic signal is
called the Fourier Transform. Signals Treated as Vectors Any vector in N-dimensional space can be fully specified by a set of N numbers
(i.e. its components in various directions). Similarly we can also treat signals in continuous and discrete times as special cases of vectors
with infinite dimensions. Why do we need signals to be treated as vectors?
The mathematical analysis of vectors is highly advanced compared to signals. Treating signals as vectors helps us to attribute many
additional properties to them. Moreover we do feel comfortable taking signals as vectors in a problem involving number of signals.
Countable Infinity:
A set is called countably infinite if and only if all its elements have a 1-1 correspondence with the set of natural numbers or any other countably infinite set. We can easily see that the set of integers satisfies this property. Now we can call a set countably infinite if its elements have a 1-1 correspondence with the integers (this automatically ensures that the condition is satisfied for the natural numbers). Every rational number can be taken as a tuple of two integers (numerator and denominator), making the set of rational numbers also countably infinite.

Exercise: Prove that the set of real numbers is not countably infinite.
Proof: Suppose the set of real numbers is countably infinite. Then every real number is mapped injectively onto the set of natural numbers. Let rk, where k is a natural number, be the k-th real number. Now we construct a real number r as follows: the integral part of r is 0, and the k-th decimal place of r is any digit different from the k-th decimal place of rk. This number r differs from every rk at the k-th decimal place. This contradicts our assumption that the set of real numbers is countably infinite.

Note:
A Discrete Signal x[n] can be thought of as a " Vector " with countably infinite dimensions.
A Continuous Signal x(t) can be thought of as a vector with uncountably infinite dimensions.
Dot Product (Inner Product) of Vectors :
In simple language, the Dot product (Inner product or Scalar product) is a binary operation which takes two vectors and returns a scalar
quantity. The Dot product of two vectors X and Y, both of 'N' dimensions is a scalar which does not depend on the choice of the
orthogonal system with N directions. It is the Projection of one vector on the other i.e. component of one vector along another vector. By
its very definition, the dot product of a vector with itself is always non-negative and is the square of its magnitude. Take two vectors, X = ( x[1], x[2], ... , x[N] ) and Y = ( y[1], y[2], ... , y[N] ). Here X and Y in general can be complex. Then the dot product of X with Y is given by:
What is the purpose of taking complex conjugates of components of Y?
The inner product of a vector with itself must be non-negative by definition, as any vector is wholly contained in itself. If X is any complex vector, this condition requires the conjugate of Y to be taken in the above definition.
Conditions for a function to be an inner product in a vector space:
An operation <X, Y> between two vectors X and Y can be called an inner product if and only if it satisfies the following conditions:

Now lets define the inner product for continuous and discrete time as shown below.
Clearly, each of these definitions satisfies the necessary conditions for it to be described as an inner product.
Continuous Time: Consider X(t) and Y(t) as two signals in continuous time.

Provided the integral exists


Discrete Time: Consider X[n] and Y[n] as two signals in discrete time.

Compare this with the definition of dot product for two finite dimensional vectors
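The discrete-time definition translates directly into code. A minimal sketch follows (the example vector is arbitrary); note the conjugate on the second argument, which is what makes the inner product of a vector with itself real and non-negative:

    import numpy as np

    def inner(x, y):
        # <x, y> = sum over n of x[n] * conj(y[n]), for finite-length signals
        return np.sum(x * np.conj(y))

    x = np.array([1 + 1j, 2.0, -1j])
    print(inner(x, x))      # (7+0j): real and non-negative, the squared norm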
We will now introduce two new terms - "Eigenvalue" and "Eigensignal". These concepts will be used later along with the concept of
inner product of signals to introduce the Fourier series.
"Eigen" is a German word meaning "one's own". In the context of Signals & Systems, eigensignals and eigenvalues are described as
follows :Consider a system with impulse response h(t). A signal x(t) applied to this system produces an output y(t) which is same as the input
signal x(t) except for multiplication by a scalar. Then, the signal x(t) is known as an Eigensignal of the system and the multiplication
factor is called the Eigenvalue corresponding to the eigensignal. Mathematically,

Here, x(t) is the eigensignal and A is the eigenvalue corresponding to the eigensignal x(t). (Note that A is a constant.)
Complex Exponential signal as an Eigensignal: Consider an LSI system with impulse response h(t). We will verify that the complex exponential e^(j2πft) is an eigensignal of the LSI system. The output y(t) of the LSI system, corresponding to the input x(t) = e^(j2πft), can be obtained by convolving x(t) with the impulse response h(t).

Noting that

can be moved outside the integral, we get

Note that stability of the LSI system guarantees convergence of y(t). Thus we have:

Hence, we have shown that complex exponential signals are eigensignals for LSI systems (when they do produce a convergent output). The constant multiplying factor, for each fixed frequency f, is the eigenvalue corresponding to the exponential signal e^(j2πft). Now, the eigenvalue may be regarded as the inner product between the input signal e^(j2πft) and the impulse response h(t); that is, it can also be thought of as the projection of the impulse response h(t) of the system along the signal e^(j2πft).

This special property of the complex exponential function with respect to LSI systems is one of the inspirations for trying to represent
signals in terms of complex exponentials. We shall see soon the consequences of this property.
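The eigensignal property is easy to verify numerically in discrete time (a sketch; the frequency and the FIR impulse response are arbitrary choices). Away from the edges of the finite convolution, the output is exactly a scaled copy of the complex exponential input, the scale factor being the inner product of the impulse response with the exponential:

    import numpy as np

    omega = 0.3                                # an arbitrary frequency
    n = np.arange(200)
    x = np.exp(1j * omega * n)                 # complex exponential input
    h = np.array([1.0, -0.5, 0.25])            # an arbitrary stable FIR system

    y = np.convolve(x, h)                      # system output
    # Eigenvalue: projection of h along the exponential
    H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

    # Away from the edges, y[n] = H * x[n]: the input is only scaled.
    print(np.allclose(y[10:150], H * x[10:150]))   # True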
Conclusion:
In this lecture you have learnt:
Transforms look at signals from a domain other than the natural domain. Transforms are essential for understanding some
properties of a signal. The Fourier transform is an important transform to begin with.
A Discrete Signal x[n] can be thought of as a vector with countably infinite dimensions.
A Continuous Signal x(t) can be thought of as a vector with uncountably infinite dimensions.
We can define inner products for signals, and thus go on to define eigensignals and eigenvalues for a signal:
for continuous signals
for discrete signals

The Complex Exponential signal is an Eigensignal of a stable LSI system. The Eigenvalue of this signal can be represented as an inner product of the impulse response and the eigensignal.

Congratulations, you have finished Lecture 12.

Module 2 : Signals in Frequency Domain


Lecture 13 : Fourier Series Representation of Periodic Signals
Objectives
In this lecture you will learn the following
Fourier Series representation of Periodic Signals
Set of periodic signals as a vector space
Orthogonality of
Frequency Domain Representation.
Fourier Series representation of Periodic Signals
Consider a periodic signal x(t) with fundamental period T, i.e. x(t + T) = x(t) for all t.

Then the fundamental frequency of this signal is defined as the reciprocal of the fundamental period, so that f0 = 1/T.

Under certain conditions, a periodic signal x(t) with period T can be expressed as a linear combination of sinusoidal signals of discrete frequencies, which are multiples of the fundamental frequency of x(t). Further, sinusoidal signals are conveniently represented in terms of complex exponential signals. Hence, we can express the periodic signal in terms of complex exponentials, i.e.

Such a representation of a periodic signal as a combination of complex exponentials of discrete frequencies, which are multiples of the
fundamental frequency of the signal, is known as the Fourier Series Representation of the signal.
Inner product
The set of periodic signals with period T form a vector space.

We define the following inner product:

And the norm or magnitude of the signal is defined as:-

Now we consider the set of complex exponential signals at frequencies that are integer multiples of the fundamental frequency; these belong to this vector space.
We shall first show that these vectors are mutually orthogonal. In other words, we show that:

Further, you may verify :

Thus, we have shown that this set of complex exponentials forms an orthogonal set in the vector space of all periodic signals with
period T. Indeed, if we restrict ourselves to a certain class of signals in this vector space (those that satisfy the Dirichlet Conditions,
which will be discussed in the next lecture), one can show that the above set of complex exponentials forms a basis for this class. i.e.:
signals in this class can be expressed as a linear combination of these complex exponentials. In other words, such signals permit a
Fourier Series representation.

Assuming the Fourier Series representation of a signal x(t), with period T exists, it is easy to find the Fourier Series coefficients, using
the orthogonality of the basis set of complex exponentials.

Taking inner product with

on both sides

Frequency Domain Representation


From the above discussion, we can say that a periodic signal whose Fourier Series expansion exists can be represented uniquely in terms of its Fourier coefficients. These coefficients correspond to particular multiples of the fundamental frequency of the signal. Thus, the signal may be equivalently represented as a discrete signal on the frequency axis:
This is called the Frequency domain representation of the signal.
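The coefficient formula is straightforward to evaluate numerically, which also makes this discrete frequency-domain picture concrete. A sketch for a square wave follows (the period and waveform are our own example choices):

    import numpy as np

    T = 2.0                                    # period (our choice)
    N = 20000
    t = np.arange(N) * (T / N)                 # one period, uniformly sampled
    x = np.where(t < T / 2, 1.0, -1.0)         # square wave over one period

    def coeff(k):
        # c_k = (1/T) * integral over one period of x(t) e^{-j 2 pi k t / T} dt
        return np.sum(x * np.exp(-2j * np.pi * k * t / T)) * (1.0 / N)

    for k in range(4):
        print(k, coeff(k))   # c_0 ~ 0; odd k ~ 2/(j pi k); even k ~ 0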
We next discuss the conditions under which the Fourier Expansion is valid.
Conclusion:

In this lecture you have learnt:


A representation of a periodic signal as a combination of complex exponentials of discrete frequencies, which are multiples of the
fundamental frequency of the signal, is known as the Fourier Series Representation of the signal.
The set of periodic signals with period T forms a vector space.
Orthogonality of
Calculating the Fourier series coefficients for a periodic signal.
Frequency Domain Representation of a periodic signal

Congratulations, you have finished Lecture 13.

Module 2 : Signals in Frequency Domain


Lecture 14 : Convergence of Fourier Series and Gibbs' Phenomenon
Objectives
In this lecture you will learn the following
To study convergence in 2 different contexts.
Dirichlet Conditions For Pointwise Convergence .
Condition for convergence in squared norm .
To understand Gibbs' Phenomenon

Consider the Fourier series expansion corresponding to the periodic signal x(t), with the coefficients as calculated by the formula in the previous lecture. This summation may or may not converge to the actual signal x(t).
We shall discuss the convergence of the Fourier series representation of a periodic signal in two contexts, namely Pointwise
convergence and Convergence in squared norm. We will first see what each of these terms means and then discuss the conditions
under which each kind of convergence takes place.
For the subsequent discussion let,

Pointwise Convergence
Pointwise convergence implies the series converges to the original function at any point, i.e: the Fourier Series representation of a signal
x(t) is said to converge pointwise to the signal x(t) if:

i.e to say

Convergence in squared norm


The Fourier Series representation is said to converge in the sense of squared norm to the signal x(t) if

Pointwise convergence implies convergence in squared norm. Since convergence in squared norm is a more relaxed condition, it covers a much larger class of signals than pointwise convergence.
Finally, we now move on to the conditions for these forms of convergence.
Dirichlet Conditions For Pointwise Convergence
Consider the following 3 conditions that may be imposed on a periodic signal x(t) :
1) x(t) should be absolutely integrable over a period.
A signal that does not satisfy this condition is x(t) = tan(t) as:-

2) x(t) should have only a finite number of discontinuities over one period. Furthermore, each of these discontinuities must be finite.
An example of a function which has infinite number of discontinuities is illustrated below. The function is shown over one of the periods.

3) The signal x(t) should have only a finite number of maxima and minima in one period. An example of a function which has
infinite number of maxima and minima is: a periodic signal with period 1, defined on (0,1] as:

If the signal satisfies the above conditions, then at all points where the signal is continuous, the Fourier Series converges to the signal.
However, at points where the signal is discontinuous (Dirichlet conditions allow finite number of discontinuities in a period), the Fourier
Series converges to the average of the left and the right hand limits of the signal. Mathematically, at a point of discontinuity

In practice, the restrictions imposed on signals by the Dirichlet conditions are not very severe, and most of the signals we will deal with
satisfy these conditions.
Condition for convergence in squared norm sense
If, for a periodic signal x(t) with period T,

converges, then its Fourier Series converges to it in the squared norm sense.

As is expected, this is a far more relaxed constraint than the Dirichlet conditions.
At this point let us define some terms which will be of use to us later in the course:
The quantity |x(t)|^2 is called the instantaneous power or energy density of the signal x(t).

If x(t) is periodic with period T, and the integral of |x(t)|^2 over one period, divided by T, is finite, x(t) is called a finite power signal, and the value of the integral is called the power of the signal.


(Thus we can say that if a periodic signal has finite power, we are guaranteed convergence in squared norm of its Fourier Series.)
If x(t) is non-periodic, and the integral of |x(t)|^2 over the entire time axis is finite, x(t) is said to be a finite energy signal, and the value of the integral is called the energy of the signal.


We now discuss another aspect of the convergence of the Fourier series, Gibbs' Phenomenon.

Gibbs' Phenomenon
We can approximate a signal having a Fourier Series expansion by taking a finite number of terms of the expansion.
i.e:

is an approximation to the periodic signal x(t).

is also called a Partial Sum. We would obviously expect that as the number of terms taken is increased, this summation would
become a better and better approximation to x(t), i.e

would approach x(t) uniformly.

Indeed this happens in regions of continuity of the original signal. However, at the points of discontinuity in the original signal, an
interesting phenomenon is observed. The partial sum oscillates near the point of discontinuity. We might expect these oscillations to
decrease as the number of terms taken is increased. But surprisingly, as the number of terms taken is increased, although these
oscillations get closer and closer to the point of discontinuity, their amplitude does not decrease to zero, but tends to a non zero limit.
This phenomenon is known as Gibbs' Phenomenon, after the mathematician Josiah Willard Gibbs, who accounted for these oscillations.
The illustration below shows the various Fourier approximations of a periodic square wave.

Mathematically, this means if the periodic signal has discontinuities, its Fourier Series does not converge uniformly.
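A sketch of the overshoot, using the sine-series form of the odd square wave of amplitude 1 (equivalent to the coefficients 2/(j pi k) computed for the square-wave example in the previous lecture). The peak of the partial sum stays near 1.18, roughly 9% of the jump above the true value of 1, however many terms we take:

    import numpy as np

    t = np.linspace(-0.5, 0.5, 100001)        # one period, T = 1

    def partial_sum(N):
        # Partial Fourier sum of the odd square wave of amplitude +/- 1
        y = np.zeros_like(t)
        for k in range(1, N + 1, 2):
            y += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
        return y

    for N in (9, 99, 999):
        print(N, partial_sum(N).max())        # ~1.18: the overshoot persists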
Conclusion:

In this lecture you have learnt:


We have discussed the convergence of the Fourier series representation of a periodic signal in two contexts, namely Pointwise
convergence and Convergence in squared norm .
Dirichlet Conditions for Pointwise Convergence are: (a) absolute integrability, (b) a finite number of discontinuities over one period, and (c) a finite number of extrema over one period.
If a periodic signal has finite power, its Fourier Series converges to it in the squared norm sense.
The partial sum oscillates near a point of discontinuity. As the number of terms taken is increased, these oscillations crowd closer and closer to the point of discontinuity, but their amplitude does not decrease to zero; it tends to a non-zero limit. This phenomenon is known as Gibbs' Phenomenon.

Congratulations, you have finished Lecture 14.

Module 2: Signals in Frequency Domain


Lecture 15 : Fourier Transform
Objectives
In this lecture you will learn the following
Fourier Series extended to aperiodic signals
Inverse Fourier transform
The Fourier and inverse Fourier transform pair
Dirichlet Conditions for convergence of Fourier Transform.
A representation for aperiodic signals
We have already seen that a broad class of functions (those satisfying the Dirichlet conditions) can be written in the form of a Fourier series. That is, for a periodic function x(t) satisfying the Dirichlet conditions, we may say
where
Although this covers a broad class of functions, it puts a serious restriction on the function: periodicity. So the next question that naturally pops up in one's mind is, "Can we extend our idea of the Fourier series so as to include non-periodic functions?" This precisely is our goal for this part, the basic inspiration being that an aperiodic signal may be looked at as a periodic signal with an infinite period. Note that what follows is not a mathematically rigorous exercise, but it will help develop an intuition for the Fourier Transform for aperiodic signals.
The Fourier Transform
Let's start with a simple example. Consider the following function, periodic with period T.

Clearly this is our familiar square wave. Let's see what happens to its frequency domain representation as we let T approach infinity. We know that the Fourier coefficients of the above function can be written as

where the half-width of the pulse equals 1. For T = 4, the expression reduces to:

Now we know that every value of k in this equation gives us the coefficient corresponding to the k-th multiple of the fundamental frequency of the signal. Let's plot the frequency domain representation of the signal, which we shall also call the spectrum of the signal. Note that the horizontal axis represents frequency, although the points marked indicate only scale.

Now let's double the time period. The expression for the Fourier coefficients becomes:

Since frequency is the reciprocal of the time period, as the time period T increases, the spacing f0 between consecutive frequencies in the spectrum will reduce, and we'll get the following plot.

If we reduce the fundamental frequency by another half (that is, increase the time period by a factor of two), we'll get the following frequency spectrum:

Notice as the period of the periodic signal is increased, the spacing between adjacent frequency components decreases.
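This densification is easy to reproduce numerically. A sketch, assuming (as in the example above) a periodic pulse of unit height and half-width T1 = 1, whose Fourier coefficients are the standard result c_k = sin(2 pi k T1 / T) / (pi k); as T grows, the line spacing f0 = 1/T shrinks while the coefficients trace out the same envelope:

    import numpy as np

    def pulse_coeffs(T, K=20, T1=1.0):
        # c_k = sin(2 pi k T1 / T) / (pi k) for k >= 1 (and c_0 = 2*T1/T)
        k = np.arange(1, K + 1)
        return np.sin(2 * np.pi * k * T1 / T) / (np.pi * k)

    for T in (4, 8, 16):
        f0 = 1 / T                       # spacing of the spectral lines
        print(T, f0, pulse_coeffs(T)[:3])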


Finally, when the period of the signal tends to infinity, i.e. the signal is aperiodic, the frequency spectrum becomes continuous.
By looking at the plots we can infer that as we increase the time period more and more, we get more and more closely spaced frequencies in the spectrum, i.e. complex exponentials with closer and closer frequencies are required to represent the signal. Hence, if we let T approach infinity, we get frequencies infinitesimally close to each other. This is the same as saying that we get every possible frequency, and hence the whole continuous frequency axis. Thus our frequency spectrum will no longer be a series, but a continuous function. The representation changes from a summation over discrete frequencies to an integration over the entire frequency axis. The function which (like the Fourier Series coefficients) gives what is crudely the strength of each complex exponential in the representation is formally called the Fourier Transform of the signal. The representation takes the form:

where X(f) is the Fourier Transform of x(t). Note the similarity of the above equation with the Fourier Series summation in light of
the preceding discussion.
This equation is called the Inverse Fourier Transform equation, x(t) being called the Inverse Fourier Transform of X(f).
Such a representation for an aperiodic signal exists, of course subject to some conditions, but we'll come to those a little later.

The Fourier Transform equation


The Fourier Transform of a function x(t) can be shown to be:

This equation is called the Fourier Transform equation.


(As a convention, we generally use capital letters to denote the Fourier transform)
Obviously, we are not guaranteed in all cases that the integral on the right-hand side will converge.
We'll next discuss the conditions for the Fourier Transform of an aperiodic signal to exist.

Recap:
Under certain conditions, an aperiodic signal x(t) has a Fourier transform X(f) and the two are related by:
(Fourier Transform equation)

( Inverse Fourier Transform equation)

Now, let's go on to the conditions for existence of the Fourier Transform. Again, notice the similarity of these conditions with the Dirichlet conditions for periodic signals.

Dirichlet Conditions for convergence of Fourier Transform

Consider an aperiodic signal x(t). Its Fourier Transform exists (i.e. the transform integral converges) and

converges to x(t), except at points of discontinuity provided:

1) x(t) is absolutely integrable . i.e:

2) x(t) has only a finite number of extrema in any finite interval.


For example,

does not satisfy this condition in, say (0,1).

3) x(t) has only a finite number of discontinuities in any finite interval. For example, the following function (the so-called Dirichlet function) does not satisfy this condition.

These 3 conditions satisfied,

will converge to x(t) at all points of continuity of x(t). At points of discontinuity of x(t),

this integral converges to the average of the left hand limit and the right hand limit of x(t) at that point.

Conclusion:
In this lecture you have learnt:
An aperiodic signal may be looked at as a periodic signal with an infinite period .
We learnt what inverse Fourier transform is & derived its equation.
We saw Dirichlet Conditions for convergence of Fourier Transform.

Congratulations, you have finished Lecture 15.

Module 2 : Signals in Frequency Domain


Lecture 16 : Fourier Transform as a System
Objectives
In this lecture you will learn the following
The Fourier Transform and the Inverse Fourier Transform as systems
Duality of the Fourier Transform
Properties of the Fourier Transform as an LTI system
The Fourier Transform and the Inverse Fourier Transform as systems
The Fourier Transform and The Inverse Fourier transform may be looked at as system transformations. One system for instance takes in
a time signal and outputs its Fourier transform, another takes a frequency domain signal (or a spectrum) and produces the corresponding
time-domain signal.

Let us now gain some additional insight into the Fourier Transform using this system notion.

1. Duality of the Fourier Transform

Notice a certain symmetry in these two system transformations.


Say y(t) has a Fourier Transform Y(f), then :

What is the transform of Y(t)? Or, which signal would, on Inverse Fourier transformation, yield Y(t)? Recall the Inverse Transformation equation above, and make the appropriate substitution in the equation for Y:

Therefore, y(-f ) is the Fourier transform of Y(t) (where Y(f) is the Fourier transform of y(t) ) ! This remarkable relationship between a
signal and its Fourier transform is called the Duality of the Fourier Transform. i.e:

Duality thus links the Fourier transform and its inverse. Notice the relationship between the Fourier Transform and the Fourier Inverse of X above:

This gives us a very important insight into the nature of the Fourier transform. We will use it to prove many dual relationships: if some
result holds for the Fourier Transform, a dual result will hold for the Inverse transform as well. We will encounter some examples soon.
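A discrete analogue makes duality tangible: applying the DFT twice returns a time-reversed (and scaled) copy of the signal, mirroring the statement that transforming Y(t) yields y(-f). A sketch (numpy's DFT convention contributes the extra factor of N):

    import numpy as np

    y = np.random.randn(8)
    yy = np.fft.fft(np.fft.fft(y))                # transform applied twice
    y_reversed = np.roll(y[::-1], 1)              # y[(-n) mod N]
    print(np.allclose(yy, len(y) * y_reversed))   # True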

2. Linearity
Both the Fourier transform and its inverse system are linear. Thus the Fourier transform of a linear combination of two signals is the
same linear combination of their respective transforms. The same, of course, holds for the Inverse Fourier transform as well.

3. Memory
The independent variable for the input and output signals in these systems is not the same, so technically we can't talk of memory with
respect to the Fourier transform and its inverse. But what we can ask is: if one changes a time signal locally, will only some
corresponding local part of the transform change? Not quite.

Introducing a local kink like in the above time-signal causes a large, spread-out distortion of the spectrum. In fact, the more local the
kink, the more spread-out the distortion!
By duality, one can say the same about the inverse Fourier transform.
That is, if x(.) has a Fourier transform X(.), using duality and the above discussion, we can say that introducing a local distortion in X(.) will cause a widespread distortion in x(-.). But x(.) is also the inverse Fourier transform of this locally changed X(.). Thus introducing a local kink in the spectrum of a signal changes it drastically.

4. Shift invariance
Again, we can't talk of shift variance/invariance with these systems as the independent variable for the input and output signals is not the
same. But we can examine what happens to the spectrum of a signal on time-shifting it, and vice-versa.

Notice that nowhere has the magnitude of X(f) changed. Only a phase (or argument) change that is linear in frequency has taken
place.
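The discrete counterpart can be checked with the FFT, where a circular shift plays the role of the time shift (a sketch; the signal and shift amount are arbitrary):

    import numpy as np

    x = np.random.randn(16)
    n0 = 3                                         # shift amount (our choice)
    X = np.fft.fft(x)
    X_shifted = np.fft.fft(np.roll(x, n0))         # transform of x[n - n0]
    k = np.arange(len(x))
    phase = np.exp(-2j * np.pi * k * n0 / len(x))  # phase linear in frequency
    print(np.allclose(X_shifted, X * phase))       # True; magnitudes unchanged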

Let us, using Duality examine the effect of translating the spectrum on the time-signal.


5. Stability
Are our systems BIBO stable? i.e.: Will a bounded input necessarily give rise to a bounded output? No.
The integrals that describe the two systems need not converge for a bounded input signal. e.g.: they don't converge for a non-zero
constant input signal.
Now that we have come to the issue of the Fourier transform and the Inverse Fourier transform not converging for a constant input
signal, let us see what the Transform of the unit impulse is.
Note that the impulse, far from satisfying Dirichlet's conditions, is not even a function. It falls in the class of generalized functions. Thus
what we are doing is extending our idea of the Fourier Transform. Why? Because we will find it useful.

That is, the Fourier transform of the unit impulse is the identity function. Thus, even though the inverse equation does not converge for the identity function, we say that the Fourier Transform of the unit impulse is the identity function.

Why stop here? Consistent with duality, we say that the Fourier Transform of the identity function is the unit impulse:

We will even apply the time-shift and frequency-shift properties we have just proved to make further generalizations:

Conclusion: In this lecture you have learnt:


We looked at The Fourier Transform and The Inverse Fourier transform as system transformations .
We took a look over dual nature of Fourier transform.
Both the Fourier transform and its inverse system are linear.
We examined properties like memory, shift invariance and stability of these systems.
Congratulations, you have finished Lecture 16.

Module 2 : Signals in Frequency Domain


Lecture 17 : Fourier Transform of periodic signals and some Basic Properties of Fourier Transform
Objectives
In this lecture you will learn the following
Fourier Transform of Periodic signals
Fourier transform of x(-t)
Fourier transform of conjugate of x(t)
The Fourier transform of an even signal
The Fourier transform of a real signal

Fourier Transform of Periodic signals.


We know that the Fourier transform of the signal that assumes the value 1 identically is the Dirac delta function.

By the property of translation in the frequency domain, we get:

This is the result we will make use of in this section.


Suppose x(t) is a periodic signal with the period T, which admits a Fourier Series representation. Then,

Now since the Fourier transformation is linear, the above result can be used to obtain the Fourier Transform of the periodic signal x(t):

Therefore,

By putting this transform in inverse Fourier transform equation, one can indeed confirm that one obtains back the Fourier series
representation of x(t).

Thus, the Fourier transform of a periodic signal having the Fourier series coefficients shown above is a train of impulses, occurring at multiples of the fundamental frequency, the strength of the impulse at the k-th multiple being the k-th Fourier series coefficient. This looks like:

Basic Properties of Fourier Transform
Consider a signal x(t) with Fourier transform X(f). We'll see what happens to the Fourier transform of x(t) on time-reversal and
conjugation. i.e:

Now, we are aware that

Transform X'(f) of x(-t) is:

Substitute

Therefore,

Therefore,

Applying this result to periodic signals (we have just seen their Fourier transform), you see that the k-th Fourier Series coefficient of x(-t) is the (-k)-th Fourier Series coefficient of x(t).

Now let's see how the Fourier Transform of the conjugate of x(t) is related to that of x(t).

Starting with

taking conjugates, we get :

Thus,

And, therefore,

Applying this in the context of periodic signals, we see that the k-th Fourier Series coefficient of the conjugate of x(t) is the conjugate of the (-k)-th Fourier Series coefficient of x(t).

Let us look at some simple consequences of these properties:


a) What can we say about the Fourier transform of an even signal x(t) (with Fourier transform X(f))?
x(-t) has Fourier transform X(-f). As x(t) is even, x(t) = x(-t), implying X(f) = X(-f).
Thus, the Fourier transform of an even signal is even. Similarly, you can show that the Fourier transform of an odd signal is odd.

b) What can we say about the Fourier transform of a real signal x(t), with Fourier transform X(f) ?
If x(t) is real,

Thus the Fourier transform of a real signal is Conjugate Symmetric.
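This is immediate to verify with the DFT (a sketch; for the DFT, X(-f) corresponds to the bin X[(N - k) mod N]):

    import numpy as np

    x = np.random.randn(16)                 # a real signal
    X = np.fft.fft(x)
    X_neg = np.roll(X[::-1], 1)             # X[(-k) mod N]
    print(np.allclose(X_neg, np.conj(X)))   # True: conjugate symmetric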

Conclusion:
In this lecture you have learnt:
Fourier transformation is linear.
The Fourier transform of x(-t) is X(-f).
The Fourier transform of the conjugate of x(t) is the conjugate of X(-f).
The Fourier transform of an even signal is even
The Fourier transform of a real signal is Conjugate Symmetric .

Congratulations, you have finished Lecture 17.

Module 2 : Signals in Frequency Domain


Lecture 18 : The Convolution Theorem
Objectives
In this lecture you will learn the following
We shall prove the most important theorem regarding the Fourier Transform- the Convolution Theorem
We are going to learn about filters.
Proof of 'the Convolution theorem for the Fourier Transform'.
The Dual version of the Convolution Theorem
Parseval's theorem
The Convolution Theorem
We shall in this lecture prove the most important theorem regarding the Fourier Transform: the Convolution Theorem. It is this theorem that links the Fourier Transform to LSI systems, and opens up a wide range of applications for the Fourier Transform. We shall motivate its importance with an application example.
Modulation
Modulation refers to the process of embedding an information-bearing signal into a second carrier signal. Extracting the information-bearing signal is called demodulation. Modulation allows us to transmit information signals efficiently. It also makes possible the simultaneous transmission of more than one signal with overlapping spectra over the same channel. That is why we can have so many channels being broadcast on radio at the same time, which would have been impossible without modulation.
There are several ways in which modulation is done. One technique is amplitude modulation or AM in which the information signal is used
to modulate the amplitude of the carrier signal. Another important technique is frequency modulation or FM, in which the information
signal is used to vary the frequency of the carrier signal. Let us consider a very simple example of AM.
Consider the signal x(t) which has the spectrum X(f) as shown :

Why such a spectrum? Because it's the simplest possible multi-valued function. Also, it is band-limited (i.e. the spectrum is non-zero in only a finite interval of the frequency axis), having a maximum frequency component fm. Band-limited signals will be of interest to us later on.
Thus, if x(t) is amplitude modulated with a carrier signal, then, since the carrier is a sum of two complex exponentials, the Fourier transform of the amplitude modulated signal is:

At the receiving end, in order to demodulate the signal, we multiply it again by

Now, all we need is something that keeps the

part of the transmitted spectrum and simply chops away the rest of the

spectrum. Such a device is called an ideal low-pass filter.
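The arithmetic above is easy to check numerically. Below is a minimal sketch in Python/NumPy (all parameter values are illustrative, not from the lecture): a band-limited message is modulated onto a carrier, multiplied by the carrier again at the receiver, and the ideal low-pass filter is realized as a mask on the DFT (filters are described next).

import numpy as np

# AM modulation/demodulation sketch: shift the message to +/- fc, shift
# it back with a second multiplication, then keep only the baseband.
fs, T = 8000.0, 1.0                      # simulation rate and duration
t = np.arange(0, T, 1.0 / fs)
fm, fc = 40.0, 1000.0                    # message bandwidth ~40 Hz, 1 kHz carrier
x = np.cos(2 * np.pi * 25 * t) + 0.5 * np.cos(2 * np.pi * fm * t)

y = x * np.cos(2 * np.pi * fc * t)       # modulation: copies at +/- fc
z = y * np.cos(2 * np.pi * fc * t)       # demodulation: X(f)/2 + copies at +/- 2fc

Z = np.fft.fft(z)
f = np.fft.fftfreq(len(z), 1.0 / fs)
Z[np.abs(f) > 2 * fm] = 0.0              # ideal low-pass: zero everything outside baseband
x_rec = 2 * np.real(np.fft.ifft(Z))      # factor 2 undoes the 1/2 from cos^2

print("max reconstruction error:", np.max(np.abs(x_rec - x)))   # ~1e-12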

Filters :
The simplest ideal filters aim at retaining a portion of the spectrum of the input in some pre-defined region of the frequency axis and
removing the rest.
A LOWPASS FILTER is a filter that passes low frequencies, i.e. around f = 0, and rejects the higher ones, i.e. it multiplies the input spectrum with the following:

H(f) = 1 \text{ for } |f| \le B, \quad 0 \text{ otherwise}

A HIGHPASS FILTER passes high frequencies and rejects low ones by multiplying the input spectrum by:

H(f) = 1 \text{ for } |f| \ge B, \quad 0 \text{ otherwise}

A BANDPASS FILTER passes a band of frequencies and rejects frequencies both higher and lower than those in the band that is passed, thus multiplying the input spectrum by:

H(f) = 1 \text{ for } B_1 \le |f| \le B_2, \quad 0 \text{ otherwise}

A BANDSTOP FILTER stops or rejects a band of frequencies and passes the rest of the spectrum, thus multiplying the input spectrum by:

H(f) = 0 \text{ for } B_1 \le |f| \le B_2, \quad 1 \text{ otherwise}
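As a quick illustration, the sketch below (frequencies are illustrative, not from the lecture) realizes all four ideal filters as 0/1 masks on the DFT of a three-tone test signal and prints how much power each filter lets through.

import numpy as np

# Ideal filters as frequency-domain masks on the DFT of a test signal.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = sum(np.cos(2 * np.pi * f0 * t) for f0 in (10, 60, 200))

X = np.fft.fft(x)
f = np.abs(np.fft.fftfreq(len(x), 1 / fs))   # |f|, since the masks are symmetric

masks = {
    "lowpass  (|f| <  30)": f < 30,
    "highpass (|f| > 100)": f > 100,
    "bandpass (40 < |f| < 100)": (f > 40) & (f < 100),
    "bandstop (40 < |f| < 100 removed)": ~((f > 40) & (f < 100)),
}
for name, m in masks.items():
    y = np.real(np.fft.ifft(X * m))          # multiply spectrum, invert
    print(name, "-> output power:", round(np.mean(y ** 2), 3))

Each tone carries power 0.5, so the lowpass, highpass and bandpass outputs each report 0.5 (one surviving tone) while the bandstop output reports 1.0 (two surviving tones).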

How do these filters work? That is, what does multiplication of two signals in the frequency domain imply in the time domain?

If we multiply two Fourier transforms X(f) and H(f), let us see what the Inverse Fourier transform of this product is.
Consider the integral

\int_{-\infty}^{\infty} X(f)\, H(f)\, e^{j2\pi ft}\, df

Let us replace H(f) by \int_{-\infty}^{\infty} h(\tau)\, e^{-j2\pi f\tau}\, d\tau. This makes the integral

\int_{-\infty}^{\infty} X(f) \left[ \int_{-\infty}^{\infty} h(\tau)\, e^{-j2\pi f\tau}\, d\tau \right] e^{j2\pi ft}\, df

We can interchange the order of integration, so long as the new double integral converges:

\int_{-\infty}^{\infty} h(\tau) \left[ \int_{-\infty}^{\infty} X(f)\, e^{j2\pi f(t-\tau)}\, df \right] d\tau

We note that the term inside the bracket is just the inverse Fourier transform of X(f) evaluated at t - \tau. Thus the integral simplifies to

\int_{-\infty}^{\infty} h(\tau)\, x(t-\tau)\, d\tau

which is simply the convolution of h(t) with x(t)! What we have just proved is called the Convolution theorem for the Fourier Transform.

It states:
If two signals x(t) and y(t) are Fourier Transformable, and their convolution is also Fourier Transformable, then the Fourier Transform of their convolution is the product of their Fourier Transforms.
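The theorem has an exact discrete counterpart that is easy to verify. The sketch below (using the DFT as a stand-in for the Fourier transform, with enough zero-padding that circular convolution coincides with linear convolution) checks that the transform of a convolution equals the product of the transforms.

import numpy as np

# Numerical check of the convolution theorem in DFT form.
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
h = rng.standard_normal(30)

N = len(x) + len(h) - 1                      # padding so circular = linear
lhs = np.fft.fft(np.convolve(x, h), N)       # transform of the convolution
rhs = np.fft.fft(x, N) * np.fft.fft(h, N)    # product of the transforms

print("max |lhs - rhs| =", np.max(np.abs(lhs - rhs)))   # ~1e-13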

Dual of the convolution theorem


We now apply the Duality of the Fourier Transform to the Convolution Theorem to get another important theorem.
Let x(t) and y(t) be two Fourier transformable signals, with Fourier transforms X(f) and Y(f) respectively. Assume X(f)*Y(f) is Fourier Invertible. We now find its inverse.

What does Duality tell us? If x(t) has Fourier transform X(f), then X(t) has Fourier transform x(-f).

Thus we know:
The Convolution theorem says: x(t) * y(t) has Fourier transform X(f)\,Y(f).

Applying duality on this result: X(t)\,Y(t) has Fourier transform (x * y)(-f) = x(-f) * y(-f). But x(-f) and y(-f) are precisely the Fourier transforms of X(t) and Y(t). Renaming the signals, we get the Dual version of the Convolution Theorem:

If x(t) and y(t) are Fourier Transformable, and x(t)\,y(t) is Fourier Transformable, then its Fourier Transform is the convolution of the Fourier Transforms of x(t) and y(t), i.e.:

x(t)\, y(t) \Longleftrightarrow X(f) * Y(f)

Parseval's theorem
We now prove another very important theorem using the Convolution Theorem. We first give its statement:
Parseval's theorem states that the inner product between signals is preserved in going from time to the frequency domain, i.e.:

\int_{-\infty}^{\infty} x(t)\, y^*(t)\, dt = \int_{-\infty}^{\infty} X(f)\, Y^*(f)\, df

where X(f), Y(f) are the Fourier Transforms of x(t), y(t) respectively.
If we take x(t) = y(t),

\int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} |X(f)|^2\, df

This is interpreted physically as: Energy calculated in the time domain is the same as the energy calculated in the frequency domain.

|X(.)|^2 is called the Energy Spectral Density.

Proof:
The Fourier transform of y^*(-t) is Y^*(f) (by the time-reversal and conjugation properties). By the Convolution theorem, x(t) * y^*(-t) has Fourier transform X(f)\,Y^*(f). Writing the convolution as the inverse Fourier transform of this product and evaluating at t = 0:

\left[ x(t) * y^*(-t) \right]_{t=0} = \int_{-\infty}^{\infty} x(\tau)\, y^*(\tau)\, d\tau = \int_{-\infty}^{\infty} X(f)\, Y^*(f)\, df

Hence Proved.
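The discrete counterpart of this theorem is built into the DFT, where the frequency-domain sum carries a 1/N factor. A quick numerical check (a sketch, not part of the lecture):

import numpy as np

# Parseval's theorem in DFT form: sum |x[n]|^2 = (1/N) sum |X[k]|^2,
# and the inner-product form <x, y> = (1/N) <X, Y>.
rng = np.random.default_rng(1)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
y = rng.standard_normal(256)
X, Y = np.fft.fft(x), np.fft.fft(y)

print(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / len(x))   # equal energies
print(np.vdot(y, x), np.vdot(Y, X) / len(x))                     # equal inner products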

Convolution between a periodic and an aperiodic signal


We now apply the Convolution theorem to the special case of convolution between a periodic and an aperiodic signal. (Note: convolutions between two periodic signals do not converge; we'll address that issue after this.)
Recall: If a periodic signal x(t) with period T obeys the Dirichlet conditions for a Fourier Series representation, then,

x(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{j2\pi kt/T}

and its Fourier Transform is given by

X(f) = \sum_{k=-\infty}^{\infty} c_k\, \delta\!\left(f - \frac{k}{T}\right)

If the convolution between x(t) and some Fourier Transformable aperiodic signal h(t) converges, let's see what the Fourier transform of x*h looks like (assuming it exists). Note x*h is also periodic with the same period as x(t), and its Fourier transform is then also expected to be a train of impulses.
By the convolution theorem, the Fourier Transform of x*h is:

X(f)\, H(f) = \sum_{k=-\infty}^{\infty} c_k\, H\!\left(\frac{k}{T}\right) \delta\!\left(f - \frac{k}{T}\right)

implying, the k-th Fourier series co-efficient of x*h is c_k\, H(k/T).

Therefore, assuming a periodic signal x(t) has a Fourier series representation, and an aperiodic signal h(t) is Fourier transformable, if x*h converges (and has a Fourier series representation), it is periodic with the same period as x(t) and its Fourier series coefficients are the Fourier series coefficients of x(t) multiplied by the value of H(f) at the corresponding multiple of the fundamental frequency.

Conclusion:
In this lecture you have learnt:
Modulation refers to the process of embedding an information-bearing signal into a second carrier signal.
Low pass, high pass, bandpass and bandstop filters were studied.
We saw the proof of the Convolution theorem.
We obtained the Dual version of the Convolution Theorem.
Parseval's theorem's physical interpretation is as follows: Energy calculated in the time domain is the same as the energy
calculated in the frequency domain.

Congratulations, you have finished Lecture 18.

Module 2 : Signals in Frequency Domain


Lecture 19 : Periodic Convolution and Auto-Correlation
Objectives
In this lecture you will learn the following
To look at a modified definition of convolution for periodic signals
Circular convolution
Parseval's theorem
Convolution theorem in the context of periodic convolution.
Auto correlation
Cross correlation
Periodic Convolution
We have applied the convolution theorem to convolutions involving:
(i) two aperiodic signals
(ii) one aperiodic and one periodic signal.
But, convolutions between periodic signals diverge, and hence the convolution theorem cannot be applied in this context. However a
modified definition of convolution for periodic signals whose periods are rationally related is found useful. We look at this definition
now. Later, we will prove a result similar to the Convolution theorem in the context of periodic signals.
Consider the following signals
x(t) periodic with period T 1 and h(t) periodic with period T 2 where T 1 and T 2 are rationally related.
Let T 1 / T 2 = m / n (where m and n are integers)
Hence, m T 2 = n T 1 = T is a common period for both x(t) and h(t).
Periodic convolution or circular convolution of x(.) with h(.) is denoted by x \circledast h and is defined as:

(x \circledast h)(t) = \frac{1}{T} \int_{0}^{T} x(\tau)\, h(t-\tau)\, d\tau

Note the definition holds even if T is not the smallest common period for x(t) and h(t), due to the division by T. Thus we don't need m and n to be the smallest possible integers satisfying T 1 / T 2 = m / n in the process of finding T.

Also, notice that the convolution is periodic with period T 1 as well as T 2. More on this later. Also, show for yourself that the periodic convolution is commutative, i.e.:

x \circledast h = h \circledast x

Fourier Transform of x \circledast h
Say x(t) is periodic with period T 1 and h(t) is periodic with period T 2, with T 1 / T 2 = m / n (where m and n are integers).
Thus m T 2 = n T 1 = T is a common period for the two.
We can expand x(t) and h(t) into Fourier Series with fundamental frequency 1/T:

x(t) = \sum_k a_k\, e^{j2\pi kt/T}, \qquad h(t) = \sum_k b_k\, e^{j2\pi kt/T}

If one compares the Fourier co-efficients in these expansions with those in the expansions with the original fundamental frequencies, i.e. 1/T 1 and 1/T 2, we find that a_k can be non-zero only when k is a multiple of n (since T = n T 1), and b_k can be non-zero only when k is a multiple of m.
Now,

(x \circledast h)(t) = \frac{1}{T}\int_0^T x(\tau)\, h(t-\tau)\, d\tau = \sum_k a_k\, b_k\, e^{j2\pi kt/T}

so the Fourier Transform of x \circledast h is \sum_k a_k\, b_k\, \delta(f - k/T).

But then, we have seen that a_k can be non-zero only when k is a multiple of n, and b_k can be non-zero only when k is a multiple of m.
Their product can clearly be non-zero only when k is a multiple of both m and n. Thus if p is the LCM (least common multiple) of m and n, we have:

a_k\, b_k = 0 \text{ unless } k \text{ is a multiple of } p
What can we make out of this?


The Fourier Transform of the circular convolution has impulses at all (common) frequencies where the Fourier transforms of x(t) and h(t)
have impulses. The circular convolution therefore "picks out" common frequencies, at which the spectra of x(t) and h(t) are non-zero and
the strength of the impulse at that frequency is the product of the strengths of the impulses at that frequency in the original two spectra.
This result is the equivalent of the Convolution theorem in the context of periodic convolution.
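The discrete analogue of this result is the circular convolution property of the DFT, and it can be checked directly. In the sketch below (the DFT version omits the 1/T normalization used in the continuous definition above), a direct circular convolution is compared against multiplication of DFTs.

import numpy as np

# DFT analogue: the DFT of a circular convolution is the product of the
# DFTs, so only bins where both spectra are non-zero survive the product.
def circular_convolution(x, h):
    """Direct circular convolution of two equal-length sequences."""
    N = len(x)
    return np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                     for n in range(N)])

rng = np.random.default_rng(2)
x = rng.standard_normal(16)
h = rng.standard_normal(16)

direct = circular_convolution(x, h)
via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print("max error:", np.max(np.abs(direct - via_dft)))    # ~1e-14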

Parseval's Theorem
We now obtain the result equivalent to the Parseval's theorem we have already seen, in the context of periodic signals.
Let x(t) and y(t) be periodic with a common period T, with Fourier series coefficients a_k and b_k (with respect to the fundamental frequency 1/T).

Applying the Convolution theorem equivalent we have just proved on x(t) \circledast y^*(-t), we get:

\left( x \circledast y^*(-t) \right)(t) = \sum_k a_k\, b_k^*\, e^{j2\pi kt/T}

Put t = 0, to get:

\frac{1}{T}\int_0^T x(\tau)\, y^*(\tau)\, d\tau = \sum_{k=-\infty}^{\infty} a_k\, b_k^*

Compare this equation with the Parseval's theorem we had proved earlier.
If we take x = y, then T becomes the fundamental period of x and:

\frac{1}{T}\int_0^T |x(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |a_k|^2

Note the left-hand side of the above equation is the power of x(t).
Note also that the periodic convolution of x(t) with x^*(-t) yields a periodic signal with Fourier coefficients that are the modulus square of the coefficients of x(t).

Another important result

If y(t) = (x \circledast h)(t), then

\frac{1}{T}\int_0^T |y(t)|^2\, dt

represents the power of y(t), where T is a period common to x(t) and h(t).

If a_k and b_k are the Fourier series coefficients of x(t) and h(t), we have seen that the Fourier series coefficients of y(t) are a_k b_k. Applying the Parseval's theorem to y,

\frac{1}{T}\int_0^T |y(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |a_k|^2\, |b_k|^2
The Auto-correlation and the Cross-correlation.


Proceeding with our work on the Fourier transform, let us define two important functions, the Auto-correlation and the Cross-correlation.

Auto Correlation
You have seen that for a periodic signal y(t), y(t) \circledast y^*(-t) has Fourier series coefficients that are the modulus square of the Fourier series coefficients of y(t).

Let's look at an equivalent situation with aperiodic signals, i.e.: assume that x(t) is Fourier transformable, with Fourier transform X(f). Notice that the Fourier transform of x^*(-t) is X^*(f). Using the convolution theorem, we have:

x(t) * x^*(-t) \Longleftrightarrow X(f)\, X^*(f) = |X(f)|^2

The auto-correlation of x(t), denoted by R_x(t'), is defined as:

R_x(t') = \int_{-\infty}^{\infty} x(\tau)\, x^*(\tau - t')\, d\tau = \left( x(t) * x^*(-t) \right)(t')

Its Spectrum is the modulus square of the spectrum of x(t).

It can also be interpreted as the projection of x(t) on its own shifted version, shifted back by an interval t'. It can be shown that

|R_x(t')| \le R_x(0)

(note that R_x(0) = \int_{-\infty}^{\infty} |x(\tau)|^2\, d\tau is nothing but the energy in the signal x(t)).

Cross Correlation
The cross correlation between two signals x(t) and y(t) is defined as:

R_{xy}(t') = \int_{-\infty}^{\infty} x(\tau)\, y^*(\tau - t')\, d\tau

Note that the cross-correlation R_{xy}(t') is the convolution of x(t) with y^*(-t), evaluated at t'.

If y(t) = x(t - t_0), then R_{xy}(t') = R_x(t' + t_0); using the fact that the auto-correlation integral peaks at 0, this cross correlation peaks at t' = -t_0 (equivalently, R_{yx} peaks at t' = +t_0).

It may be said that the cross-correlation function gives a measure of resemblance between the shifted versions of the signals x(t) and y(t). Hence it is used in Radar and Sonar applications to measure distances. In these systems, a transmitter transmits signals which, on reflection from a target, are received by a receiver. Thus the received signal is a time-shifted version of the transmitted signal. By seeing where the cross-correlation of these two signals peaks, one can determine the time shift, and hence the distance of the target.

The Fourier transform of R_{xy}(t') is of course X(f)\, Y^*(f).
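The radar idea is easy to demonstrate numerically. The sketch below (all parameters illustrative) correlates a noisy, delayed copy of a transmitted waveform against the original; the peak of the cross-correlation lands at the true delay.

import numpy as np

# Delay estimation by cross-correlation, as in radar/sonar ranging.
rng = np.random.default_rng(3)
fs = 1000.0                      # samples per second
x = rng.standard_normal(200)     # transmitted waveform (a noise burst)

delay = 57                       # true delay in samples (unknown to receiver)
y = np.zeros(600)
y[delay:delay + len(x)] = x      # received: shifted copy ...
y += 0.5 * rng.standard_normal(len(y))   # ... plus channel noise

# 'full' correlation of y against x; lags run from -(len(x)-1) upward
r = np.correlate(y, x, mode="full")
lag = np.argmax(r) - (len(x) - 1)
print("estimated delay:", lag, "samples =", lag / fs, "s")   # ~57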

Conclusion:
In this lecture you have learnt:
Periodic convolution or circular convolution of x(.) with h(.) is denoted by x \circledast h and is defined as:

(x \circledast h)(t) = \frac{1}{T} \int_0^T x(\tau)\, h(t-\tau)\, d\tau

The Fourier Transform of x \circledast h is \sum_k a_k\, b_k\, \delta(f - k/T).

Parseval's theorem in the context of periodic signals is

\frac{1}{T}\int_0^T x(\tau)\, y^*(\tau)\, d\tau = \sum_k a_k\, b_k^*

Auto correlation is defined as

R_x(t') = \int_{-\infty}^{\infty} x(\tau)\, x^*(\tau - t')\, d\tau

Cross correlation is defined as

R_{xy}(t') = \int_{-\infty}^{\infty} x(\tau)\, y^*(\tau - t')\, d\tau

Congratulations, you have finished Lecture 19.

Module 2 : Signals in Frequency Domain


Lecture 20 : Properties of Fourier Transform

Objectives
In this lecture you will learn the following
Behaviour of the Fourier Transform w.r.t. differentiation and integration
Behaviour of the Fourier Transform w.r.t. scaling of the independent variable by a real constant a
Behaviour of the Fourier Series w.r.t. time shifting
Behaviour of the Fourier Series w.r.t. differentiation
Behaviour of the Fourier Series w.r.t. scaling of the independent variable
Behaviour of the Fourier Series w.r.t. multiplication by t

Differentiation/Integration

Start from the synthesis equation and differentiate both sides with respect to t:

x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j2\pi ft}\, df \quad\Rightarrow\quad \frac{dx}{dt} = \int_{-\infty}^{\infty} (j2\pi f)\, X(f)\, e^{j2\pi ft}\, df

Hence if x(t) \Longleftrightarrow X(f), then

\frac{dx}{dt} \Longleftrightarrow (j2\pi f)\, X(f)

Now, differentiating the analysis equation with respect to f instead gives the dual property. Hence if x(t) \Longleftrightarrow X(f), then

t\, x(t) \Longleftrightarrow \frac{j}{2\pi}\, \frac{dX}{df}

The inverse operation of taking the derivative is running the integral:

\int_{-\infty}^{t} x(\tau)\, d\tau \Longleftrightarrow \frac{X(f)}{j2\pi f} + \frac{1}{2}\, X(0)\, \delta(f)

e.g.: let x(t) = \delta(t), so that X(f) = 1. This causes a problem when X(0) \neq 0, i.e. when the signal has a non-zero average value: there is an impulse in frequency.

Example: the unit step u(t) = \int_{-\infty}^{t} \delta(\tau)\, d\tau has Fourier transform

U(f) = \frac{1}{j2\pi f} + \frac{1}{2}\, \delta(f)
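The differentiation property can be verified numerically for a smooth, fast-decaying signal. The sketch below (a Gaussian, chosen because both it and its spectrum decay fast enough for the DFT to approximate the Fourier transform well) compares the transform of the exact derivative against (j 2 pi f) times the transform of the signal.

import numpy as np

# Check: DFT(dx/dt) ~= (j 2 pi f) * DFT(x) for a Gaussian test signal.
fs = 100.0
t = np.arange(-10, 10, 1 / fs)
x = np.exp(-t ** 2)                      # Gaussian: smooth and decaying
dx = -2 * t * np.exp(-t ** 2)            # its exact derivative

f = np.fft.fftfreq(len(t), 1 / fs)
lhs = np.fft.fft(dx)                     # transform of the derivative
rhs = 1j * 2 * np.pi * f * np.fft.fft(x) # (j 2 pi f) X(f)

print("max |lhs - rhs| =", np.max(np.abs(lhs - rhs)))   # small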

Scaling of the independent variable by a real constant a

When a > 0 or a < 0,

x(at) \Longleftrightarrow \frac{1}{|a|}\, X\!\left(\frac{f}{a}\right)

Hence the scaling of the independent variable is a self-dual operation: compressing a signal in time stretches its spectrum in frequency, and vice versa.

Consider the energy of the scaled signal:

\int_{-\infty}^{\infty} |a|\, |x(at)|^2\, dt = \int_{-\infty}^{\infty} |x(u)|^2\, du

Hence, x(t) and |a|^{1/2}\, x(at) have the same energy. Therefore such scaling is called energy normalized scaling of the independent variable.

Properties of Fourier Series.


Using the properties we just proved for the Fourier Transform, we state now the corresponding properties for the Fourier series.

Time-shift
Recall that if x(t) is periodic then X(f) is a train of impulses:

X(f) = \sum_k c_k\, \delta\!\left(f - \frac{k}{T}\right)

where c_k are the Fourier series coefficients.
We know: x(t - t_0) \Longleftrightarrow e^{-j2\pi f t_0}\, X(f).
Thus if x(t) is periodic with period T, x(t - t_0) has Fourier series coefficients

c_k\, e^{-j2\pi k t_0 / T}

Differentiation
If the periodic signal is differentiable, then applying the differentiation property to its transform, the impulse at f = k/T is scaled by j2\pi k/T.
Thus if x(t) is periodic with period T, x'(t) has Fourier Series coefficients

\frac{j2\pi k}{T}\, c_k

Scaling of the independent variable

If a > 0, x(at) is periodic with period (T/a), and c_k now becomes the Fourier coefficient corresponding to frequency ka/T.

If a < 0, x(at) is periodic with period (T/|a|), and c_k now becomes the Fourier coefficient corresponding to frequency ka/T.

Multiplication by t
Multiplication by t of course will not leave a periodic signal periodic. But what we can do is multiply by t in one period, and then consider a periodic extension, i.e., if x(t) is periodic with period T, we see what the Fourier series coefficients of y(t), defined as follows, are:

y(t) = t\, x(t) \text{ for } t \in [0, T), \text{ extended periodically outside this interval.}

Note the k-th Fourier series co-efficient of x(t) is

c_k = \frac{1}{T} \int_0^T x(t)\, e^{-j2\pi kt/T}\, dt

Similarly, let d_k be the k-th Fourier series coefficient of y(t):

d_k = \frac{1}{T} \int_0^T t\, x(t)\, e^{-j2\pi kt/T}\, dt

Define C(v) = \frac{1}{T}\int_0^T x(t)\, e^{-j2\pi v t/T}\, dt for a continuous variable v, so that c_k = C(k). Differentiating under the integral sign, C'(v) = -\frac{j2\pi}{T}\, d_v. Therefore, the k-th Fourier series coefficient of y(t) is

d_k = \frac{jT}{2\pi}\, C'(k)

This idea is not of much use without knowledge of C(v) between the integers, i.e. of the underlying coefficient envelope.


Conclusion:
In this lecture you have learnt:
Properties of the Fourier Transform w.r.t. differentiation and integration
Properties of the Fourier Transform w.r.t. scaling of the independent variable by a real constant a
Properties of the Fourier Series w.r.t. time shifting
Properties of the Fourier Series w.r.t. differentiation
Properties of the Fourier Series w.r.t. scaling of the independent variable
Properties of the Fourier Series w.r.t. multiplication by t

Congratulations, you have finished Lecture 20.

Module 3 : Sampling & Reconstruction


Lecture 21: Sampling

Objectives:
Scope of this lecture:
Modern Communication would not have been possible without the development of sampling theory. Sampling theory provides the means for
processing Continuous Time (C.T.) data in the digital domain; the sampling theorem is thus the bridge between CT and DT signals. By sampling we mean
taking the instantaneous values of a CT signal at regular intervals of time. The topics covered in this lecture are listed below:
The concept of sampling of a signal .
The notion of apriori information & its use to represent a signal economically .
The most common approach towards economical signal representation.

What is Sampling?
Sampling is a methodology of representing a signal with less than the signal itself.
We can do better than just describing a signal by specifying the value of the dependent variable for each possible value of the
independent variable. The concept is explained with the following examples where 'x(t)' is the dependent variable and 't' is the
independent variable.
Let

x(t) = A_0 \sin(\omega_0 t + \phi)

Here 'x(t)' is defined by a sinusoidal relation with a phase constant \phi, amplitude A_0 and angular frequency \omega_0. Now the knowledge of these
three parameters suffices to describe 'x(t)' completely: we can compute 'x(t)' for every 't' without tabulating it against the independent
variable 't'.
Consider another example given below:

x(t) = \sum_{n=0}^{N} a_n\, t^n

Here x(t) is a polynomial in 't' of degree 'N' and can be computed completely if we know the coefficients a_0, a_1, \ldots, a_N.

Thus we observe that it was the apriori information we had that allowed us to represent these signals economically. In the first case we knew that 'x(t)' is a
pure sinusoid, and in the second case we knew that it was a polynomial of degree 'N'.
Thus, using the available apriori information to represent a signal economically is one way of defining sampling.

A Common Approach for Signal Representation:


The approach most often used to economically represent a signal is to look at the values of the dependent variable as a set of properly
chosen values of the independent variable such that these 'tuples' and the 'apriori' information can be used to reconstruct the signal
completely.
Let's say we know that some signal 'x(t)' is a pure sinusoid described by the three quantities amplitude (A_o), angular frequency (\omega_o), and phase constant (\phi). For values 't_1, t_2 and t_3' of 't' we get the following three independent equations:

x(t_i) = A_o \sin(\omega_o t_i + \phi), \quad i = 1, 2, 3

From the observed values of the signal x(t_1), x(t_2) and x(t_3) at t_1, t_2 and t_3, the parameters of the signal A_o, \omega_o and \phi can be determined.
Consider another example:
Let x(t) be a polynomial of order 'N', observed at N+1 distinct instants t_0, \ldots, t_N. The samples can be arranged as a matrix equation in which the square matrix on the LHS is the 'apriori' information:

\begin{pmatrix} 1 & t_0 & \cdots & t_0^N \\ \vdots & & & \vdots \\ 1 & t_N & \cdots & t_N^N \end{pmatrix} \begin{pmatrix} a_0 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} x(t_0) \\ \vdots \\ x(t_N) \end{pmatrix}

Thus we observe that this system can be solved, as the determinant of the square (Vandermonde) matrix on the LHS is non-zero so long as the instants t_i are distinct.
Thus, given the 'apriori' information, the entire information about the signal is contained in its value at N + 1 distinct points.
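The Vandermonde argument can be tried out directly. In the sketch below (coefficients and sample instants chosen arbitrarily for illustration), a degree-3 polynomial is recovered exactly from 4 samples.

import numpy as np

# Recover a degree-N polynomial from N+1 samples at distinct instants.
coeffs = np.array([2.0, -1.0, 0.5, 3.0])     # a_0..a_3, degree N = 3
t_samples = np.array([-1.0, 0.0, 1.0, 2.0])  # N+1 = 4 distinct instants

V = np.vander(t_samples, increasing=True)    # V[i, n] = t_i ** n
x_samples = V @ coeffs                       # the observed samples x(t_i)

recovered = np.linalg.solve(V, x_samples)    # invertible: the t_i are distinct
print(recovered)                             # [ 2.  -1.   0.5  3. ]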

You have seen two examples, where 'apriori' information, and "samples" of a signal at certain values of the independent variable help us
reconstruct the signal completely.
But if you have no apriori information, you can do no better than to represent the signal as it is.
Even knowing about the continuity of a signal is 'apriori' information. Further, we can talk of the relative measure of the 'apriori'
information. This can be done by observing the size of the set in which that signal occurs. The larger the set, the lesser the 'apriori'
information we have. For example, knowing that the signal is sinusoidal is much stronger 'apriori' information than knowing that it is
continuous, as the set of sine functions is much smaller than the set of continuous functions.
The main challenge in sampling and reconstruction is to make the best use of 'apriori' information in order to represent a signal by its
samples most economically.
In the next lecture, we focus on a special class of signals those that are Band-limited (this is the 'apriori' information we shall have) and
see how such signals can be reconstructed from their samples.

Conclusion:
From this lecture you have learnt :
Sampling is a method of using 'apriori' information about a signal to represent it economically.
The most common approach in sampling and reconstruction is to describe the signal by specifying its value at selected points on
the time axis ('t') such that this and the 'apriori' information can be used to reconstruct the signal completely.
The main challenge in sampling & reconstruction is to make the best use of the apriori information available to represent a signal
most economically.

Congratulations, you have finished Lecture 21.

Module 3 : Sampling and Reconstruction


Lecture 22 : Sampling and Reconstruction of Band-Limited Signals
Objectives
Scope of this lecture:
If a Continuous Time (C.T.) signal is to be uniquely represented and recovered from its samples, then the signal must be band-limited.
Further we have to realize that the samples must be sufficiently close and the Sampling Rate must bear certain relation with the highest
frequency component of the original signal. In this lecture, we'll see:
A note about Band-limited signals.
The analyticity of time-limited and band-limited signals.
Reconstruction of Band-limited signals - The Shannon-Whittaker-Nyquist Sampling Theorem

Band-limited signals:
A Band-limited signal is one whose Fourier Transform is non-zero on only a finite interval of the frequency axis.
Specifically, there exists a positive number B such that X(f) is non-zero only in [-B, B]. B is also called the Bandwidth of the signal.

To start off, let us first make an observation about the class of Band-limited signals.
Let's consider a Band-limited signal x(t) having a Fourier Transform X(f).
Let the interval for which X(f) is non-zero be -B \le f \le B.

Then,

x(t) = \int_{-B}^{B} X(f)\, e^{j2\pi ft}\, df

converges.

The RHS of the above equation is differentiable with respect to t any number of times, as the integral is performed on a bounded domain and the integrand is differentiable with respect to t. Further, in evaluating the derivative of the RHS, we can take the derivative inside the integral. In general,

\frac{d^n x}{dt^n} = \int_{-B}^{B} (j2\pi f)^n\, X(f)\, e^{j2\pi ft}\, df

This implies that band-limited signals are infinitely differentiable, and therefore very smooth.
We now move on to see how a Band-limited signal can be reconstructed from its samples.

Reconstruction of Time-limited Signals


Consider first a signal y(t) that is time-limited, i.e. it is non-zero only in [-T/2, T/2].
Its Fourier transform Y(f) is given by:

Y(f) = \int_{-T/2}^{T/2} y(t)\, e^{-j2\pi ft}\, dt \qquad (1)

Let \tilde{y}(t) be the periodic extension of y(t) (with period T), as shown.

Now, recall that the coefficients of the Fourier series for a periodic signal are given by:

c_k = \frac{1}{T} \int_{-T/2}^{T/2} \tilde{y}(t)\, e^{-j2\pi kt/T}\, dt \qquad (2)

Comparing (1) and (2), you will find

c_k = \frac{1}{T}\, Y\!\left(\frac{k}{T}\right)

That is, the Fourier Transform of the periodic signal \tilde{y}(t) is a train of impulses whose strengths are nothing but (1/T times) the samples of the original transform.

Therefore, given that y(t) is time-limited in [-T/2, T/2], the entire information about y(t) is contained in just equispaced samples of its Fourier transform! It is the dual of this result that is the basis of Sampling and Reconstruction of Band-limited signals: knowing the Fourier transform is limited to, say [-B, B], the entire information about the transform (and hence the signal) is contained in just uniform samples of the (time) signal!

Reconstruction of Band-limited signals


Let us now apply the dual reasoning of the previous discussion to Band-limited signals.
x(t) is Band-limited, with its Fourier transform X(f) being non-zero only in [-B, B]. The dual reasoning of the discussion in the previous slide implies that we can reconstruct X(f) perfectly in [-B, B] by using only the samples x(n/2B). Let's see how.

This time,

\frac{1}{2B}\, x\!\left(\frac{n}{2B}\right)

is the n-th Fourier series co-efficient of \tilde{X}(f), the periodic extension of X(f). (This is a Fourier series in f -- the fundamental period is 2B.)

What is the Fourier inverse of \tilde{X}(f)? Being periodic with period 2B and having the above Fourier series coefficients, \tilde{X}(f) is the Fourier transform of

x_s(t) = x(t) \sum_{n=-\infty}^{\infty} \frac{1}{2B}\, \delta\!\left(t - \frac{n}{2B}\right) = \sum_{n=-\infty}^{\infty} \frac{1}{2B}\, x\!\left(\frac{n}{2B}\right) \delta\!\left(t - \frac{n}{2B}\right)

Thus we see that if we multiply the original Band-limited signal with a periodic train of impulses (period 1/2B, with the impulse at the origin of strength 1/2B), we obtain a signal whose Fourier transform is a periodic extension of the original spectrum. So how does one retrieve the original signal from x_s(t)? We need a mechanism that will blank out the spectrum of \tilde{X}(f) outside [-B, B], i.e. multiply the spectrum with:

H(f) = 1 \text{ for } |f| \le B, \quad 0 \text{ otherwise}

In other words, we need to feed x_s(t) to an LSI system, the Fourier transform of whose impulse response is the above function (recall the convolution theorem), i.e. one whose impulse response is:

h(t) = \int_{-B}^{B} e^{j2\pi ft}\, df = \frac{\sin(2\pi B t)}{\pi t}

An LSI system with the above type of impulse response is called an Ideal Low Pass Filter.

The Sampling Theorem


On the basis of our discussion so far, we may state formally the Sampling Theorem.
Shannon-Whittaker-Nyquist Sampling Theorem:
A band-limited signal with band-width B may be reconstructed perfectly from its samples, if the signal is sampled uniformly at a rate
greater than 2B.
Is it essential for the sampling rate to be greater than 2B, or is it acceptable to have a sampling rate of exactly 2B?
What will happen if the values of X(f) at -B and B are not zero?

\tilde{X}(f) will have values at B and -B different from those of X(f) (due to the periodic extension). Thus the transform of the output of the ideal low pass filter will not match that of the original signal at -B and B.
While finite, point mismatches in the transform will not matter; problems arise if X(f) has impulses at B or -B. Then, the output of the ideal low pass filter will be different from the original signal.
For example, consider sin(t). Its frequency is 1/2\pi Hz, so it has bandwidth B = 1/2\pi. Say we sample the signal at a rate of exactly 2B = 1/\pi, i.e. at the instants t = n\pi. What happens to all our samples? The signal has value zero at all multiples of \pi! You can't possibly reconstruct the signal from these samples. What went wrong? Let's look at the Fourier Transform involved:

\sin(t) \Longleftrightarrow \frac{1}{2j}\left[ \delta\!\left(f - \frac{1}{2\pi}\right) - \delta\!\left(f + \frac{1}{2\pi}\right) \right]

Note that the periodic extension (taking the period to be 2B = 1/\pi) of this spectrum is identically zero: the two impulses sit exactly at the edges \pm B, and each shifted copy of the positive-frequency impulse lands on top of a copy of the negative one, cancelling it. Thus an ideal low pass filter cannot retrieve this spectrum from its periodic extension.

This is why the Sampling theorem says one must use a sampling rate greater than 2B, where B is the Bandwidth of the signal. Say we sample at a rate f_s > 2B. Then the Fourier transform of x_s(t) consists of copies of X(f) centered at multiples of f_s, separated by non-zero gaps.

Now, an appropriate Low-pass filter can give us back the original signal!
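The sin(t) degeneracy is worth seeing numerically. A tiny sketch (illustrative, not from the lecture): sampling at exactly 2B lands every sample on a zero crossing, while a slightly higher rate yields informative samples.

import numpy as np

# sin(t) has frequency 1/(2*pi) Hz, so "exactly 2B" means one sample
# every pi seconds -- and every such sample is zero.
n = np.arange(20)

critical = np.sin(n * np.pi)            # sampling rate exactly 2B
faster = np.sin(n * np.pi / 1.1)        # rate 10% above 2B

print(np.max(np.abs(critical)))         # ~1e-15: all samples are zero
print(np.max(np.abs(faster)))           # ~1: the samples carry information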

Conclusion:
In this lecture you have learnt:
Band-limited signals are infinitely differentiable and very smooth.
Given that 'x(t)' is Band-limited with its Fourier transform 'X(f)' being non-zero only in [-B, B], we can say that the sampled signal x_s(t) has a spectrum that is the periodic extension of 'X(f)' with period 2B.

By passing x_s(t) through an appropriate Ideal Low-pass filter, one can obtain back 'x(t)'.

Shannon-Whittaker-Nyquist Sampling Theorem:


A band-limited signal with band-width 'B' may be reconstructed perfectly from its samples, if the signal is sampled at a rate
greater than '2B'.

Congratulations, you have finished Lecture 22.

Module 3 : Sampling & Reconstruction


Lecture 23 : Low pass filter
Objectives:
Scope of lecture:
In the previous lecture we mentioned that the ideal low pass filter can be used to recover the original continuous time signal from its
samples. In this lecture, we will derive its impulse response, and see how exactly the original signal can be recovered. We'll be covering:
The impulse response of an ideal low pass filter.
Reconstruction of a band-limited signal by a low pass filter.
Problems with the ideal low pass filter.

The Ideal Low-Pass Filter


In this lecture, we examine the Ideal low pass filter and the process of reconstruction of a Band-limited signal.
Let us first see another way of interpreting the action of the Ideal low-pass filter.
IMPULSE RESPONSE OF IDEAL LOW PASS FILTER:
The Frequency response (the Fourier transform of the impulse response of an LSI system is also called its frequency response) of an ideal
low pass filter which allows a bandwidth B, is a rectangle extending from -B to +B, having a constant height as shown in the figure.

Let's look at the Impulse Response of this Ideal low pass filter, taking its height in [-B, B] to be 1. Using the formula for the inverse Fourier Transform we have:

h(t) = \int_{-B}^{B} e^{j2\pi ft}\, df = \frac{\sin(2\pi Bt)}{\pi t} = 2B\, \mathrm{sinc}(2Bt)

(note that \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u})

Thus the impulse response of an ideal low pass filter turns out to be a Sinc function, which looks like:

Reconstruction of a signal by low pass filter :


Consider a signal x(t) having bandwidth less than B.
We sample x(t) at a rate 2B and pass the sampled signal x_s(t) into an Ideal low-pass filter of bandwidth B.

The signal x(t) and the signal x_s(t), obtained by multiplying the signal by a periodic train of impulses separated by 1/2B and having strength 1/2B, are shown below.

What happens when x_s(t) is fed into the LSI system?

Let's look at the convolution of the impulse response h(t) of the Ideal low-pass filter with x_s(t), where we have seen

x_s(t) = \sum_{n=-\infty}^{\infty} \frac{1}{2B}\, x\!\left(\frac{n}{2B}\right) \delta\!\left(t - \frac{n}{2B}\right)

When x_s(t) is passed through the low pass filter, the output, which is the reconstructed signal, is nothing but the sum of copies of the impulse response h(t) shifted by integral multiples of 1/2B and multiplied by the value of x(t) at the corresponding integral multiple of 1/2B:

x_r(t) = (h * x_s)(t) = \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2B}\right) \mathrm{sinc}(2Bt - n)

Also observe that h(t) is zero at all sample points (which are integral multiples of 1/2B) except at zero. Thus, the reconstruction of x(t) can be visualized as a sum of the following signals:
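The sum-of-shifted-sincs formula can be evaluated directly. Below is a minimal sketch (illustrative parameters; the infinite sum is necessarily truncated, so a small truncation error remains) that reconstructs a 3 Hz cosine from samples taken at rate 2B = 8 Hz, using numpy's normalized sinc (np.sinc(u) = sin(pi u)/(pi u)).

import numpy as np

# Reconstruction: x(t) = sum_n x(n/2B) sinc(2B t - n), truncated to |n| <= 200.
B = 4.0                                   # signal band-limited to 3 Hz < B
Ts = 1 / (2 * B)                          # sampling interval 1/2B
n = np.arange(-200, 201)
samples = np.cos(2 * np.pi * 3 * n * Ts)  # x(t) = cos(2 pi 3 t) at t = n Ts

t = np.linspace(-1, 1, 500)               # dense grid for the reconstruction
x_rec = np.array([np.sum(samples * np.sinc(2 * B * ti - n)) for ti in t])

# small truncation error; it shrinks as more terms are kept
print("max error:", np.max(np.abs(x_rec - np.cos(2 * np.pi * 3 * t))))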

Problems with the IDEAL LOW PASS FILTER


It is infinitely Non-Causal:
The impulse response of the ideal low pass filter extends to -\infty. If the impulse response is denoted by h(t), the output signal y(t) corresponding to the input signal x(t) is given by:

y(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t - \tau)\, d\tau

The value of y at any t depends on values of x all the way to +\infty if h(t) extends to -\infty. Thus realization in real time is not possible for an Ideal low-pass filter. In other words, unless one knows the entire input, reconstruction cannot be done.

Note that if h(t) had been finitely non-causal (say zero for all t less than some -T_0), then real time realization would have been possible, subject to a time-delay (of T_0).

It is unstable:
It can be shown that

\int_{-\infty}^{\infty} |h(t)|\, dt

diverges.

Challenging Problem: Prove that the Ideal filter is unstable.

Proof:

\int_{-\infty}^{\infty} \left| \frac{\sin(2\pi Bt)}{\pi t} \right| dt = \sum_{k=-\infty}^{\infty} \int_{k/2B}^{(k+1)/2B} \left| \frac{\sin(2\pi Bt)}{\pi t} \right| dt \;\ge\; C \sum_{k=1}^{\infty} \frac{1}{k}

for some constant C > 0 (on the k-th interval, |\sin(2\pi Bt)| integrates to a fixed positive value, while 1/|t| is at least of the order 1/k), which is greater than a multiple of the harmonic series, and we know that this series diverges. Hence it is established that the Ideal Low Pass Filter is unstable.
This implies that a bounded input does not guarantee a bounded output. Thus if we build an oscillator with an Ideal Low Pass Filter, a bounded input may result in an unbounded output.

The system is not rational:


That means, it is not exactly realizable with simple well known elements .
We will get back to how these problems are tackled a little later. In the next lecture, we move on to the problem of impulses not
being physically realizable.

Conclusion:
In this lecture you have learnt:
Impulse response of an ideal low pass filter turns out to be a Sinc function.
When the sampled signal is passed through a low pass filter, the reconstructed output signal is nothing but the sum of copies of the impulse response h(t) shifted by integral multiples of 1/2B and multiplied by the value of x(t) at the corresponding integral multiple of 1/2B.
Problems with the ideal low pass filter :
1. It is infinitely non-causal.
2. It is unstable.
3. It is not a rational system.

Congratulations, you have finished Lecture 23.


Module 3 : Sampling & Reconstruction


Lecture 24 : Realistic sampling of signals
Objective:
Scope of this lecture:
In the previous lectures we have seen the role of a train of impulses in obtaining a sampled version of the Continuous Time signal 'x(t)'. Here you will see how, in reality, a train of pulses gets multiplied with the C.T. signal, resulting in the sampled signal.
Realistic sampling of signals by a train of pulses.
Method of generating a train of pulses.
Fourier transform of the sampled signal.
The condition on the train of pulses used for sampling.

Realistic sampling of signals:


Our goal of obtaining a sampled signal is achieved by multiplying the original C.T. signal with a generated train of pulses.
The two signals are multiplied practically with the help of a multiplier, as shown in the schematic below. In our analysis so far, this is how we imagined sampling of a signal.

But impulses are a mathematical concept and they cannot be realized in a real system. In practice the best we can obtain is a train of pulses, such as a saw-tooth pulse train. These pulses are generally used for creating a time-base for the operation of many electronic devices like the CRO (Cathode Ray Oscilloscope).

Practical Implementation:
Lets see how the train of pulses of the following kind can be multiplied by a signal 'x(t)'.
Consider the circuit below.

The two pulse trains p_1(t) and p_2(t) are synchronized so that when one is high the other is low, and vice versa, as shown in the figure below:

In the circuit, the output follows x(t) when p_1(t) is ON, and is zero when p_2(t) is ON. Thus the circuit effectively multiplies x(t) by a periodic pulse train.
You have just seen how we can multiply a signal x(t) with a periodic pulse train p(t) to obtain the sampled signal x_s(t) = x(t)\,p(t).
The train of pulses that we have used is shown below with respect to its amplitude and period.

Fourier series representation of p(t)


Now the Fourier Series Representation of 'p(t)' is given as:

p(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{j2\pi k f_s t}, \quad f_s = \frac{1}{T}

where the Fourier Coefficients of the series are defined as:

c_k = \frac{1}{T} \int_{T} p(t)\, e^{-j2\pi kt/T}\, dt

For a pulse train of height A and pulse width \tau (pulse centered at the origin), the constant term (k = 0) in the Fourier Series expansion is:

c_0 = \frac{A\tau}{T}

In general we can represent the k-th coefficient as:

c_k = \frac{A\tau}{T}\, \mathrm{sinc}\!\left(\frac{k\tau}{T}\right)

Simplifying the above term, we see that the envelope of the coefficients is a sinc function. Let's have a look at the envelope of the coefficients, which is shown below:

Looking at the expression for the coefficients of the Fourier Series Expansion we observe that:

The main lobe of the sinc envelope extends out to |k| = T/\tau. So if \tau/T is large, there are few coefficients in the main lobe.

As T/\tau increases, the main lobe broadens and contains more and more coefficients.

As \tau \to 0, the coefficients become constant (they all tend to A\tau/T) as the central lobe tends to infinity.

As \tau \to 0, 'p(t)' tends to the train of impulses we had started our discussion on sampling with. Notice that the observations above are consistent with this: the Fourier coefficients of the periodic train of impulses are indeed all constant and (for unit impulses) equal to the reciprocal of the period of the impulse train.
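The sinc envelope is simple to verify numerically. The sketch below (illustrative values of A, tau, T; the coefficient integral is approximated by a Riemann sum, so a small discretization error remains) computes the Fourier coefficients of a centered rectangular pulse train and compares them against the formula above.

import numpy as np

# Fourier coefficients of a rectangular pulse train vs the sinc envelope.
A, tau, T = 1.0, 0.1, 1.0
M = 10000                                   # points per period
t = np.arange(M) * T / M
# pulse centered at t = 0 (modulo T), so the coefficients come out real
p = np.where((t < tau / 2) | (t > T - tau / 2), A, 0.0)

k = np.arange(-20, 21)
# c_k = (1/T) * integral over one period of p(t) exp(-j 2 pi k t / T) dt
c = np.array([np.sum(p * np.exp(-2j * np.pi * kk * t / T)) / M
              for kk in k])

envelope = (A * tau / T) * np.sinc(k * tau / T)
print("max |c_k - envelope|:", np.max(np.abs(c - envelope)))   # ~1e-4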

The Fourier Transform of the Sampled Signal

We now see what happens to the spectrum of the continuous time signal on multiplication with the train of pulses. Having obtained the Fourier Series Expansion for the train of periodic pulses, the expression for the sampled signal can be written as:

x_s(t) = x(t)\, p(t) = \sum_{k=-\infty}^{\infty} c_k\, x(t)\, e^{j2\pi k f_s t}

Taking the Fourier transform on both sides and using the property of the Fourier transform with respect to translations in the frequency domain, we get:

X_s(f) = \sum_{k=-\infty}^{\infty} c_k\, X(f - k f_s)

This is essentially the sum of displaced copies of the original spectrum, modulated by the Fourier series coefficients of the pulse train. If 'x(t)' is Band-limited, the displaced copies in the spectrum do not overlap so long as 'f_s' is greater than twice the bandwidth of the signal. The reconstruction is then possible, theoretically, using an Ideal low-pass filter as shown below:

Thus the condition for faithful reconstruction of the original continuous time signal is:

f_s > 2 f_m

where f_m is the bandwidth of the original band-limited signal.

A General Case for the train of pulses.


Till now we have studied sampling using a rectangular train of pulses which permits the faithful reconstruction of the original signal. This
might lead us to question whether the train of pulses needs to be rectangular. Will, say a train of triangular pulses have the same effect
as the periodic rectangular train of pulses?

The answer is YES.


Let us look more closely into our analysis of sampling using a rectangular train of pulses. This signal had a Fourier series representation
and multiplication of the band-limited signal with it gave rise to a signal x s(t). The spectrum of this signal had periodic repetitions of
the original spectrum modulated by the Fourier series coefficients of the train of pulses. But this much would hold even if the rectangular
pulse train were replaced by any periodic signal (whose Fourier series exists) with the same period.
The Fourier series coefficients would definitely change but we are interested only in the central copy. As long as that is non-zero we can
still reconstruct the signal by passing it through an ideal low-pass filter. The constant Fourier series co-efficient is proportional to the
average value of the periodic signal. Thus, any periodic signal, whose Fourier series exists, and has a non-zero average, with
fundamental frequency greater than twice the bandwidth of the band-limited signal can be used to sample it; and the original signal can
be reconstructed using an ideal low-pass filter.
Of course, if the periodic signal used has a zero average, like the one shown below, an ideal low-pass filter cannot be used for
reconstruction.

Conclusion:
In this lecture you have learnt:
In practice a train of pulses is used for sampling a signal instead of a train of impulses.
Train of pulses p(t) is periodic and obeys Dirichlet's conditions. It can be represented as a Fourier series and is used in
deriving the condition for reconstruction of the original band-limited signal.
Any periodic signal whose Fourier series exists and has a non-zero average with fundamental frequency greater than twice the
bandwidth of the band-limited signal can be used to sample it and the original signal can be reconstructed using an ideal lowpass filter

Congratulations, you have finished Lecture 24.

Module 3 : Sampling and Reconstruction


Lecture 25 : Aliasing (Under Sampling)
Objectives:
Scope of this lecture:
In the previous lecture we studied that a train of pulses which obeys the Dirichlet conditions is generally used for sampling a signal.
We learnt the conditions necessary for the reconstruction of the original signal. In this lecture we will study the concept of Aliasing and the problems associated with improper sampling frequency selections.
To study what happens when the sampling rate is less than or equal to twice the bandwidth of the original signal, which is also called the Aliasing effect of under sampling.
To understand the stroboscopic effect.
Advantages of aliasing.

ALIASING EFFECT OF UNDERSAMPLING


We have seen how by sampling a Band-limited signal at a rate greater than twice the bandwidth of the signal, it is possible to
reconstruct the original signal. But what happens if the sampling rate is less than (or equal to) twice the bandwidth of the band-limited
signal?
The different translated versions of the original spectrum overlap in the spectrum of the sampled signal. This effect is called aliasing. If
we attempt to reconstruct the original signal using a low-pass filter, we might get a signal completely different from the original signal.
Lets take an example.


Example: Let us now look at a very special example. Consider a disc rotating with a single radial line marked on it, illuminated by a flashing strobe. The strobe acts as a sampling system, since it illuminates the disc for extremely brief time intervals at a periodic rate. When the strobe frequency is much higher than the rotational speed of the disc, the speed of rotation of the disc is perceived correctly. When the strobe frequency becomes equal to the rotational frequency, the line appears to stand still. When the strobe frequency becomes less than twice the rotational frequency, the rotation appears to be at a lower frequency than is actually the case. Furthermore, due to phase reversal, the disc may appear to rotate in the reverse direction. This phenomenon is known as the stroboscopic effect.
Advantages of aliasing:

1. An aliased copy of the spectrum, centered at a multiple of f_s, can be used directly as a carrier frequency for transmission, selected with a band-pass filter.

2. We can use the spectral copy at any multiple of f_s, not only the baseband copy.

3. Also, in this case, since the baseband (k = 0) copy is not the one used, the sampling pulses need not have a non-zero average value.


Conclusion:
In this lecture you have learnt:
Original signal cannot be reconstructed from undersampled signal because higher frequencies are reflected into lower frequencies
in the Fourier transform of the undersampled signal .
Stroboscopic effect helps in understanding undersampling.
Aliasing is not always undesirable . It has some advantages also.

Congratulations, you have finished Lecture 25.

Module 3 : Sampling & Reconstruction


Lecture 26 : Ideal low pass filter
Objectives:
Scope of this Lecture:
We saw that the ideal low pass filter can be used to reconstruct the original Continuous Time signal from its samples. However, due to the non-availability of an ideal low pass filter and the problems associated with it, we look at an alternative method of reconstruction using a Zero-Order-Hold filter.
How to tackle the problem of the low pass filter not being ideal.
To know about Zero-Order-Hold sampling.
To study the response of the Hold Filter.
To study the types of distortions in the Hold Filter.
Reconstruction of the signal in a Zero-Order-Hold Filter.

How can we tackle the problem of the Low pass Filter not being ideal?

Write the frequency response of the filter as H(f) = |H(f)|\, e^{j\theta(f)}, where |H(f)| is the Magnitude Response of the filter and \theta(f) is the Phase Response.

Normally we want \theta(f) to be zero. (This is what we want IDEALLY.)

The next best thing that we can do is to have some linear phase variation, i.e. a constant time delay for all frequencies.

Unfortunately, analog filters can NEVER give an exactly linear phase response.

We can design analog filters as near to an ideal filter in terms of magnitude response as we like, but cannot really make an ideal filter.
How can we solve this problem?
What we have to do is to get a Maximally Flat Sampling, i.e. heavy oversampling: f_s >> 2 f_m.

This is what we can do using Hold Filters, which is referred to as Zero - Order - Hold Sampling. It is a staircase approximation of the
analog signal.

How it works?
In practice, analog signals are sampled using zero-order-hold (ZOH) devices that hold a sample value constant until the next sample is acquired. This is also called flat-top sampling. This operation is equivalent to ideal sampling followed by a system whose impulse response is a pulse of unit height and duration T_s (to stretch the incoming pulses). This is illustrated in the figure below:


Reconstruction of signal in Zero Order Hold Filter
The analog signal (continuous-time signal) is multiplied with a periodic impulse train, referred to as the Sampling Function. A sampled signal is then obtained, as shown in the figure below.

The ideally sampled signal x_p(t) is the product of the impulse train p(t) and the analog signal x_c(t), and is written as

x_p(t) = x_c(t)\, p(t) = \sum_{n=-\infty}^{\infty} x_c(nT_s)\, \delta(t - nT_s)

The ZOH Sampled Signal x_{ZOH}(t) can be regarded as the convolution of h_o(t) and the sampled signal x_p(t):

x_{ZOH}(t) = h_o(t) * x_p(t)

Distortion in zero-order-hold sampling:
The transfer function H_o(f) of the zero-order-hold circuit is a Sinc function:

H_o(f) = T_s\, \mathrm{sinc}(f T_s)\, e^{-j\pi f T_s}

Since the spectrum of the ideally sampled signal is

X_p(f) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X(f - k f_s)

the spectrum of the zero-order-hold sampled signal x_{ZOH}(t) is given by the product

X_{ZOH}(f) = H_o(f)\, X_p(f) = \sum_{k=-\infty}^{\infty} \mathrm{sinc}(f T_s)\, e^{-j\pi f T_s}\, X(f - k f_s)

This spectrum is illustrated in the figure shown below:

Figure: Spectrum of a zero-order-hold sampled signal

The term sinc(f / f_s) attenuates the spectral images X(f - k f_s) and causes their distortion.


There are two types of distortion:
a) Aliased Component Distortion: Aliased component distortion can be corrected, if required, by cascading another, better lowpass filter.
b) Baseband Spectrum Distortion (Sinc Distortion): Baseband spectrum distortion is corrected by an Equalizer. An Equalizer is an LSI system with a Fourier Transformable impulse response which acts like an inverse 1/H(f) to another LSI system, at least in a certain range of frequencies. Equalizers are also used to correct channel imperfections in a communication system.
The higher the sampling rate f_s, the less is the distortion in the spectral image X(f) centered at the origin.
An ideal lowpass filter with unity gain over -0.5 f_s \le f \le 0.5 f_s recovers the distorted signal.

To recover X(f) with no amplitude distortion, we must use a compensating filter that negates the effect of the Sinc distortion by profiling a concave-shaped magnitude spectrum corresponding to the reciprocal of the Sinc function over the principal period |f| \le 0.5 f_s.

Figure: Spectrum of a filter that compensates for Sinc distortion

The magnitude spectrum of the compensating filter is given by

|H_{comp}(f)| = \frac{1}{|\mathrm{sinc}(f / f_s)|}, \quad |f| \le 0.5 f_s
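The droop and its compensation are easy to tabulate. A minimal sketch (illustrative sampling rate; np.sinc is the normalized sinc used throughout) prints the ZOH gain and the compensating gain across the baseband; note the gain falls to 2/pi ~ 0.637 at f_s/2.

import numpy as np

# ZOH "aperture effect": gain |sinc(f/fs)| and its inverse-sinc compensator.
fs = 8000.0                               # sampling rate (illustrative)
f = np.linspace(-0.5 * fs, 0.5 * fs, 9)   # a few points in |f| <= fs/2

zoh_gain = np.abs(np.sinc(f / fs))        # 1 at DC, 2/pi at +/- fs/2
comp_gain = 1.0 / zoh_gain                # compensating filter magnitude

for fi, g, c in zip(f, zoh_gain, comp_gain):
    print(f"f = {fi:7.1f} Hz   ZOH gain = {g:.3f}   compensator = {c:.3f}")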


Conclusion:
In this lecture you have learnt:
Analog filters can NEVER give an exactly linear phase response.
Hence, we can design analog filters as near to an ideal filter in terms of magnitude response as we like, but cannot really make an ideal filter.
Hold Filters can be used to get an approximation by Maximally Flat Sampling, i.e. f_s >> 2 f_m.
There are 2 types of distortion: Baseband Spectrum Distortion (Sinc Distortion) & Aliased Component Distortion.
The ZOH Sampled Signal x_{ZOH}(t) can be regarded as the convolution of h_o(t) and the sampled signal x_p(t).

Congratulations, you have finished Lecture 26.

Module 3 : Sampling and Reconstruction


Lecture 27 : Digital signal processing

Objectives:
Scope of this Lecture:
In this lecture we introduce concepts regarding Digital Signal Processing.
Definition of digital signal processing .
Advantages of digital signal processing.
To understand how DSP works .

What is DSP ?
Digital Signal Processing is used in wide variety of applications .
Digital : Operating by the use of discrete signals to represent data in the form of digits.
Signal : A variable parameter by which information is conveyed through an electronic circuit.
Processing : To perform operations on data according to need or instruction.
Hence,
Digital Signal Processing can be defined as :
"Changing or analysing information to a discrete sequences of numbers."
Two unique features that differentiates DSP from ordinary Digital Processing :
a) Signals from the real world.
b) Signals are discrete.
Why should we use DSP ?
a) Versatility :
Digital Systems can be reprogrammed.
Digital Systems can be ported to different hardware.
b) Repeatability :
Digital systems can be easily duplicated.
Digital systems do not depend on strict component tolerances.
Digital system responses do not drift with temperature.
c) Simplicity :
Some things can be done more easily digitally than with analogue systems.
Some common features :
They use a lot of maths (multiplying and adding signals).
They deal with signals that come from the real world.
How DSP works?
A continuous time signal is converted to a discrete time signal, processed, and then converted back to a continuous time signal. This is how the sampling theorem is used in practice. It forms the link between analog and digital signal processing, and allows us to use digital techniques to manipulate analog signals.

Conclusion:
In this lecture you have learnt:
Digital Signal Processing can be defined as "Changing or analyzing information into a discrete sequence of numbers."
DSP is a versatile, repeatable & simple way of processing signals.
The sampling theorem forms the basis of DSP.
In DSP a continuous time signal is converted to a discrete time signal, processed, and then converted back to a continuous time signal.
Congratulations, you have finished Lecture 27.

Module 3 : Sampling and Reconstruction


Lecture 28 : Discrete time Fourier transform and its Properties

Objectives:
Scope of this Lecture:
In the previous lecture we defined digital signal processing and understood its features. The general procedure is to convert the
Continuous Time signal into Discrete Time signal. Then we try to obtain back the original signal. In this lecture we will study the concepts
of Discrete time Fourier Transform and Signal Representation.
Representation of discrete time periodic signal .
Discrete Time Fourier Transform (DTFT) of an aperiodic discrete time signal .
Another way of representing DTFT of a periodic discrete time signal.
Properties of DTFT

Representation of Discrete periodic signal.


A periodic discrete time signal x[n] with period N can be represented as a Fourier series:

x[n] = \sum_{k=\langle N \rangle} a_k\, e^{j2\pi kn/N} \qquad \text{(i)}

where

a_k = \frac{1}{N} \sum_{n=\langle N \rangle} x[n]\, e^{-j2\pi kn/N} \qquad \text{(ii)}

Here the summation \langle N \rangle ranges over any consecutive N integers, where N is the period of the discrete time signal x[n].
Equation (i) is called the Synthesis Equation and equation (ii) is called the Analysis Equation.
Now since x[n] is periodic with period N, the Fourier series coefficients are related as:

a_{k+N} = a_k
Discrete Time Fourier Transform of an aperiodic discrete time signal


Given a general aperiodic signal x[n] of finite duration, that is, x[n] = 0 outside some interval of length N, we can construct a periodic signal \tilde{x}[n] with period N for which x[n] is one period. As we chose the period N to be larger than the duration of x[n], \tilde{x}[n] is identical to x[n] over one period, and \tilde{x}[n] \to x[n] for any finite value of n as N \to \infty.

The Fourier series representation of \tilde{x}[n] is:

\tilde{x}[n] = \sum_{k=\langle N \rangle} a_k\, e^{j2\pi kn/N}, \qquad a_k = \frac{1}{N} \sum_{n=\langle N \rangle} \tilde{x}[n]\, e^{-j2\pi kn/N}

Since \tilde{x}[n] = x[n] over a period that includes the interval on which x[n] is non-zero, it is convenient to choose the interval of summation to be this period, so that \tilde{x}[n] can be replaced by x[n] in the summation. Therefore,

a_k = \frac{1}{N} \sum_{n=-\infty}^{\infty} x[n]\, e^{-j2\pi kn/N} = \frac{1}{N}\, X(e^{j\omega})\Big|_{\omega = 2\pi k/N}

where we define the Discrete Time Fourier Transform (DTFT) of x[n] as:

X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}

That is, the Fourier series coefficients of \tilde{x}[n] are (1/N times) equally spaced samples of the DTFT of x[n].

Another way of representing DTFT of a periodic discrete signal


In continuous time, the Fourier transform of e^{j2\pi f_0 t} is an impulse at f_0. However, in discrete time the DTFT is periodic in \omega with period 2\pi, and the DTFT of e^{j\omega_0 n} is a train of impulses:

e^{j\omega_0 n} \Longleftrightarrow \sum_{l=-\infty}^{\infty} 2\pi\, \delta(\omega - \omega_0 - 2\pi l)

Consider a periodic sequence x[n] with period N and with Fourier series representation

x[n] = \sum_{k=\langle N \rangle} a_k\, e^{j2\pi kn/N}

Then the discrete time Fourier Transform of the periodic signal x[n] with period N can be written as:

X(e^{j\omega}) = \sum_{k=-\infty}^{\infty} 2\pi\, a_k\, \delta\!\left(\omega - \frac{2\pi k}{N}\right)

(where the coefficients are extended periodically, a_{k+N} = a_k).

Properties of DTFT
Periodicity:

X(e^{j(\omega + 2\pi)}) = X(e^{j\omega})

Linearity:
The DTFT is linear. If x_1[n] \Longleftrightarrow X_1(e^{j\omega}) and x_2[n] \Longleftrightarrow X_2(e^{j\omega}), then

a\,x_1[n] + b\,x_2[n] \Longleftrightarrow a\,X_1(e^{j\omega}) + b\,X_2(e^{j\omega})

Stability:
Viewed as a system, the DTFT is unstable, i.e. a bounded input x[n] can give an unbounded output.
Example: If x[n] = 1 for all n, then the DTFT sum diverges, i.e. unbounded output.

Time Shifting and Frequency Shifting:
If x[n] \Longleftrightarrow X(e^{j\omega}), then

x[n - n_0] \Longleftrightarrow e^{-j\omega n_0}\, X(e^{j\omega})

and

e^{j\omega_0 n}\, x[n] \Longleftrightarrow X(e^{j(\omega - \omega_0)})

Time and Frequency Scaling:

Time reversal:
Let us find the DTFT of x[-n]:

\sum_n x[-n]\, e^{-j\omega n} = \sum_m x[m]\, e^{j\omega m} = X(e^{-j\omega})

Time expansion:
It is very difficult for us to define x[an] when a is not an integer. However, if a is an integer other than 1 or -1, then the original signal is not just speeded up: since n can take only integer values, the resulting signal consists of samples of x[n] at an.
If k is a positive integer, and we define the signal

x_{(k)}[n] = x[n/k] \text{ if } n \text{ is a multiple of } k, \quad 0 \text{ otherwise}

then

x_{(k)}[n] \Longleftrightarrow X(e^{jk\omega})

Convolution Property:
Let h[n] be the impulse response of a discrete time LSI system. Then the frequency response of the LSI system is

H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h[n]\, e^{-j\omega n}

Now let X(e^{j\omega}) and Y(e^{j\omega}) be the DTFTs of the input x[n] and the output y[n]. If

y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]

then

Y(e^{j\omega}) = X(e^{j\omega})\, H(e^{j\omega})

Proof:

Y(e^{j\omega}) = \sum_n \left( \sum_k x[k]\, h[n-k] \right) e^{-j\omega n}

Now put n - k = m; for fixed k,

Y(e^{j\omega}) = \sum_k x[k]\, e^{-j\omega k} \sum_m h[m]\, e^{-j\omega m} = X(e^{j\omega})\, H(e^{j\omega})

This is a very useful result.
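The property can be checked by evaluating the DTFT sums explicitly on a frequency grid. A minimal sketch (finite random sequences, chosen only for illustration):

import numpy as np

# DTFT convolution property: DTFT(x * h) = DTFT(x) . DTFT(h).
def dtft(x, w):
    """DTFT of a finite sequence x[0..N-1] at frequencies w (rad/sample)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

rng = np.random.default_rng(4)
x = rng.standard_normal(12)
h = rng.standard_normal(8)
y = np.convolve(x, h)                       # y = x * h

w = np.linspace(-np.pi, np.pi, 7)
print(np.max(np.abs(dtft(y, w) - dtft(x, w) * dtft(h, w))))   # ~1e-14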

Symmetry Property:
If x[n] \Longleftrightarrow X(e^{j\omega}), then

x^*[n] \Longleftrightarrow X^*(e^{-j\omega})

Proof:

\sum_n x^*[n]\, e^{-j\omega n} = \left( \sum_n x[n]\, e^{j\omega n} \right)^* = X^*(e^{-j\omega})

Furthermore, if x[n] is real, then

X(e^{j\omega}) = X^*(e^{-j\omega})
The DTFT of the Cross-Correlation Sequence between x[n] and h[n]

If the DTFT of the cross-correlation sequence r_{xh}[m] = \sum_n x[n]\, h^*[n-m] between x[n] and h[n] exists, then

R_{xh}(e^{j\omega}) = X(e^{j\omega})\, H^*(e^{j\omega})

In particular, for h[n] = x[n], the DTFT of the auto-correlation sequence is |X(e^{j\omega})|^2.

Conclusion:
In this lecture you have learnt:
For a Discrete Time Periodic Signal the Fourier Coefficients are related as a_{k+N} = a_k.
The DTFT, viewed as a system, is unstable, which means that a bounded 'x[n]' can give an unbounded output.
We saw its time shifting & frequency shifting properties & also time scaling & frequency scaling.
The Convolution Property for an LSI system is given as: if 'x[n]' is the input to a system with impulse response 'h[n]', then the DTFT of the output 'y[n]' is the multiplication of the DTFTs of 'x[n]' and 'h[n]'.

We saw symmetry properties and DTFT of cross-correlation between 'x[n]' and 'h[n]' .

Congratulations, you have finished Lecture 28.

Module 3 : Sampling and Reconstruction


Lecture 29 : Inverse Discrete Time Fourier Transform

Objectives:
Scope of this Lecture:
In the previous lectures we built up concepts of sampling , discrete time signal processing and Discrete Fourier Transform. The next
logical step is to study the Inverse Discrete Fourier Transform. In this lecture we indulge in the various IDFT related concepts.
The equation for Inverse Discrete Time Fourier Transform for a discrete periodic signal .
Inverse DTFT for the Cross-Correlation between 'x[n]' and 'h[n]'.
Parseval's Relation For discrete time periodic signals .

Inverse DTFT:
The DTFT of a discrete signal x[n] is

X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}

and is periodic in \omega with period 2\pi. Viewed as a Fourier series in \omega, the coefficients of this periodic function are given by x[-n]; equivalently,

x[n] = \frac{1}{2\pi} \int_{2\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega

The above equation is referred to as the Inverse DTFT equation.

Now, the inverse DTFT of the cross-correlation spectrum X(e^{j\omega})\, H^*(e^{j\omega}) can be written as:

\frac{1}{2\pi} \int_{2\pi} X(e^{j\omega})\, H^*(e^{j\omega})\, e^{j\omega m}\, d\omega = \sum_{n=-\infty}^{\infty} x[n]\, h^*[n-m]

Put m = 0; then

\sum_{n=-\infty}^{\infty} x[n]\, h^*[n] = \frac{1}{2\pi} \int_{2\pi} X(e^{j\omega})\, H^*(e^{j\omega})\, d\omega

i.e. the dot product of the sequences x[n] and h[n] equals the (normalized) dot product of the DTFTs of x[n] and h[n].
In particular, put x[n] = h[n]; then

\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{2\pi} |X(e^{j\omega})|^2\, d\omega

The above is Parseval's relation for discrete time signals.
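This relation can be checked numerically by approximating the integral over 2 pi with an average over a dense, uniform frequency grid (a sketch; for a finite-length sequence the grid average is in fact exact once the grid is fine enough):

import numpy as np

# DTFT Parseval: sum_n |x[n]|^2 = (1/2pi) * integral over 2pi of |X|^2 dw.
rng = np.random.default_rng(5)
x = rng.standard_normal(32)

w = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
n = np.arange(len(x))
X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

print(np.sum(x ** 2))             # energy in the time domain
print(np.mean(np.abs(X) ** 2))    # grid average == (1/2pi) integral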

Conclusion:
In this lecture you have learnt:
The DTFT is periodic in \omega with period 2\pi, and the Inverse DTFT integral is taken over one such period.

The Inverse DTFT equation:

x[n] = \frac{1}{2\pi} \int_{2\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega

Parseval's Relation for discrete time signals:

\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{2\pi} |X(e^{j\omega})|^2\, d\omega

Congratulations, you have finished Lecture 29.

Module 4 : Laplace and Z Transform


Lecture 30 : Laplace Transform
Objectives:
Scope of this lecture:
Laplace Transform is a powerful tool for analysis and design of Continuous Time signals and systems. The Laplace Transform differs
from Fourier Transform because it covers a broader class of CT signals and systems which may or may not be stable. It can also be used
for obtaining solutions to integro-differential equations of C.T. systems.
First ,we shall look at the definition of Laplace Transform .
Then we will understand the meaning of ROC (Region of Convergence) and the need to consider it.
We will solve sufficient examples for an in depth understanding of concepts covered.

Introduction
Till now we have been dealing with continuous and discrete domains, and we studied the relationships involved using transform domains. A system actually operates in its natural domain, but it can be well understood in transform domains. The advantage of transform domains is that a few of the properties which may not be apparent in the natural domain become clear in a transform domain. Most LTI systems act in the time domain, but they are often more clearly described in the frequency domain.
Till now, we have seen the importance of Fourier analysis in solving many problems involving signals and LTI systems. Now, we shall deal with signals and systems which do not have a Fourier transform.

But what was so special about the Fourier transform in the case of LSI systems?
We found that the continuous-time Fourier transform (F.T.) is a tool to represent signals as linear combinations of complex exponentials. The exponentials are of the form e^{st} with s = j\omega, and e^{j\omega t} is an eigenfunction of the LSI system. Also, we note that the Fourier Transform only exists for signals which are absolutely integrable.
This observation leads to a generalization of the continuous-time Fourier transform, by considering a broader class of signals using the powerful tool of the "Laplace transform". It will be trivial to note that the L.T. can be used to get the discrete-time representation using relevant substitutions. This leads to a link with the Z-Transform and is very handy for digital filter realization/design. Also, it will be helpful to note that the properties of the Laplace Transform and Z-Transform are quite similar.
With this introduction let us go on to formally defining both the Laplace and Z-transform.

Definition of Laplace transform:


The response of a Linear Time Invariant system with impulse response h(t) to a complex exponential input of the form e^{st} can be represented in the following way:

y(t) = \int_{-\infty}^{\infty} h(\tau)\, e^{s(t-\tau)}\, d\tau = e^{st} \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau = e^{st}\, H(s)

Let

H(s) = \int_{-\infty}^{\infty} h(t)\, e^{-st}\, dt

where H(s) is known as the Laplace Transform of h(t). We notice that the limits are from -\infty to +\infty, and hence this transform is also referred to as the Bilateral or Double-sided Laplace Transform. There exists a one-to-one correspondence between h(t) and H(s), i.e. between the original domain and the transformed domain. Therefore the L.T. is a unique transformation and the 'Inverse Laplace Transform' also exists.

Note that e^{st} is an eigenfunction of the LSI system only if H(s) converges. The range of values of s for which the expression described above is finite is called the Region of Convergence (ROC). For example, for h(t) = u(t), H(s) = 1/s, and the region of convergence is Re(s) > 0.
Thus, the Laplace transform has two parts: the expression and the region of convergence, respectively. The region of convergence of the Laplace transform is essentially determined by Re(s). From here onwards we will consider simple examples for a better understanding of the ROC.

Example of Laplace Transform


Consider the impulse response h(t) = e^t u(t).
We notice that by multiplying by the term u(t) we are effectively considering the unilateral Laplace Transform, whereby the limits run from 0 to +\infty.
Also, we notice that h(t) is not Fourier transformable, as it is not absolutely integrable.
Consider the Laplace transform of h(t) as shown below:

H(s) = \int_{0}^{\infty} e^{t}\, e^{-st}\, dt = \int_{0}^{\infty} e^{(1-s)t}\, dt

As stated earlier, the symbol s is a complex number and is defined as s = \sigma + j\omega.

Substituting s in the above equation we get:

H(s) = \int_{0}^{\infty} e^{(1-\sigma)t}\, e^{-j\omega t}\, dt

Observing the above equation closely, we realize that H(s) converges if and only if e^{(1-\sigma)t} decays as t \to \infty.

For a decaying e^{(1-\sigma)t} it is essential that (1-\sigma) < 0. This implies that \sigma > 1, which means that the real part of 's' is greater than '1', which is also denoted as Re(s) > 1.

This is what defines the "Region of Convergence" in the complex s-plane. The ROC of the Laplace Transform is always determined by Re(s). The ROC in general gives us an idea of the stability of a system and is also related to the pole-zero plot of the system. It is essential to note that the ROC never includes poles.

Evaluation of the integral yields:

H(s) = \left[ \frac{e^{(1-s)t}}{1-s} \right]_{0}^{\infty} = \frac{1}{s-1}, \quad \mathrm{Re}(s) > 1

We observe that there is a single pole at s = 1. Since the Region of Convergence cannot contain poles, the ROC starts from the vertical line through '1' and tends outwards to infinity.
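This example can also be checked symbolically, assuming SymPy is available; its laplace_transform routine returns the expression together with the abscissa of convergence (the left edge of the ROC).

import sympy as sp

# Symbolic check: L{e^t u(t)} = 1/(s - 1), ROC Re(s) > 1.
t, s = sp.symbols('t s')
F, abscissa, cond = sp.laplace_transform(sp.exp(t), t, s)
print(F)          # 1/(s - 1)
print(abscissa)   # 1  -> ROC is Re(s) > 1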
e^{st} in physical systems:
We consider the real part of e^{st}, where s = \sigma + j\omega:

\mathrm{Re}\{e^{st}\} = e^{\sigma t} \cos(\omega t)

Such a response is visible in RLC (Resistance-Inductance-Capacitance) systems. It is not only visible in the electrical field but also in other disciplines, like mechanical systems. In such cases the above expression is multiplied by a polynomial or appears in a combination of such expressions.

What is the need to consider region of convergence while determining the Laplace transform?
If we consider the signals e-at u(t) and -e-at u(-t) we note that although the signals are differing, their Laplace Transforms are identical
which is 1/( s+a). Thus we conclude that to distinguish L.T's uniquely their ROC's must be specified. Further from the ROC we can
define many important conclusions which
A few important properties of the ROC are listed below:
The ROC of F(S) consists of strips parallel to

in complex variable plane (S-plane).

We know that the ROC is dependent only on the real parts of 'S' which is '' , therefore the property.

The ROC does not contain any poles.


Since if this happens then the Laplace Transform becomes infinity. For example, if the F(S)= 1/(s-1) then the ROC cannot
contain s =1 because at this point the L.T becomes infinity.

If h(t) is a time-limited signal and is Laplace transformable, then its ROC is the entire s-plane.
For example, the ROC of a finite-duration pulse such as u(t) − u(t−T) is the entire s-plane, since the defining integral runs over a finite interval and converges for every s.

The region of convergence is always between two vertical lines in the s-plane. These vertical lines need not lie in a finite region. Note also that the ROC is always simply-connected, never multiply-connected, in the s-plane.
This fact can be explained by the following illustration. Consider h(t) = e^t for all t, written as e^t u(t) + e^t u(−t).
Let H1(s) and H2(s) be the respective Laplace transforms of the first and second terms. H1(s) and H2(s) converge in the regions Re(s) > 1 and Re(s) < 1 respectively. But h(t) does not have any Laplace transform, since there is no common ROC where both H1(s) and H2(s) converge.

Consider another example on ROC:

Let us consider another example which illustrates the need to specify the ROC in order to completely define the Laplace transform of a given function:
h_1(t) = e^t u(t),    h_2(t) = −e^t u(−t)
We will use the concepts gathered till now to determine the ROCs of the above signals after computing the respective Laplace Transforms.

H_1(s) = ∫_0^{+∞} e^{(1−s)t} dt = 1/(s−1)   [the ROC for this is given by Re(s) > 1]

H_2(s) = −∫_{−∞}^0 e^{(1−s)t} dt = 1/(s−1)   [the ROC of H_2(s) is given by Re(s) < 1, since convergence of this integral requires Re(1−s) > 0]

Thus, two different functions may have the same expression but correspond to different ROCs: H_1(s) converges to the right of the vertical line Re(s) = 1, and H_2(s) to its left.
Conclusion:
In this lecture you have learnt:
H(s) = ∫_{−∞}^{+∞} h(t) e^{−st} dt is called the Laplace Transform of h(t), and the Region of Convergence (ROC) of the Laplace transform is essentially determined by the real part of the complex number s, denoted Re(s).
Two different functions may have the same Laplace Transform expression, so the only way to describe them uniquely is by means of the ROC.
The ROC consists of strips parallel to the jω-axis; it does not include poles and is always simply-connected.
Congratulations, you have finished Lecture 30.

Module 4 : Laplace and Z Transform


Lecture 31 : Z Transform and Region of Convergence
Objectives:
Scope of this lecture:
We have already seen the use of the Fourier Transform and the Laplace Transform for the study of Continuous Time (C.T.) signals and systems. Now our interest lies in frequency-domain analysis and design of Discrete Time (D.T.) signals and systems. The Z-Transform provides a valuable technique for frequency-domain analysis of D.T. signals and design of DT-LTI systems. Further, the Z-Transform offers an extremely convenient and compact way to describe digital signals and processors. Numerical problems are presented for a better understanding of the relevant concepts involved.
We shall look at the definition of the Z-transform.
The need to consider the Region of Convergence (ROC), with suitable illustrations.
The nature of ROCs in both the Laplace Transform and Z-transform domains.

Z-transform
The response of a linear time-invariant system with impulse response h[n] to a complex exponential input of the form z^n can be represented in the following way:
y[n] = H(z) z^n,   where   H(z) = Σ_{n=−∞}^{+∞} h[n] z^{−n}.
In the complex z-plane, we take the circle with unit radius centered at the origin. When we replace z by e^{jw}, i.e. evaluate H(z) on this circle, we obtain H(w), the Discrete Time Fourier Transform, which is periodic with period 2π with respect to w. The periodicity of H(w) thus appears in the form of the unit circle, one period corresponding to one traversal of it.
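A small numerical sketch of this idea, assuming Python with numpy (the sequence h[n] = (1/2)^n u[n] and the truncation length are arbitrary choices): evaluating H(z) on the unit circle reproduces the periodic DTFT.

    import numpy as np

    n = np.arange(0, 200)             # truncation of the right-sided sequence
    h = 0.5 ** n                      # h[n] = (1/2)^n u[n], so H(z) = 1/(1 - 0.5 z^{-1})
    w = np.linspace(-np.pi, np.pi, 512)
    z = np.exp(1j * w)                # points on the unit circle z = e^{jw}

    H_num = np.array([np.sum(h * zk ** (-n)) for zk in z])
    H_closed = 1.0 / (1.0 - 0.5 / z)
    print(np.max(np.abs(H_num - H_closed)))   # tiny truncation error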

Nature of Region of Convergence


Laplace Transform:
The ROC of the Laplace transform X(s) of a two-sided signal lies between two vertical lines Re(s) = σ_1 and Re(s) = σ_2 in the s-plane; σ_1 and σ_2 depend only on the real part of s. For a right-sided signal the ROC is of the form Re(s) > σ_1, and the corresponding ROC is referred to as a right-half plane. Similarly, for a left-sided signal the ROC is Re(s) < σ_2; this ROC is referred to as a left-half plane. When x(t) is two-sided, i.e. of infinite extent for both t > 0 and t < 0, both σ_1 and σ_2 are finite and the ROC thus turns out to be a vertical strip in the s-plane.

Z-transform:
The ROC of X(z) of a two-sided signal consists of a ring in the z-plane centered about the origin; the inner radius r_1 and outer radius r_2 depend only on the magnitude of z. As in the case of the Laplace transform, the ROC is |z| > r_1 for a right-sided sequence and |z| < r_2 for a left-sided sequence. If x[n] is two-sided, the ROC will consist of a ring with both r_1 and r_2 finite and non-zero.

Conclusion:
In this lecture you have learnt the definition of the Z-transform, the need to consider its Region of Convergence, and the nature of ROCs in the Laplace and Z-transform domains.

Congratulations, you have finished Lecture 31.

Module 4 : Laplace and Z Transform


Lecture 32 : Properties of Laplace and Z Transform
Objectives:
Scope of this Lecture:
In the previous lecture you have learnt about the ROC conditions for the Laplace Transform as well as the Z-Transform, along with their respective plots. The LT as well as the ZT have several properties, which will be covered in detail in this lecture.
We shall look at the properties of Laplace and Z-transform.
The properties in general are Linearity, Differentiation in Time Domain, Time Shift and Time Scaling.
Later, we shall use the above properties to determine Laplace transform and Z-transform.

Properties of Laplace and Z-Transform


1) Linearity
For the Laplace transform:
If x_1(t) ↔ X_1(s) with ROC R_1 and x_2(t) ↔ X_2(s) with ROC R_2,
then a x_1(t) + b x_2(t) ↔ a X_1(s) + b X_2(s), with ROC containing R_1 ∩ R_2.
The ROC of X(s) is at least the intersection of R_1 and R_2, which could be empty, in which case x(t) has no Laplace transform.

For the z-transform:
If x_1[n] ↔ X_1(z) with ROC = R_1 and x_2[n] ↔ X_2(z) with ROC = R_2,
then a x_1[n] + b x_2[n] ↔ a X_1(z) + b X_2(z), with ROC containing R_1 ∩ R_2.
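A quick symbolic check of linearity, as a sketch assuming Python with sympy (the two exponentials and the weights 3 and 2 are arbitrary choices):

    import sympy as sp

    t, s = sp.symbols('t s')

    X1 = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)      # 1/(s + 1)
    X2 = sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True)  # 1/(s + 2)
    X = sp.laplace_transform(3 * sp.exp(-t) + 2 * sp.exp(-2 * t), t, s, noconds=True)
    print(sp.simplify(X - (3 * X1 + 2 * X2)))   # 0: transform of the sum is the sum of transforms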

2) Differentiation in the time domain
If x(t) ↔ X(s) with ROC = R,
then dx(t)/dt ↔ s X(s), with ROC containing R.
This property follows by integration by parts. Specifically, let
X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt.
Then
∫_{−∞}^{+∞} (dx(t)/dt) e^{−st} dt = [x(t) e^{−st}]_{−∞}^{+∞} + s ∫_{−∞}^{+∞} x(t) e^{−st} dt,
and hence, since the boundary term vanishes for s inside the ROC, the transform of dx/dt is s X(s).
The ROC of sX(s) includes the ROC of X(s) and may be larger (for instance, when X(s) has a pole at s = 0 that the factor s cancels).
This property holds for z-transform as well.
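The property can be checked symbolically, as a sketch assuming sympy; note that sympy computes the unilateral transform, for which L{x'(t)} = s X(s) − x(0), whereas the bilateral statement above has no x(0) term. The test signal e^{−2t} is an arbitrary choice:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s')

    x = sp.exp(-2 * t)                                    # test signal x(t) = e^{-2t}
    X = sp.laplace_transform(x, t, s, noconds=True)       # 1/(s + 2)
    dX = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
    print(sp.simplify(dX - (s * X - x.subs(t, 0))))       # 0: confirms L{x'} = s X(s) - x(0)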

3) Time Shift
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then x(t − t_0) ↔ e^{−s t_0} X(s), with ROC = R.

For the z-transform:
If x[n] ↔ X(z) with ROC = R,
then x[n − n_0] ↔ z^{−n_0} X(z), with ROC = R except for the possible addition or deletion of the origin or infinity.
Because of the multiplication by z^{−n_0}, for n_0 > 0 poles will be introduced at z = 0, which may cancel corresponding zeroes of X(z) at z = 0. In this case the ROC of z^{−n_0} X(z) equals the ROC of X(z) but with the origin deleted. Similarly, if n_0 < 0, the point at infinity may get deleted from the ROC of z^{−n_0} X(z).

4) Time Scaling
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then x(at) ↔ (1/|a|) X(s/a), with ROC = aR (s lies in the scaled ROC precisely when s/a lies in R). This follows by substituting τ = at in the defining integral.
A special case of time scaling is time reversal: when a = −1, x(−t) ↔ X(−s) with ROC = −R.

For the z-transform:
The continuous-time concept of time scaling does not directly extend to discrete time. However, the discrete-time concept of time expansion, i.e. of inserting a number of zeroes between successive values of a discrete-time sequence, can be defined. The new sequence can be defined as
x_(k)[n] = x[n / k]   if n is a multiple of k
         = 0          if n is not a multiple of k
and has k − 1 zeroes inserted between successive values of the original sequence. This is known as upsampling by k. If
x[n] ↔ X(z) with ROC = R,
then x_(k)[n] ↔ X(z^k) with ROC = R^{1/k}, i.e. z lies in the new ROC precisely when z^k lies in R. Indeed,
X(z^k) = Σ_{n=−∞}^{+∞} x[n] (z^k)^{−n} = Σ_{n=−∞}^{+∞} x[n] z^{−nk},
which is exactly the z-transform of the expanded sequence; a numerical sketch follows.
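Here is that sketch, assuming Python with numpy (the sequence, the factor k = 3 and the test point z0 are arbitrary choices):

    import numpy as np

    def upsample(x, k):
        # insert k - 1 zeros between successive samples: x_(k)[n] = x[n/k] when k divides n
        y = np.zeros(len(x) * k, dtype=x.dtype)
        y[::k] = x
        return y

    def zt(x, z):
        # z-transform of a finite-length sequence: X(z) = sum over n of x[n] z^{-n}
        n = np.arange(len(x))
        return np.sum(x * z ** (-n))

    x = np.array([1.0, 2.0, 3.0])
    z0 = 0.7 * np.exp(1j * 0.4)
    print(zt(upsample(x, 3), z0))   # z-transform of the expanded sequence at z0
    print(zt(x, z0 ** 3))           # X(z0^3): the two printed values agree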


5) Shifting in the s-domain / z-domain
For the Laplace transform:
If x(t) ↔ X(s) with ROC = R,
then e^{αt} x(t) ↔ X(s − α), with the new ROC being the set of s for which Re(s − α) lies in ROC(X(·)), i.e. R shifted by Re(α).

For the z-transform:
If x[n] ↔ X(z) with ROC = R,
then α^n x[n] ↔ X(z/α), with the new ROC being the set of z for which z/α lies in ROC(X(·)), i.e. R scaled by |α|.

Conclusion:
In this lecture you have learnt:
If x(t) ↔ X(s) with ROC = R, then
1. dx(t)/dt ↔ s X(s), with ROC containing R.
2. x(t − t_0) ↔ e^{−s t_0} X(s), with ROC = R.
3. x(at) ↔ (1/|a|) X(s/a), with ROC = aR.
4. e^{αt} x(t) ↔ X(s − α), where the new ROC is the set of s with Re(s − α) in ROC(X(·)).
5. If y(t) = (x*h)(t), then Y(s) = H(s)·X(s), where the ROC of Y(s) contains ROC(X) ∩ ROC(H).

If x[n] ↔ X(z) with ROC = R, then
1. x[n − n_0] ↔ z^{−n_0} X(z), with ROC = R except for the possible addition or deletion of the origin or infinity.
2. The continuous-time concept of time scaling does not directly extend to discrete time; read upsampling for the reason.
3. Other properties of the z-transform are similar to those of the Laplace transform.

Congratulations, you have finished Lecture 32.

Module 4 : Laplace and Z Transform


Lecture 33 : Inverse Laplace and Z Transform
Objectives:
Scope of this lecture:
In the previous lecture we saw the various properties of the Laplace Transform as well as the Z-Transform. We require these properties when solving numerical problems related to the LT as well as the ZT. We now move on to the second phase, which covers the Inverse LT and ZT definitions. We will also study the relationship between the Inverse LT and ZT and the similarity in their properties.
Relationship of the Laplace Transform with the Fourier Transform.
Inverse Laplace Transform and its relationship with the Inverse Fourier Transform.
Inverse Z-Transform and its relationship with the Inverse Laplace Transform.

Inverse Laplace Transform:


We know that there is a one to one correspondence between the time domain signal x(t) and its Laplace Transform X(s). Obtaining the
signal 'x(t)' when 'X(s)' is known is called Inverse Laplace Transform (ILT). For ready reference , LT and ILT pair is given below :
X(s) = LT { x(t) } Forward Transform
x(t) = ILT { X(s) } The Inverse Transform
Some of the methods available for obtaining 'x(t)' from 'X(s)' are :
The complex inversion formulae.
Partial Fractions.
Series method.
Method of differential equations
In general:
If the Laplace Transform of x(t) is X(s), then the Inverse Laplace Transform of X(s) is given by
x(t) = (1/2πj) ∫_C X(s) e^{st} ds,
where C is any vertical line in the s-plane parallel to the imaginary axis and lying within the ROC.

Relationship between Laplace Transform and Fourier Transform


The Fourier Transform for Continuous Time signals is in fact a special case of the Laplace Transform. This fact and the resulting relation between the LT and the FT are explained below.
We know that the Laplace Transform of a signal x(t) is given by
X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt,   with s = σ + jω.
If we set σ = 0, this means that we are only considering the vertical line σ = 0, and s becomes completely imaginary. Thus we have
X(jω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt,
which is precisely the Fourier Transform of x(t). From the above discussion it is clear that the LT reduces to the FT when the complex variable consists only of its imaginary part. Thus the LT reduces to the FT along the jω (imaginary) axis.

Review:
If the imaginary axis lies in the Region of Convergence of X(s) and the Laplace Transform is evaluated along it, the result is the Fourier Transform of x(t).
Relationship between the inverse Laplace Transform and the inverse Fourier Transform:
Similarly, while evaluating the Inverse Laplace Transform of X(s), if we take the line C to be the imaginary axis (provided it lies in the Region of Convergence), the integral becomes
x(t) = (1/2π) ∫_{−∞}^{+∞} X(jω) e^{jωt} dω.
Thus we notice that we get the Inverse Fourier Transform, as expected.
This tells us that there is a close relationship between the Laplace Transform and the Fourier Transform. In fact, the Laplace Transform is a generalization of the Fourier Transform; that is, the Fourier Transform is a special case of the Laplace Transform. The Laplace Transform not only provides additional tools and insights for signals and systems which can be analyzed using the Fourier Transform, but can also be applied in many important contexts in which the Fourier Transform is not applicable. For example, the Laplace Transform can be applied to unstable signals, such as exponentials growing with time, to which the Fourier Transform cannot be applied since they do not have finite energy.

Inverse Z - Transform
We know that there is a one to one correspondence between a sequence x[n] and its ZT which is X[z].
Obtaining the sequence 'x[n]' when 'X[z]' is known is called Inverse Z - Transform.
For a ready reference , the ZT and IZT pair is given below.
X[z] = Z { x[n] } Forward Z - Transform
x[n] = Z -1 { X[z] } Inverse Z - Transform
For a discrete variable signal x[n], if its z-Transform is X(z), then the inverse z-Transform of X(z) is given by
x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz,
where C is any closed contour which encircles the origin and lies ENTIRELY in the Region of Convergence.
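The contour integral can be evaluated numerically on the unit circle, as a sketch assuming Python with numpy; X(z) = z/(z − 0.5) with ROC |z| > 0.5 is an arbitrary choice, whose inverse is x[n] = (1/2)^n u[n]. Parametrizing C as z = e^{jθ} turns the integral into an average over θ:

    import numpy as np

    X = lambda z: z / (z - 0.5)                 # X(z) = 1/(1 - 0.5 z^{-1}), ROC |z| > 0.5
    theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    z = np.exp(1j * theta)                      # C: the unit circle, inside the ROC

    # x[n] = (1/2*pi*j) * integral of X(z) z^{n-1} dz; with dz = j z dtheta this
    # reduces to the mean of X(z) z^n over the circle
    for n in range(5):
        xn = np.mean(X(z) * z ** n)
        print(n, xn.real)                       # approximately 0.5 ** n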

Relationship between the Z-Transform and the Discrete Time Fourier Transform (DTFT)

Substituting z = e^{jw} in the z-Transform (valid when |z| = 1 lies in the ROC) yields the DTFT of x[n]. Similarly, on making the same substitution in the inverse z-Transform of X(z), provided the substitution is valid, that is, |z| = 1 lies in the ROC, we obtain the inverse DTFT.

Hence we conclude that the z-Transform is an extension of the Discrete Time Fourier Transform. It can be applied to a broader class of signals than the DTFT; that is, there are many discrete variable signals for which the DTFT does not converge but the z-Transform does, so we can study their properties using the z-Transform.
Example:
Consider, for instance, the growing sequence x[n] = 2^n u[n]. The z-Transform of this sequence is
X(z) = Σ_{n=0}^{∞} 2^n z^{−n} = 1/(1 − 2z^{−1}),   ROC: |z| > 2.
Also, we observe that the DTFT of the sequence does not exist, since the summation Σ_{n=0}^{∞} |2^n e^{−jwn}| = Σ_{n=0}^{∞} 2^n diverges. This example confirms that in some cases the z-Transform may exist but the DTFT may not.

Conclusion:
In this lecture you have learnt:
If the Laplace Transform of x(t) is X(s), then the Inverse Laplace Transform of X(s) is given by x(t) = (1/2πj) ∫_C X(s) e^{st} ds, where C is any vertical line in the s-plane parallel to the imaginary axis and lying in the ROC.
Fourier Transform of x(t) = Laplace Transform of x(t) at s = jω; i.e. if the imaginary axis lies in the Region of Convergence of X(s) and the Laplace Transform is evaluated along it, then the result is the Fourier Transform of x(t).
Congratulations, you have finished Lecture 33.

Module 4 : Laplace and Z Transform


Lecture 34 : Rational System Functions
Objectives:
Scope of this Lecture:
In the previous lectures we have systematically studied Laplace and Z- Transforms. We started with their respective definitions,
similarities in properties and the respective Inverse LT and ZT. Also we studied an important concept of ROC for LT as well as ZT. In this
lecture we will use the ROC concepts to understand poles, zeros and rational systems.
Let us define Rational system and Rational system functions.
We shall define poles and zeros of a rational system function.
We shall see what happens when we differentiate Laplace and z-transform and application of the above properties.

Introduction
In theory, one can find the Inverse Laplace and the Inverse z-Transform using the integral formulae given previously, but this procedure generally involves integration of complex functions, which may become very difficult. So we normally find the inverse transform using our 'experience' or observation; that is, we try to split the given function (whose inverse transform we have to calculate) into functions whose Inverse Transforms we know beforehand, just as we do while integrating a function of real variables.
We will proceed in a step-by-step manner towards our goal of finding the Inverse Laplace and z-Transform of a given function. We shall focus in depth only on the class of systems which have a rational system function. Let us first define what a rational system function is.

Rational System Function and Rational System


Recall that the system function of a continuous variable (respectively discrete variable) system is the Laplace Transform (respectively z-Transform) of the impulse response of the system.

A rational system function is a system function which is a rational function, i.e. a rational system function corresponding to a system can
be expressed as a ratio of two polynomials in s ( respectively z ).
A continuous variable (respectively discrete variable) LSI system with rational system function H(s) (respectively H(z) ) is called a
RATIONAL SYSTEM.
Rational systems are the most studied and used. Also, these are the best known realizable systems (whether in continuous or discrete
variable), that is, various techniques have been developed to implement rational systems in practical situations, but this is not the case
with other systems, i.e. those which don't have a rational system function. This is the main reason why we focus on systems having
rational system functions only.
Now we proceed to define the terms 'poles' and 'zeroes' of a rational system function.

Poles and Zeros of a Rational System Function


If we think of H(s) as a ratio of two polynomials, H(s) = N(s)/D(s),
then the solutions of N(s) = 0 are called the zeroes of H(s), and the solutions of D(s) = 0 are called the poles of H(s).

Example:
Consider H(s) = 1/(s − 2). Here, s = 2 is called the pole of H(s).

(If we plot |H(s)| 3-dimensionally, that is, if we consider the plane formed by the x-axis and the y-axis to be the s-plane and plot |H(s)| along the z-axis, then the plot of |H(s)| can be thought of as a tent on the s-plane, held up by an infinite pole at s = 2, since |H(s)| tends to infinity at s = 2; hence the term 'pole'.)

Quite obviously, the solutions of N(s) = 0 make H(s) equal to zero; hence the term 'zero'.
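Poles and zeros can be extracted mechanically from the polynomial coefficients, as in this sketch assuming Python with scipy (the transfer function H(s) = (s − 1)/((s + 1)(s − 2)) is an arbitrary example):

    import numpy as np
    from scipy.signal import tf2zpk

    # H(s) = (s - 1) / (s^2 - s - 2) = (s - 1) / ((s + 1)(s - 2))
    zeros, poles, gain = tf2zpk([1.0, -1.0], [1.0, -1.0, -2.0])
    print(zeros)  # [1.]
    print(poles)  # [ 2. -1.] (ordering may vary)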
Now we show one of the important properties of the Transforms, namely differentiation of Laplace and z - Transform with respect to s
and z respectively.This property comes in very useful while calculating the inverse Laplace and inverse z - Transform of a given function.

Differentiation of Laplace Transform


Taking the continuous variable case first: as we know, the Laplace Transform of x(t) is given by
X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt.
On differentiating both sides with respect to s and multiplying both sides by −1, we get
−dX(s)/ds = ∫_{−∞}^{+∞} t x(t) e^{−st} dt,
provided it exists. This can be thought of as taking the Laplace Transform of t x(t).
Hence we observe that, if x(t) ↔ X(s), then t x(t) ↔ −dX(s)/ds.
Similarly, on repeated differentiation, t^k x(t) ↔ (−1)^k d^k X(s)/ds^k.

Differentiation of the z-Transform
Now for the discrete variable case: the z-Transform of x[n] is given by
X(z) = Σ_{n=−∞}^{+∞} x[n] z^{−n}.
On differentiating both sides with respect to z and then multiplying both sides by −z, we get
−z dX(z)/dz = Σ_{n=−∞}^{+∞} n x[n] z^{−n},
provided it exists. This can be thought of as taking the z-Transform of n x[n].
Hence we observe that, if x[n] ↔ X(z), then n x[n] ↔ −z dX(z)/dz.
Similarly we can proceed further, by differentiating again and again.
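A symbolic check of this property, as a sketch assuming sympy (a = 1/2 is an arbitrary choice): for x[n] = a^n u[n] we have X(z) = z/(z − a), and −z dX/dz should be the transform of n a^n u[n], namely a z/(z − a)^2.

    import sympy as sp

    z = sp.symbols('z')
    a = sp.Rational(1, 2)

    X = z / (z - a)                          # X(z) for x[n] = a^n u[n], ROC |z| > a
    print(sp.simplify(-z * sp.diff(X, z)))   # a*z/(z - a)**2, the transform of n a^n u[n]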


Example:
Consider again a right-sided/left-sided pair such as e^t u(t) and −e^t u(−t). In the first case x(t) is a right-sided signal, and the Region of Convergence lies to the right of a vertical line in the s-plane; in the second case, x(t) is a left-sided signal and the Region of Convergence lies to the left of a vertical line in the s-plane.
As a general rule, we can say that for a right sided continuous variable signal, the ROC will be on the right side of a vertical line in the s
plane and similarly on the left side of a vertical line in the s plane for a left sided continuous variable signal.

In the first case, x[n] is a right sided signal and the Region of Convergence is the region in the z plane outside a circle and extending
upto infinity. In the second case, x[n] is a left sided signal and the Region of Convergence is the region lying inside a circle and including
z = 0.
As a general rule, we can say that for a right sided discrete variable signal, the ROC will be to the exterior of some circle in the z plane
and for a left sided discrete variable signal, it will be to the interior of some circle in the z plane.
The above example will be very useful while finding the inverse transform ( Laplace or z ) of a given rational function.

Z-Transform : Simple Pole case with no Zeros

Example:
Consider X(z) = 1/(1 − a z^{−1}), ROC |z| > |a|, whose inverse is x[n] = a^n u[n].
Using the property derived above by differentiation of the z-transform, we get
−z dX(z)/dz = a z^{−1} / (1 − a z^{−1})^2 ↔ n a^n u[n].
Hence we obtain the inverse transform of a double pole from that of a simple pole.
Now we proceed to solving inverse Laplace and z-transforms with multiple poles, after having looked at the simple pole cases of the Laplace and z-transform. We shall make use of the differentiation properties of the Laplace and z-transform extensively for solving multiple pole cases. We start by deriving the following relations.

Inverse Laplace transform : multiple poles, no zeros

Problem:
Derive and prove using induction:
L^{−1}{ 1/(s−a)^M } = (t^{M−1}/(M−1)!) e^{at} u(t),   ROC: Re(s) > Re(a).
Solution:
Proof (by Mathematical Induction):

Part i) Consider M = 1: L{e^{at} u(t)} = 1/(s−a), which is the known simple-pole pair.

Part ii) Assume that the above result holds true for M = K, i.e. (t^{K−1}/(K−1)!) e^{at} u(t) ↔ 1/(s−a)^K.
To prove that it holds for M = K+1, we use the property t x(t) ↔ −dX(s)/ds:
−(d/ds) 1/(s−a)^K = K/(s−a)^{K+1} ↔ t · (t^{K−1}/(K−1)!) e^{at} u(t) = (t^K/(K−1)!) e^{at} u(t).
Dividing both sides by K gives 1/(s−a)^{K+1} ↔ (t^K/K!) e^{at} u(t). Thus, the result holds true for M = K+1.

Hence, by the principle of Mathematical Induction, the above result holds true for all M.
We can now use the above relation for finding inverse Laplace transforms of various rational functions.
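The formula can be spot-checked symbolically, as a sketch assuming sympy (M = 3 and the pole location a = 2 are arbitrary choices):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s')

    M = 3
    x = sp.inverse_laplace_transform(1 / (s - 2) ** M, s, t)
    # expect t^(M-1) e^{2t} / (M-1)!  for t > 0
    print(sp.simplify(x - t ** (M - 1) * sp.exp(2 * t) / sp.factorial(M - 1)))   # 0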

Inverse z-transform : multiple poles, no zeroes

Problem:
Derive and prove using induction:
Z^{−1}{ 1/(1 − a z^{−1})^M } = C(n+M−1, M−1) a^n u[n],   ROC: |z| > |a|,
where C(·,·) denotes the binomial coefficient.
Solution:
Proof (by Mathematical Induction):

Part i) Consider M = 1: a^n u[n] ↔ 1/(1 − a z^{−1}), the known simple-pole pair.

Part ii) Assume that the above result holds true for M = k.
To prove that it holds for M = k+1, we use the property n x[n] ↔ −z dX(z)/dz. Now,
−z (d/dz) 1/(1 − a z^{−1})^k = k a z^{−1} / (1 − a z^{−1})^{k+1}.
Taking the z-inverse on both sides, and using the above stated property, we get
n C(n+k−1, k−1) a^n u[n] ↔ k a z^{−1} / (1 − a z^{−1})^{k+1}.
Substituting n → n+1 (which is valid since at n = −1, n+1 = 0 and the term vanishes) and simplifying the binomial coefficients, we get
C(n+k, k) a^n u[n] ↔ 1/(1 − a z^{−1})^{k+1}.

Thus, the result holds true for M = k+1. Hence, by the principle of Mathematical Induction, the above result holds true for all M.
We can now use the above relation for finding inverse z transforms of various rational functions.
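The discrete counterpart can be spot-checked by expanding X(z) as a power series in w = z^{−1} and comparing coefficients with the binomial formula, as a sketch assuming sympy (M = 2 and a = 1/3 are arbitrary choices):

    import sympy as sp

    w = sp.symbols('w')          # w stands for z^{-1}
    a = sp.Rational(1, 3)
    M = 2

    X = 1 / (1 - a * w) ** M
    ser = sp.series(X, w, 0, 6).removeO()
    coeffs = [ser.coeff(w, n) for n in range(6)]                      # x[0], x[1], ...
    expected = [sp.binomial(n + M - 1, M - 1) * a ** n for n in range(6)]
    print(coeffs == expected)    # True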

Conclusion:
In this lecture you have learnt:
A rational system function is a system function which can be expressed as a ratio of two polynomials in s ( respectively z )
A continuous variable (respectively discrete variable) LSI system with rational system function H(s) (respectively H(z) ) is called a
RATIONAL SYSTEM.

If we think of H(s) as a ratio of polynomials, H(s) = N(s)/D(s), the solutions of N(s) = 0 are called the zeroes of H(s), and the solutions of D(s) = 0 are called the poles of H(s).

Congratulations, you have finished Lecture 34.

Module 4 : Laplace and Z Transform


Lecture 35 : Inverse Laplace and Z Transform of Rational Functions
Objectives
Scope of this Lecture:
In the previous lecture we studied the concepts of poles, zeroes and rational systems. In this lecture we will continue in the same rhythm and dig into deeper concepts.
We shall look at the Inverse Laplace and Z-transform of rational functions.
We shall solve numericals for a better understanding.
After looking at inverse Laplace and z-transforms for the multiple pole case, we now proceed in a step-by-step manner towards finding the inverse Laplace and z-Transform of a given function. We will focus only on rational system functions, as in earlier cases.
Inverse Laplace transform : Rational functions
Consider an arbitrary rational function in s as the Laplace Transform, X(s) = N(s)/D(s).

Example:
Let us consider an LTI system with system function
H(s) = (s − 1) / ((s + 1)(s − 2)),
i.e. with poles at s = −1 and s = 2 and a zero at s = 1.
As the ROC has not been specified, there are several different ROCs and, correspondingly, several different impulse responses. The possible ROCs for the system with poles at s = −1 and s = 2 and a zero at s = 1 are:

Fig. a: ROC Re(s) > 2 (causal, unstable system).
Fig. b: ROC −1 < Re(s) < 2 (noncausal, stable system).
Fig. c: ROC Re(s) < −1 (noncausal, unstable system).

Conclusions:
Properties of certain classes of systems can be explained simply in terms of the locations of the poles. In particular, consider a causal LTI system with a rational system function H(s). Since the system is causal, the ROC is to the right of the rightmost pole. Consequently, for this system to be stable (i.e. for the ROC to include the jω-axis), the rightmost pole of H(s) must be to the left of the jω-axis, i.e. all poles must satisfy Re(s) < 0.


Inverse Z - transform:
Consider an arbitrary rational z-transform X(z) = N(z)/D(z).

Example:
Consider the z-transform
X(z) = (3 − (5/6) z^{−1}) / ((1 − (1/4) z^{−1})(1 − (1/3) z^{−1})).
There are two poles, one at z = 1/4 and one at z = 1/3. The partial fraction expansion, expressed in polynomials in 1/z, is
X(z) = 1/(1 − (1/4) z^{−1}) + 2/(1 − (1/3) z^{−1}).
Thus, x[n] is the sum of two terms, one with z-transform 1/(1 − (1/4)z^{−1}) and the other with z-transform 2/(1 − (1/3)z^{−1}).
As the ROC is not mentioned, we get different inverses for the different possible ROCs. We do not discuss causality and stability, as this may not be a system function. One possible inverse is worked out below; the other two are left as an exercise to the reader.
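The partial fraction expansion can be reproduced numerically, as a sketch assuming scipy (the coefficient arrays below are just the numerator and denominator of the X(z) above, written in ascending powers of z^{−1}):

    import numpy as np
    from scipy.signal import residuez

    b = [3.0, -5.0 / 6.0]                  # 3 - (5/6) z^{-1}
    a = [1.0, -7.0 / 12.0, 1.0 / 12.0]     # (1 - z^{-1}/4)(1 - z^{-1}/3) expanded
    r, p, k = residuez(b, a)
    print(r)   # residues ~ [1., 2.]
    print(p)   # poles    ~ [0.25, 0.333...] (pairing/order may vary)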

Fig. d: Pole pattern when the ROC is right-sided, i.e. |z| > 1/3.
In this case we can identify the inverse by inspection:
x[n] = (1/4)^n u[n] + 2 (1/3)^n u[n].
Fig. e: Pole pattern when the ROC is two-sided, i.e. 1/4 < |z| < 1/3.
Fig. f: Pole pattern when the ROC is left-sided, i.e. |z| < 1/4.

Conclusion:
In this lecture you have learnt:
If the system is causal then the ROC extends from the rightmost pole to infinity.
A system is stable if the ROC includes the imaginary axis; therefore the rightmost pole of H(s) must be to the left of the imaginary axis.
A causal system with a rational function H(s) is stable if and only if all poles of H(s) lie in the left half of the s-plane; correspondingly, a causal discrete-time system is stable if and only if its ROC includes the unit circle in the z-plane.

Congratulations, you have finished Lecture 35.

Module 4 : Laplace and Z Transform


Lecture 36 : Analysis of LTI Systems with Rational System Functions
Objectives
Scope of this Lecture:
Previously we understood the meaning of causal, stable and unstable systems using the concept of the ROC. Here we will delve deeper into the previously established concepts by studying several theorems and other properties.
We shall derive necessary and sufficient conditions for causality and stability of both discrete and continuous rational systems.
We shall look at the plotting of poles and zeros.
We shall also prove some theorems.
By taking the inverse transform of a rational system function, one can arrive at linear constant-coefficient difference/differential equations.

Continuous Rational System


Causality
For a causal LTI system, the impulse response is zero for t < 0 (and thus it is right-sided):
h(t) = 0 for all t < 0.
Assuming the system has a system function H(s) with a non-null ROC, we have
H(s) = ∫_0^{+∞} h(t) e^{−st} dt,
and since the integrand contains e^{−Re(s)·t} with t > 0, increasing Re(s) can only improve convergence.
Thus, if the region of convergence is non-null, the half-plane Re(s) → +∞ must be included in the ROC for the system to be causal.

Proof: As the region of convergence is not null, there exists an s_0 for which ∫_0^{+∞} h(t) e^{−s_0 t} dt is convergent. For any s with Re(s) > Re(s_0) we have |h(t) e^{−st}| ≤ |h(t) e^{−s_0 t}| for t ≥ 0. Hence H(s) is convergent for all Re(s) > Re(s_0), i.e. the entire region to the right of Re(s_0) belongs to the ROC.

Necessary and sufficient condition for causality in a rational system:
The region of convergence must include Re(s) → +∞, i.e. it must be a right-half plane extending to infinity.

DISCRETE RATIONAL SYSTEM :
A causal discrete-time LSI system has an impulse response h[n] which is zero for n < 0, and thus it is right-sided: h[n] = 0 for all n < 0. For causality, the system function is
H(z) = Σ_{n=0}^{+∞} h[n] z^{−n}.
Assuming H(z) has a non-null ROC, the series contains only zero and negative powers of z for n ≥ 0, which converge better as |z| grows; thus |z| = ∞ must be included in the ROC.

Example:
For instance, h[n] = δ[n] + δ[n−1] gives H(z) = 1 + z^{−1}, which converges for all z except z = 0; the system is causal and |z| = ∞ lies in the ROC. But consider h[n] = δ[n+1], for which H(z) = z: this converges everywhere except at |z| = ∞, and the system is not causal.

STABILITY OF RATIONAL SYSTEMS:

A continuous LSI system is stable if and only if its impulse response is absolutely integrable, i.e.
∫_{−∞}^{+∞} |h(t)| dt < ∞.
Exploring the convergence of the Laplace transform of the impulse response of a stable LSI system, we find that for Re(s) = 0,
|H(s)| ≤ ∫_{−∞}^{+∞} |h(t)| |e^{−st}| dt = ∫_{−∞}^{+∞} |h(t)| dt < ∞   (by stability).
Thus H(s) converges on the imaginary axis (Re(s) = 0).

So Re(s) = 0, the imaginary axis, is contained in the ROC of the system function for all stable LSI systems.
We can also look at this from a different point of view. The impulse response being absolutely integrable implies that its Fourier transform converges; as H(s) evaluated on Re(s) = 0 is nothing but the Fourier transform, it too is bound to converge, so Re(s) = 0 is included in the ROC.

In general, Re(s) = 0 lying in the ROC is not a sufficient condition to imply stability. But for rational systems, Re(s) = 0 lying in the ROC does imply that the system is stable.
Now we will prove the above result.

Proof for sufficiency condition :For any system to be stable, poles can not lie in ROC.Thus, there should not be any poles on the (imaginary axis) Re(s)=0.
Suppose and are the poles of the system function H(s) where Re()<0 and Re( )>0.
Now consider, inverse transform of

, there are two choices

system is

As Re{s}=0 is contained in the ROC and

, the only possible option is

( to have a non-empty ROC).

Looking at inverse transform of

As Re{s} = 0 lies in ROC , we will have to take

to be the inverse.

Thus, in a rational system, with ROC of the system function including Re(s)=0, the poles to the left of imaginary axis contribute rightsided exponentially decaying term and poles to the right of the imaginary axis contribute left-sided exponentially decaying term.

contributes a right-sided decaying exponential

contributes a left handed decaying exponent.

Poles to the right of imaginary axis contribute -P (t)et u(-t), where P (t) is a polynomial of degree k-1
k = order of pole at

in H(s)

Similarly poles to the left of imaginary axis contribute P (t)et u(t)

The absolute integral

Thus the absolute integral

sum of the absolute integrals of these terms (finite number because the system function is

rational)
Therefore, the system is stable.

Later, we shall prove the theorem ,that irrespective of the polynomial p(t),
order to justify the convergence of each absolute integral.

converges if and only if

converges, in

Rational continuous system functions

Let the system function be H(s) = N(s)/D(s).
We can represent the system function graphically by showing all poles (zeros of D(s)) and zeros (zeros of N(s)), together with its ROC, in the s-plane. Such a pole-zero plot, along with the ROC, is a complete pictorial representation of the system function: from the graph we can write the system function back down, up to a constant gain factor.
For stability, Re(s) = 0 should lie in the ROC.

Representation of poles and zeros
Consider the representation of a system function having poles on both sides of the imaginary axis, possibly of different orders (say poles of order 1, 2 and 3). H(s) can then be represented as a product of factors of the form 1/(s − p_i)^{k_i}, and on expansion of H(s) in terms of partial fractions we get a sum of terms of the form A_{ij}/(s − p_i)^j.
Recall that in a rational system with the ROC of the system function including Re(s) = 0, the poles to the left of the imaginary axis contribute right-sided exponentially decaying terms and the poles to the right of the imaginary axis contribute left-sided exponentially decaying terms, possibly multiplied by polynomials in t when the pole order exceeds 1.

Theorem

Irrespective of the polynomial p(t), ∫_0^{+∞} |p(t) e^{αt}| dt converges if and only if ∫_0^{+∞} |e^{αt}| dt converges (i.e. if and only if Re(α) < 0).

Proof by induction:
Mathematical induction on the degree of the polynomial.

Base case: For a polynomial of degree zero (a constant), the statement is immediate.

Induction step: We assume ∫_0^{+∞} q(t) e^{αt} dt converges for any polynomial q of degree (k−1). We proceed to prove that ∫_0^{+∞} p(t) e^{αt} dt converges for p a polynomial of degree k. Integrating by parts,
∫_0^{+∞} p(t) e^{αt} dt = [p(t) e^{αt}/α]_0^{+∞} − (1/α) ∫_0^{+∞} p'(t) e^{αt} dt,
where p'(t) is a polynomial of degree (k−1). The boundary term is finite since Re(α) < 0, and by the assumption the remaining integral converges; thereby the degree-k integral also converges.

Hence, proved.

Theorem 2
For a discrete rational system, stability implies and is implied by the unit circle in the z-plane belonging to the ROC of the system function.
Proof:
(a) Stability implies |z| = 1 lies in the ROC: if the discrete rational system is stable then Σ_n |h[n]| < ∞, so the z-transform of the impulse response (the system function) converges for |z| = 1.
(b) |z| = 1 (the unit circle) belonging to the ROC implies stability: a pole cannot lie on the unit circle |z| = 1 in that case, and, as shown below for rational systems, the remaining poles contribute only absolutely summable terms.

Rational discrete system functions

Consider the function 1/(1 − a z^{−1}) when a is a pole of order 1 with |a| < 1 (this is the assumption). As |z| = 1 is contained in the ROC and |a| < 1, the only possible option for the inverse is a^n u[n].
Now consider the inverse transform of 1/(1 − b z^{−1}), where b is a pole of order 1 with |b| > 1. Since |b| > 1 and |z| = 1 is contained in the ROC of the function, the inverse must be −(b^n u[−n−1]).
Therefore:
The contribution of a pole inside the unit circle (|a| < 1) is a right-sided exponentially decaying term, a^n u[n], possibly multiplied by a polynomial in n if the order of the pole is greater than 1.
The contribution of a pole outside the unit circle (|b| > 1) is a left-sided exponentially decaying term, −(b^n u[−n−1]), possibly multiplied by a polynomial in n if the order of the pole is greater than 1.

Proof for stability of rational discrete systems

Similar to the proof for stability of rational continuous systems, the absolute sum of the impulse response must be convergent. Assuming two poles, a (with |a| < 1) and b (with |b| > 1), possibly of order greater than 1, the absolute sum is bounded by
Σ_n |h[n]| ≤ Σ_{n ≥ 0} |p_1(n) a^n| + Σ_{n ≤ −1} |p_2(n) b^n|,
where p_1(n) and p_2(n) are polynomials in n. Increasing the number of poles would not make any difference to the proof.
Both one-sided terms are absolutely summable; a finite number of such terms is absolutely summable, and hence the impulse response is absolutely summable.
Therefore, the system is stable.
The absolute summability of one-sided terms of the form p(n) a^n u[n] (where p(n) is a polynomial) depends only on |a|, as the following theorem shows.

Theorem 4

We prove that the summability of p(n) a^n u[n] depends only on the summability of a^n u[n], and not on the polynomial p(n).

Proof by induction:
Induction on the degree of the polynomial.
Base case (k = 1): Σ_{n ≥ 0} |n a^n| converges whenever Σ_{n ≥ 0} |a^n| converges (i.e. for |a| < 1), for example by the ratio test.

Induction step: Assume Σ_{n ≥ 0} q(n) a^n is summable for any polynomial q of degree (k−1); we proceed to prove it for a polynomial p of degree k. The difference p(n) − p(n−1) is a polynomial of degree (k−1), and summing by parts reduces Σ_{n ≥ 0} p(n) a^n to sums involving polynomials of degree (k−1) multiplied by a^n. By the assumption these are summable, and thereby Σ_{n ≥ 0} p(n) a^n is also summable.

THEOREM :
A necessary and sufficient condition for a continuous rational system to be causal and stable is that all the poles lie in the left half plane, i.e. Re(s) < 0.
THEOREM :
A necessary and sufficient condition for a discrete rational system to be causal and stable is that all the poles lie inside the unit circle, i.e. |z| < 1.
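Both conditions are easy to test programmatically once the poles are known, as a sketch assuming scipy and numpy (the two transfer functions below are arbitrary examples):

    import numpy as np
    from scipy.signal import tf2zpk

    # continuous: H(s) = 1 / ((s + 1)(s + 3)); causal and stable iff all Re(poles) < 0
    _, p_s, _ = tf2zpk([1.0], [1.0, 4.0, 3.0])
    print(np.all(np.real(p_s) < 0))    # True: poles at -1 and -3

    # discrete: H(z) = 1 / (z^2 - 1.5 z + 0.56); causal and stable iff all |poles| < 1
    _, p_z, _ = tf2zpk([1.0], [1.0, -1.5, 0.56])
    print(np.all(np.abs(p_z) < 1))     # True: poles at 0.7 and 0.8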

System definition of a causal rational system and linear constant-coefficient difference/differential equations
(a) Continuous system: The system function can be written as
H(s) = Y(s)/X(s) = (Σ_{k=0}^{M} b_k s^k) / (Σ_{k=0}^{N} a_k s^k).
Taking the inverse Laplace transform (using the differentiation property), we have
Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k.
(b) Discrete system:
H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (Σ_{k=0}^{N} a_k z^{−k}).
It is always possible to write the system function this way for a causal rational discrete system.
Taking the inverse z-transform of the above equation (using the time-shift property), we have
Σ_{k=0}^{N} a_k y[n−k] = Σ_{k=0}^{M} b_k x[n−k].
Conclusion :
In this lecture you have learnt:
Necessary and sufficient condition for causality in a continuous rational system: the region of convergence must include Re(s) → +∞ (a right-half plane).
Necessary and sufficient condition for causality in a discrete rational system: the region of convergence must include |z| = ∞.
In general, Re(s) = 0 lying in the ROC is not a sufficient condition to imply stability. But for rational systems, Re(s) = 0 lying in the ROC implies that the system is stable.
In a rational system with the ROC of the system function including Re(s) = 0, the poles to the left of the imaginary axis contribute right-sided exponentially decaying terms and the poles to the right of the imaginary axis contribute left-sided exponentially decaying terms.
For a discrete rational system, stability implies and is implied by the unit circle in the z-plane belonging to the ROC of the system function.
Congratulations, you have finished Lecture 36.
