
UNIT V

Digital Filters
Introduction:
• By combining ever-increasing computer processing speed with higher sample-rate converters, Digital Signal Processors (DSPs) continue to receive a great deal of attention in the technical literature and in new product design.
• The following section on digital filter design reflects the importance of understanding and utilizing this technology to provide precision stand-alone digital or integrated analog/digital product solutions.
• By utilizing DSPs capable of sequencing and reproducing hundreds to thousands of discrete elements, design models can simulate large hardware structures at relatively low cost.
• DSP techniques can perform functions such as fast Fourier transforms (FFT), delay equalization, programmable gain, modulation, encoding/decoding, and filtering.

Programs can be written where:

• Filter weighting functions (coefficients) can be calculated on the fly, reducing memory requirements.
• Algorithms can be dynamically modified as a function of the signal input.
• DSP represents a subset of signal-processing activities that utilize A/D converters to turn analog signals into streams of digital data.
• A stand-alone digital filter requires an A/D converter (with its associated anti-alias filter), a DSP chip, and a PROM or software driver.
• An extensive sequence of multiplications and additions can then be performed on the digital data.
• In some applications, the designer may also want to place a D/A converter, accompanied by a reconstruction filter, on the output of the DSP to create an analog equivalent signal.
• A digital filter solution offering a 90 dB attenuation floor and a 20 kHz bandwidth can consist of up to 10 circuits, occupy several square inches of circuit-board space, and cost hundreds of dollars.
• Digital filters process digitized or sampled signals. A digital filter computes a quantized time-domain representation of the convolution of the sampled input time function with a representation of the weighting function of the filter.
• They are realized by an extended sequence of multiplications and additions carried out at a uniformly spaced sample interval. Simply put, the digitized input signal is mathematically influenced by the DSP program.
• These signals are passed through structures that shift the clocked data into summers (adders), delay blocks, and multipliers.
• These structures change the mathematical values in a predetermined way; the resulting data represents the filtered or transformed signal.
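The multiply-accumulate convolution just described can be sketched in a few lines of Python; the filter weights and input samples below are made-up illustrative values, not a designed filter:

```python
# Sketch of digital filtering as discrete convolution:
# y(n) = sum_k h(k) * x(n - k), computed once per sample interval.

def digital_filter(x, h):
    """Convolve the sampled input x with the filter weighting function h."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                acc += h[k] * x[n - k]   # multiply-accumulate step
        y.append(acc)
    return y

# Example: 3-tap moving-average weights smooth a step input.
h = [1/3, 1/3, 1/3]
x = [0, 0, 1, 1, 1, 1]
print(digital_filter(x, h))
```

In a DSP the same loop runs in real time, with the delay line holding the most recent input samples.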
• It is important to note that distortion and noise can be introduced into digital filters by the conversion of analog signals into digital data, by the digital filtering process itself, and by the conversion of the processed data back into analog form.
• When fixed-point processing is used, additional noise and distortion may be added during the filtering process because the filter consists of a large number of multiplications and additions whose rounding errors create truncation noise.
• Increasing the bit resolution beyond 16 bits will reduce this filter noise. For most applications, as long as the A/D and D/A converters have high enough bit resolution, the distortion introduced by the conversions is less significant than that of the filtering process itself.
• Although DSPs rarely serve exclusively as anti-alias filters (in fact, they require anti-alias filters), they can offer features that have no practical counterpart in the analog world.
Some examples are
1. A linear phase filter that provides steep roll-off (near brick wall)
characteristics.
2. A programmable digital filter that allows the signal conditioning to be
changed on the fly via software, (frequency response or filter shape
can be altered by loading stored or calculated coefficients into a DSP
program).
• Instead of using a commercial DSP with software algorithms, a digital hardware filter can also be constructed from logic elements such as registers and gates, or from an integrated hardware block such as an FPGA (Field Programmable Gate Array).
• Digital hardware filters are desirable for high-bandwidth applications; the trade-offs are limited design flexibility and higher cost.

Block diagram representation of a typical digital filter configuration.

FIR and IIR Digital Filter Design:


• Filters designed by considering all of the infinitely many samples of the impulse response are called IIR filters.
• The impulse response is obtained by taking the inverse Fourier transform of the ideal frequency response.
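As an illustration, for an ideal low-pass response the inverse Fourier transform gives the familiar sinc-shaped impulse response, which extends over all n; a small Python sketch (the cutoff value is an arbitrary example):

```python
import math

def ideal_lowpass_h(n, wc):
    """Impulse response of an ideal low-pass filter with cutoff wc
    (radians/sample): h(n) = sin(wc*n)/(pi*n), with h(0) = wc/pi."""
    if n == 0:
        return wc / math.pi
    return math.sin(wc * n) / (math.pi * n)

# The response is nonzero for all n (infinite and non-causal), which is
# why a realizable filter must truncate (window) it or approximate it
# recursively.
wc = math.pi / 4
h = [ideal_lowpass_h(n, wc) for n in range(-5, 6)]
```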

Infinite Impulse Response Implementations


• As the name implies, floating-point DSPs can perform floating-point math, which greatly reduces truncation-noise problems and allows more complicated filter structures, such as the inclusion of both poles and zeros.
• This permits the approximation of many waveforms or transfer functions that can be expressed as an infinite recursive series.
• These implementations are referred to as Infinite Impulse Response (IIR) filters. The functions are infinitely recursive because they use previously calculated values in future calculations, akin to feedback in hardware systems.
• The equivalent of classical linear-system transfer functions can be implemented by using IIR implementation techniques.
• A common procedure is to start with a classic analog filter transfer function, such as a Butterworth, and apply the required transform to convert the filter equations from the complex S domain to the complex Z domain.
• The resulting coefficients yield a Z-domain transfer function in a feedback configuration with a number "n" of delay nodes equal to the order of the S-domain transfer function.
• These implementations are referred to as IIR filters because when a short impulse is put through the filter, the output value does not converge quickly to zero but theoretically continues decreasing over an infinite number of samples.
• Floating-point DSPs can produce near-equivalent analog filter transforms such as Butterworth, Chebyshev, and elliptic because they use essentially the same mathematical structure as their analog counterparts.
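Assuming SciPy is available, that S-to-Z procedure can be sketched as follows; the filter order, cutoff, and sample rate are arbitrary example values:

```python
from scipy.signal import butter, bilinear

# 2nd-order analog Butterworth prototype, 1 rad/s cutoff (S domain).
b_s, a_s = butter(2, 1.0, btype='low', analog=True)

# Bilinear transform maps the S-domain coefficients to the Z domain,
# here at a sample rate of fs = 10 Hz.
b_z, a_z = bilinear(b_s, a_s, fs=10.0)

# The order is preserved: a 2nd-order S-domain transfer function yields
# a Z-domain filter with n = 2 delay nodes.
print(len(a_z) - 1)   # -> 2
```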

Figure 4 illustrates a bi-quad digital filter structure that computes the response of a second-order IIR transfer function. It has two delay nodes, and the computation coefficients are A1k, A2k, B1k, and B2k.
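A minimal Python sketch of such a bi-quad section, using a direct form II structure with two shared delay nodes; the coefficient values in the example are placeholders, not a designed filter:

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Bi-quad (second-order IIR) section: two delay nodes w1, w2
    shared between the recursive (a) and non-recursive (b) halves."""
    w1 = w2 = 0.0
    y = []
    for xn in x:
        w0 = xn - a1 * w1 - a2 * w2       # feedback (pole) half
        yn = b0 * w0 + b1 * w1 + b2 * w2  # feedforward (zero) half
        w2, w1 = w1, w0                   # shift the delay nodes
        y.append(yn)
    return y

# Impulse response: with feedback present, the output decays but never
# reaches exactly zero in finite time -- the "infinite" in IIR.
impulse = [1.0] + [0.0] * 7
print(biquad(impulse, b0=1.0, b1=0.0, b2=0.0, a1=-0.5, a2=0.0))
```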
Floating-point processors do have some advantages over fixed-point processors:
1. Specific DSP applications such as IIR filters are easier to implement with floating-point processors.
2. Floating-point application code can have lower development costs and a shorter time to market than corresponding programs in a fixed-point format.
3. Floating-point representation of data carries a smaller amount of probable error and noise.
4. Finally, these powerful floating-point devices can emulate fixed-point processors, but at a higher hardware cost.

Finite Impulse Response Implementations:


• Fixed-point DSP processors account for the majority of DSP applications because of their smaller size and lower cost.
• Fixed-point math requires programmers to pay significant attention to the number of coefficients utilized in each algorithm when multiplying and accumulating digital data, to prevent distortion caused by register overflow and a decrease in the signal-to-noise ratio caused by truncation noise.
• The structure of these algorithms uses a repetitive delay-and-add format that can be represented as the "direct form-I structure".
• FIR (Finite Impulse Response) filters are implemented using a finite number "n" of delay taps on a delay line and "n" computation coefficients to compute the algorithm (filter) function. This structure is nonrecursive, a repetitive delay-and-add format, and is most often used to produce FIR filters. The output depends only on the present and past samples of the input data.
• FIR filters can create transfer functions that have no equivalent in linear circuit technology. They can offer shape-factor accuracy and stability equivalent to very high-order linear active filters that cannot be achieved in the analog domain.
• Unlike IIR (Infinite Impulse Response) filters, FIR filters are formed with only the equivalent of zeros in the linear domain. This means that the taps depress, or push down, the amplitude of the transfer function.
• The amount of depression for each tap depends upon the value of its multiplier coefficient, and the total number of taps determines the "steepness" of the slope.
• The number of taps (delays) and the values of the computation coefficients (h0, h1, ..., hn) are selected to "weight" the data being shifted down the delay line so as to create the desired amplitude response of the filter.
• In this configuration there are no feedback paths to cause instability. The calculation coefficients are not constrained to particular values and can be used to implement filter functions that do not have a linear-system equivalent.

Two very different design techniques are commonly used to develop digital FIR
filters:

The Window Technique and The Equiripple Technique.


Windows:
• The simplest technique is known as "windowed" filter design. It is based on designing a filter using well-known frequency-domain transition functions called "windows".
• The use of windows often involves a choice of the lesser of two evils. Some windows, such as the rectangular window, yield fast roll-off in the frequency domain but have limited attenuation in the stop-band along with poor group-delay characteristics.
• Other windows, like the Blackman, have better stop-band attenuation and group delay but a wide transition band (the bandwidth between the corner frequency and the frequency at which the attenuation floor is reached).
• Windowed filters are easy to use, are scalable (they give the same results no matter what the corner frequency is), and can be computed on the fly by the DSP.
• This latter point means that a tunable filter can be designed in which the only limitation on corner-frequency resolution is the number of bits in the tuning word.
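A sketch of the window technique in Python: the ideal sinc impulse response is multiplied by a Blackman window. The tap count and cutoff are illustrative values; recomputing the coefficients for a new corner frequency is cheap enough to do on the fly:

```python
import math

def windowed_lowpass(num_taps, wc):
    """FIR low-pass coefficients: ideal sinc response times a Blackman
    window. wc is the cutoff in radians/sample; num_taps should be odd."""
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        m = n - M / 2                      # center the sinc at the middle tap
        ideal = wc / math.pi if m == 0 else math.sin(wc * m) / (math.pi * m)
        w = (0.42 - 0.5 * math.cos(2 * math.pi * n / M)
                  + 0.08 * math.cos(4 * math.pi * n / M))  # Blackman window
        h.append(ideal * w)
    return h

h = windowed_lowpass(21, math.pi / 4)
# The coefficients are symmetric about the center tap, so the filter
# has exactly linear phase.
```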

Equiripple:
An equiripple, or Remez exchange (Parks-McClellan), design technique provides an alternative to windowing by allowing the designer to achieve the desired frequency response with the fewest number of coefficients. This is achieved by an iterative process that compares a selected coefficient set against the specified frequency response until the solution requiring the fewest coefficients is found. Though the efficiency of this technique is obviously very desirable, there are some concerns.
• For equiripple algorithms, some values may converge to a false result or not converge at all. Therefore, all coefficient sets must be pre-tested off-line for every corner-frequency value.
• Application-specific solutions (programs) that require signal tracking or dynamically changing performance parameters are typically better suited to windowing, since convergence is not a concern with windowing.
• Equiripple designs are based on optimization theory and require an enormous amount of computational effort. With the availability of today's desktop computers the computational intensity is not a problem, but combined with the possibility of convergence failure, equiripple filters typically cannot be designed on the fly within the DSP.
Many people will use a window such as the "Kaiser" window to produce good, scalable FIR filters fairly quickly without the worry of non-convergence. However, if one is interested in producing the highest-performance digital filter for a given hardware configuration, the iterative Remez exchange algorithm is worth the effort.
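Assuming SciPy is available, the Parks-McClellan (Remez exchange) algorithm is exposed as scipy.signal.remez; a small sketch with arbitrary example band edges:

```python
from scipy.signal import remez

# Low-pass design: passband 0-0.1, stopband 0.2-0.5 (frequencies
# normalized to a sample rate of 1), desired gains 1 and 0.
# remez iterates until the ripple is equalized across both bands;
# poorly chosen band edges or tap counts can fail to converge.
taps = remez(numtaps=31, bands=[0.0, 0.1, 0.2, 0.5], desired=[1.0, 0.0])
```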

Important features of IIR filters:


1. Physically realizable IIR filters do not have linear phase.
2. The IIR filter specifications include the desired characteristics for the magnitude response only.
STRUCTURES FOR IIR FILTERS
The various forms of IIR structures are:
1. Direct-form structures
2. Signal flow graphs and transposed structures
3. Cascade-form structures
4. Parallel-form structures

Direct-Form Structures:
The rational system function that characterizes an IIR system can be viewed as two systems in cascade, that is,

H(z) = H1(z) H2(z)

where
H1(z) = b0 + b1 z^-1 + ... + bM z^-M (the zeros of H(z))
H2(z) = 1 / (1 + a1 z^-1 + ... + aN z^-N) (the poles of H(z))

The direct form I realization is depicted below. This realization requires M + N + 1 multiplications, M + N additions, and M + N + 1 memory locations. If the all-pole filter H2(z) is placed before the all-zero filter H1(z), a more compact structure is obtained, as shown below. The difference equation for the all-pole filter is

w(n) = x(n) - a1 w(n - 1) - ... - aN w(n - N)

Since w(n) is the input to the all-zero system, its output is

y(n) = b0 w(n) + b1 w(n - 1) + ... + bM w(n - M)

Both equations involve delayed versions of the sequence {w(n)}. Consequently, only a single delay line, or a single set of memory locations, is required for storing the past values of {w(n)}. The resulting structure is called a direct form II realization and is depicted below.
This structure requires M + N + 1 multiplications, M + N additions, and max{M, N} memory locations. Since the direct form II realization minimizes the number of memory locations, it is said to be canonic.
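The two difference equations above can be sketched directly in code, sharing one delay line; the coefficients in the example are placeholders, not a designed filter:

```python
def direct_form_ii(x, b, a):
    """IIR filter, direct form II: w(n) = x(n) - sum a_k w(n-k), then
    y(n) = sum b_k w(n-k). A single delay line of length max(M, N)
    serves both the all-pole and all-zero halves."""
    order = max(len(a), len(b)) - 1        # a = [1, a1..aN], b = [b0..bM]
    w = [0.0] * (order + 1)                # w[k] holds w(n - k)
    y = []
    for xn in x:
        w[0] = xn - sum(a[k] * w[k] for k in range(1, len(a)))
        y.append(sum(b[k] * w[k] for k in range(len(b))))
        for k in range(order, 0, -1):      # shift the shared delay line
            w[k] = w[k - 1]
    return y

# First-order example: y(n) = 0.5*y(n-1) + x(n), impulse input.
print(direct_form_ii([1.0, 0.0, 0.0, 0.0], b=[1.0], a=[1.0, -0.5]))
```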

Signal Flow Graphs and Transposed Structures:


A signal flow graph provides an alternative, but equivalent, graphical representation to a block-diagram structure. The basic elements of a flow graph are branches and nodes: a signal flow graph is essentially a set of directed branches connected at nodes. A system block diagram can be converted to a signal flow graph as shown below.
• The signal flow graph consists of two summing nodes and three branching points, with the branch transmittances indicated on the branches.
• The input to the system originates at a source node, and the output signal is extracted at a sink node.
• One basic notion in the theory of linear signal flow graphs is the transformation of one flow graph into another without changing the basic input-output relationship.
• One useful technique is the transposition theorem, which states that if we reverse the directions of all branch transmittances and interchange the input and output in the flow graph, the system function remains unchanged.
• The resulting structure is called a transposed structure or a transposed form.

Cascade-Form Structures:

Let us consider a high-order IIR system with system function

H(z) = H1(z) H2(z) ... HK(z)

where each Hk(z) is a second-order section of the form

Hk(z) = (bk0 + bk1 z^-1 + bk2 z^-2) / (1 + ak1 z^-1 + ak2 z^-2)

ROUND-OFF EFFECTS IN DIGITAL FILTERS
• The presence of one or more quantizers in the realization of a digital filter
results in a nonlinear device with characteristics that may be significantly
different from the ideal linear filter.
• As a result of the finite-precision arithmetic operations performed in the
digital filter, some registers may overflow if the input signal level becomes
large.
• Overflow represents another form of undesirable nonlinear distortion on the
desired signal at the output of the filter.
• Consequently, special care must be exercised to scale the input signal
properly, either to prevent overflow completely or, at least, to minimize its
rate of occurrence.

Limit cycles:
• In recursive systems, when the input is zero or some nonzero constant value, the nonlinearities due to finite-precision arithmetic operations may cause periodic oscillations in the output.
• During these oscillations, the output y(n) either oscillates between a finite positive and a negative value or becomes constant for increasing n. Such oscillations are called limit cycles. They are due to round-off errors in multiplication and overflow in addition.
• In recursive systems, once the output enters a limit cycle, it continues to remain in the limit cycle even when the input is made zero.
• Hence these limit cycles are also called zero-input limit cycles. The output remains in the limit cycle until another input of sufficient magnitude is applied to drive the system out of it.
• Consider the difference equation of a first-order system with a single pole:

y(n) = a y(n - 1) + x(n)

The system has one product, a·y(n - 1). If the product is quantized to a finite word length, the response y(n) will deviate from its actual value. Let y'(n) be the response of the system when the product is quantized in each recursive realization:

y'(n) = Q[a y'(n - 1)] + x(n)

where Q[·] stands for the quantization operation.

The table shown below lists the response of the actual system for four different locations of the pole a, with input x(n) = β δ(n), where β = 15/16, which has the binary representation 0.1111.

• Ideally, the system output should decay toward zero exponentially. In the actual system, however, the response y'(n) reaches a steady-state periodic output sequence with a period that depends on the value of the pole.
• When the pole is positive, the oscillations occur with period Np = 1, so the output reaches a constant value of 1/16 for a = 1/2 and 1/8 for a = 3/4. On the other hand, when the pole is negative, the output sequence oscillates between positive and negative values, so the period is Np = 2.
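The dead-band behavior is easy to reproduce in a short simulation: quantizing the product a·y'(n-1) to 4 fractional bits (matching the 0.1111 representation above) leaves the output stuck at 1/16 for a = 1/2 instead of decaying to zero. A sketch:

```python
import math

def q4(v):
    """Round v to 4 fractional bits (sign-magnitude rounding)."""
    s = -1.0 if v < 0 else 1.0
    return s * math.floor(abs(v) * 16 + 0.5) / 16

a = 0.5
y_ideal = y_quant = 15 / 16        # impulse input x(0) = 15/16 (0.1111)
ideal, quant = [], []
for _ in range(12):
    y_ideal = a * y_ideal          # infinite-precision recursion
    y_quant = q4(a * y_quant)      # product quantized at each step
    ideal.append(y_ideal)
    quant.append(y_quant)

# The ideal output decays toward zero; the quantized output enters a
# zero-input limit cycle and sticks at the dead-band value 1/16.
print(quant[-1])   # -> 0.0625
```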
• These limit cycles occur as a result of quantization effects in the multiplications. When the input sequence x(n) to the filter becomes zero, the filter output, after a number of iterations, enters the limit cycle.
• The amplitudes of the output during a limit cycle are confined to a range of values called the dead band of the filter.
• In the fixed-point addition of two binary numbers, overflow occurs when the sum exceeds the finite word length of the register used to store the sum.
• Overflow in addition may lead to oscillation in the output, which is referred to as an overflow limit cycle.
• Overflow occurs when the sum exceeds the dynamic range of the number system.
• Overflow oscillations can be eliminated if saturation arithmetic is performed. In saturation arithmetic, when an overflow is sensed, the output is set equal to the maximum (or minimum) allowable value.
• Saturation arithmetic introduces a nonlinearity in the adder, but the signal distortion due to this nonlinearity is small if saturation occurs infrequently.
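A sketch of saturation arithmetic for a fixed-point adder; the Q15-style range limits used here are illustrative:

```python
def saturating_add(x, y, lo=-1.0, hi=1.0 - 2**-15):
    """Add two fixed-point values; on overflow, clip to the register's
    dynamic range instead of wrapping around."""
    s = x + y
    if s > hi:
        return hi    # positive overflow -> clamp to largest value
    if s < lo:
        return lo    # negative overflow -> clamp to smallest value
    return s

# Two's-complement wraparound would turn 0.75 + 0.75 into a large
# negative number; saturation keeps it pinned at the top of the range.
print(saturating_add(0.75, 0.75))   # -> 0.999969482421875
```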

Scaling to prevent Overflow:


• The two methods of preventing overflow are saturation arithmetic and scaling of the input signal to the adder.
• Saturation arithmetic introduces undesirable signal distortion. To limit the signal distortion due to frequent overflows, the input signal to the adder can instead be scaled so that overflow becomes a rare event.
• Let x(n) be the input to the system, hk(n) the impulse response between the input and the output of node-k, and yk(n) the response at node-k. Then |yk(n)| is bounded by max|x(n)| times the sum of |hk(m)| over all m, so scaling the input by the reciprocal of that sum prevents overflow at node-k completely.
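The resulting scaling rule, s = 1 / Σ|hk(m)| (a worst-case, l1-norm bound), can be sketched as follows; the impulse-response values below are made-up:

```python
def scale_factor(h):
    """Input scaling that guarantees no overflow at a node with impulse
    response h: s = 1 / sum(|h(m)|), the worst-case (l1-norm) bound."""
    return 1.0 / sum(abs(v) for v in h)

# Hypothetical impulse response between the input and an adder node.
h_k = [0.5, 1.25, -0.75, 0.25]
s = scale_factor(h_k)   # inputs scaled by s keep |y_k(n)| <= max|x(n)|
print(s)                # 1 / 2.75
```

This bound is conservative; in practice, milder scalings based on energy (l2) norms are often used to preserve signal-to-noise ratio.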
POSSIBLE QUESTIONS
Part - A

1. What are the limitations of impulse invariant mapping technique?


2. Give the transform relation for converting low pass to band pass in digital
domain
3. State the properties of Chebyshev filter.
4. What is impulse invariant mapping? What is its limitation?
5. Give the transform relation for converting low pass to band pass in digital
domain.
6. State the properties of Butterworth filter.
7. Give the bilinear transformation.
8. What is frequency prewarping?

PART-B

1. Explain coefficient quantization effects in direct form realization of IIR filter.

2. A digital system is characterized by the difference equation


y(n) = 0.9y(n - 1) + x(n). Determine the dead band of the system when x(n) = 0 and y(-1) = 12.

3. Design a Butterworth digital filter to meet the following constraint.

Use bilinear transformation mapping technique. Assume T = 1 sec.

4. Describe impulse invariant mapping technique for designing IIR filter.

5. Develop cascade and parallel realization of the system described by the


difference equation y(n) + (3/8)y(n - 1) - (3/32)y(n - 2) - (1/64)y(n - 3) = x(n) + 3x(n - 1) + 2x(n - 2)

6. Design a digital Chebyshev filter to meet the constraints


by using bilinear transformation and assume sampling period T = 1 sec.

7. Obtain direct form-II, cascade and parallel realizations of a discrete-time system described by the difference equation

8. Derive the equation for calculating the order of the Butterworth filter. (6)

9. Using impulse invariant mapping technique, convert the following analog


transfer function into digital. Assume T = 0.1 sec.

10. Describe bilinear transformation mapping for designing IIR filter.(8)

11. Explain how you will develop cascade realizations from a direct form realization. The specification of the desired low pass filter is
a. 0.8 ≤ | H (ω)| ≤ 1.0 ; 0 ≤ ω ≤ 0.2π
b. | H (ω)| ≤ 0.2 ; 0.32π ≤ ω ≤ π

12. Design a Butterworth digital filter using impulse invariant transformation, T = 1 sec. The specification of the desired low pass filter is
a. 0.9 ≤ | H (ω)| ≤ 1.0 ; 0 ≤ ω ≤ π/2
b. | H (ω)| ≤ 0.2 ; 3π/4 ≤ ω ≤ π

13. Design a Butterworth digital filter using bilinear transformation, T = 1 sec.
