
DESIGNING OF LOW COMPLEXITY FIR FILTERS

USING GENETIC ALGORITHMS


A dissertation submitted in partial fulfillment
of the requirements for the award of the degree of
MASTER OF TECHNOLOGY
in Electronics and Communication Engineering
(Communication Systems)

UNDER THE GUIDANCE OF
Dr. Butta Singh
Assistant Professor

SUBMITTED BY
Supreet Kaur
2014ECB1422

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

GURU NANAK DEV UNIVERSITY, REGIONAL CAMPUS


JALANDHAR

(June, 2016)

Candidate's Declaration Form


I hereby certify that the work presented in the thesis entitled "Designing
of Low Complexity FIR Filters using Genetic Algorithms" by Supreet Kaur, in
partial fulfillment of the requirements for the award of the degree of M.Tech.
Electronics and Communication Engineering, Specialization (Communication
Systems), at Guru Nanak Dev University, R.C. Jalandhar, is an authentic record
of my own work carried out during the period January 2016 to June 2016 under
the supervision of Dr. Butta Singh, Assistant Professor of Electronics and
Communication Engineering, Guru Nanak Dev University, R.C. Jalandhar. The
matter presented in this thesis has not been submitted to any other
university/institute for the award of an M.Tech. or any other degree.

Signature of the Student


This is to certify that the above statement made by the student is correct to the
best of my knowledge.

Signature of Supervisor
The M.Tech Viva Voce Examination of Supreet Kaur has been held on
..................................and accepted.

Signature of Supervisor

Signature of Examination Board members

Signature of H.O.D

Acknowledgement
The following is an attempt to offer my heartfelt gratitude to the various
persons who have been of great help, in one way or another, in the successful
completion of this dissertation.
I am indebted to my thesis supervisor Dr. Butta Singh, Department of
Electronics and Communication Engineering, for the gracious encouragement and
very valued constructive criticism that drove me to carry out this project
successfully.
I am deeply grateful to Dr. Jyoteesh Malhotra, Head of the Department of
Electronics Technology, Guru Nanak Dev University, RC Jalandhar, for his
support and encouragement. I also thank the authors and publishers of the
various journals and research papers that aided this work.
I wish to express my heartfelt thanks to the faculty members of the Department
of Electronics and Communication Engineering, Dr. Deepkamal Kaur Randhawa,
Er. Manjeet Singh, Er. Nitika Soni, Er. Himali Sarangal, Er. Harmander
Kaur and Dr. Vinit Grewal, for their goodwill and support, which helped me
greatly in the successful completion of this project, and Guru Nanak Dev
University, RC Jalandhar, for providing access to MATLAB.
A big thank you goes to my parents for always being there when I needed
them the most. Finally, I express my deep sense of gratitude to the Almighty
God for the blessings that made the completion of this dissertation possible.
Supreet Kaur
2014ECB1422


Abstract

With the explosive growth of wireless communication systems and portable
devices, power reduction and system complexity have become major concerns. In
applications such as portable storage devices and personal communications, low
computational cost and low power dissipation, and hence extended battery life,
are a must. With the rapid growth of the Internet and information on demand,
handheld wireless systems are becoming increasingly popular. Given the limited
energy available in a reasonably sized battery, minimum power dissipation and
low computational cost are necessary in digital communication devices. Many
communication systems nowadays utilize Digital Signal Processing (DSP) to
recover the transmitted information. Finite Impulse Response (FIR) filters
have been, and continue to be, an important building block in many DSP
systems.
The numbers of adders, multipliers and shift registers used in an FIR filter
increase the computational cost of the system. Signal switching activity is
the major component of power dissipation in CMOS circuits. This activity
corresponds to the number of energy-consuming transitions in the multiply-and-
accumulate (MAC) unit of the filter when implemented on a DSP. The transition
density of the multiplier depends on the number of 1s present in the desired
filter coefficient values. Since the power of the multiplier depends on these
transitions, they form a measure of the power dissipation and complexity of
the filter.
A Genetic Algorithm (GA) can be implemented as a computer simulation in which
a population of abstract representations (the chromosomes, or genotypes) of
candidate solutions to an optimization problem evolves toward better
solutions. Traditionally, solutions are represented in the form of binary
strings of 0s and 1s, but other encodings are also possible. The working of
the GA is discussed in detail in this thesis. Evolutionary algorithms differ
from traditional search algorithms: they search a population of points in
parallel, not a single point, and for this reason they are often preferred.
Genetic algorithms (GAs) in particular have emerged as a powerful technique
for searching in high-dimensional spaces, capable of solving problems despite
having little knowledge of the problem being solved.
In this thesis, the complexity of the system is minimized by reducing the
number of 1s between the actual filter and the desired filter using genetic
algorithm optimization. The thesis also provides a detailed description of the
power dissipated in CMOS circuits and the techniques used to overcome this
problem.
The error reduction table for three low-pass filters shows that the Hamming
window reduces the error in the filter by 25%. The signal toggling after the
GA is reduced by 50% in the case of the Hamming window, which further reduces
the system complexity and power dissipation. It is therefore concluded that
the Hamming window gives better results than the other windows.


Table of Contents
Candidate's Declaration Form
Acknowledgement ii
Abstract iii
Table of Contents
List of Figures viii
List of Tables xi
List of Abbreviations xii

Chapter 1 Introduction 1-17
1.1 Background
1.2 Types of Signal Processing
1.2.1 Analog Signal Processing
1.2.2 Digital Signal Processing
1.3 Basics of Digital Signal Processing
1.4 Comparison of Digital and Analog Processing
1.5 Types of Digital Filters
1.5.1 Finite Impulse Response (FIR)
1.5.2 Infinite Impulse Response (IIR)
1.5.3 Comparison of FIR and IIR Filter
1.6 Computational Complexity of FIR Filter
1.6.1 Power Dissipation Sources 10
1.6.2 Physical Capacitance 11
1.6.3 Switching Activity 11
1.6.4 Leakage Power 11
1.7 Low Power Design Technologies 12
1.7.1 System Level 12
1.7.2 Algorithm Level 13
1.7.3 Architecture Level 14
1.7.4 Circuit Level 15
1.7.5 Technology Level 16
1.8 Motivation 16
1.9 Objectives of the Thesis 17
1.10 Outline of the Thesis 17

Chapter 2 Literature Survey 18-25
2.1 Related Work 18

Chapter 3 Proposed Method 26-45
3.1 Genetic Algorithm 26
3.1.1 Background 26
3.2 Biological Background 26
3.3 Basic Concept 27
3.4 Why GA? 27
3.5 Genetic Algorithm Cycle 28
3.6 Working Principle 28
3.6.1 Encoding 29
3.6.1.1 Binary Encoding 30
3.6.1.2 Permutation Encoding 30
3.6.1.3 Ternary Encoding 30
3.6.2 Population 31
3.6.3 Fitness Function 31
3.7 Genetic Operators 31
3.7.1 Selection 32
3.7.1.1 Roulette Wheel Method 32
3.7.2 Crossover 34
3.7.2.1 Crossover Rate 34
3.7.3 Mutation 35
3.7.3.1 Mutation Rate 37
3.7.4 Termination of GA 37
3.8 FIR Filter Design 37
3.8.1 FIR Filter Specification 39
3.8.2 Computation of FIR Coefficients 41
3.8.2.1 Window Method 41
3.9 Methodology Used for the Minimization of Number of 1s in FIR Filter 44
3.9.1 Problem Formulation 44
3.9.2 Least Mean Square Error (LMS) 44
3.9.3 Solution Methodology 45
3.9.3.1 Steps of the Algorithm for the Minimization of Number of 1s Using GA 45

Chapter 4 Results and Discussion 46-66
4.1 Filter Response with Optimized Filter Coefficients 48
4.1.1 Rectangular Window 48
4.1.2 Hamming Window 54
4.1.3 Hanning Window 60

Chapter 5 Conclusion and Future Scope 67
5.1 Conclusion 67
5.2 Future Scope 67

References 68-72

List of Figures
Fig 1.1 Basic Filter
Fig 1.2 Block Diagram of Analog Signal Processing
Fig 1.3 Block Diagram of Digital Signal Processing
Fig 1.4 Block Diagram of FIR Filter
Fig 1.5 Block Diagram of IIR Filter
Fig 1.6 CMOS Inverter 11
Fig 1.7 Leakage Current Types: (a) Reverse Biased Diode Current, (b) Sub-Threshold Leakage Current 12
Fig 1.8 Clock Gating 13
Fig 1.9 (a) Original Signal Flow Graph (b) Unrolled Signal Flow Graph 14
Fig 1.10 Original Data Path 14
Fig 1.11 Parallel Implementation 15
Fig 1.12 Pipelining Implementation 15
Fig 1.13 A Two-Input NAND Gate 16
Fig 3.1 Representation of Chromosomes 27
Fig 3.2 Genetic Evolution Flow 29
Fig 3.3 Representation of Phenotype and Genotype 29
Fig 3.4 Binary Encoding 30
Fig 3.5 Permutation Encoding 30
Fig 3.6 Ternary Encoding 30
Fig 3.7 Fitness Function 32
Fig 3.8 Roulette Wheel 33
Fig 3.9 One-Point, Two-Point, and Uniform Crossover 35
Fig 3.10 Flowchart of Crossover 36
Fig 3.11 Mutation Operation 36
Fig 3.12 Flowchart of Mutation 38
Fig 3.13 Flowchart of Filter Design 39
Fig 3.14 Magnitude Frequency Response Specification of Low Pass Filter 40
Fig 3.15 Ideal Frequency Response of Low Pass Filter 41
Fig 3.16 Magnitude Response of Rectangular Window 42
Fig 3.17 Magnitude Response of Hamming Window 43
Fig 3.18 Magnitude Response of Hanning Window 43
Fig 4.1 Low Pass Filter Ideal Response with fc=350, fs=1500, N=16 and Rectangular Window 48
Fig 4.2 Filter Response with Optimized Filter Coefficients with fc=350, fs=1500, N=16 and Rectangular Window 49
Fig 4.3 Low Pass Filter Ideal Response with fc=350, fs=2000, N=16 and Rectangular Window 50
Fig 4.4 Filter Response with Optimized Filter Coefficients with fc=350, fs=2000, N=16 and Rectangular Window 50
Fig 4.5 Low Pass Filter Ideal Response with fc=500, fs=2000, N=16 and Rectangular Window 51
Fig 4.6 Filter Response with Optimized Filter Coefficients with fc=500, fs=2000, N=16 and Rectangular Window 52
Fig 4.7 Low Pass Filter Ideal Response with fc=500, fs=3000, N=16 and Rectangular Window 53
Fig 4.8 Filter Response with Optimized Filter Coefficients with fc=500, fs=3000, N=16 and Rectangular Window 53
Fig 4.9 Low Pass Filter Ideal Response with fc=350, fs=1500, N=16 and Hamming Window 54
Fig 4.10 Filter Response with Optimized Filter Coefficients with fc=350, fs=1500, N=16 and Hamming Window 55
Fig 4.11 Low Pass Filter Ideal Response with fc=350, fs=2000, N=16 and Hamming Window 56
Fig 4.12 Filter Response with Optimized Filter Coefficients with fc=350, fs=2000, N=16 and Hamming Window 56
Fig 4.13 Low Pass Filter Ideal Response with fc=500, fs=2000, N=16 and Hamming Window 57
Fig 4.14 Filter Response with Optimized Filter Coefficients with fc=500, fs=2000, N=16 and Hamming Window 58
Fig 4.15 Low Pass Filter Ideal Response with fc=500, fs=3000, N=16 and Hamming Window 59
Fig 4.16 Filter Response with Optimized Filter Coefficients with fc=500, fs=3000, N=16 and Hamming Window 59
Fig 4.17 Low Pass Filter Ideal Response with fc=350, fs=1500, N=16 and Hanning Window 60
Fig 4.18 Filter Response with Optimized Filter Coefficients with fc=350, fs=1500, N=16 and Hanning Window 61
Fig 4.19 Low Pass Filter Ideal Response with fc=350, fs=2000, N=16 and Hanning Window 62
Fig 4.20 Filter Response with Optimized Filter Coefficients with fc=350, fs=2000, N=16 and Hanning Window 62
Fig 4.21 Low Pass Filter Ideal Response with fc=500, fs=2000, N=16 and Hanning Window 63
Fig 4.22 Filter Response with Optimized Filter Coefficients with fc=500, fs=2000, N=16 and Hanning Window 64
Fig 4.23 Low Pass Filter Ideal Response with fc=500, fs=3000, N=16 and Hanning Window 65
Fig 4.24 Filter Response with Optimized Filter Coefficients with fc=500, fs=3000, N=16 and Hanning Window 65

List of Tables
Table 1.1: Comparison of Digital and Analog Signal Processing
Table 1.2: Comparison of FIR and IIR Filters
Table 3.1: Summary of Important Features of Common Window Functions 42
Table 4.1: Comparison of Simulation Results of Various Windows 66

List of Abbreviations
DSP     Digital Signal Processing
MAC     Multiply and Accumulate
ADC     Analog to Digital Converter
ASIC    Application Specific Integrated Circuit
FPGA    Field Programmable Gate Array
DAC     Digital to Analog Converter
FIR     Finite Impulse Response
IIR     Infinite Impulse Response
CMOS    Complementary Metal Oxide Semiconductor
PMOS    P-Channel Metal Oxide Semiconductor
fc      Cut-Off Frequency
fs      Sampling Frequency
SUMBE   Signed-Unsigned Modified Booth Encoding (Multiplier)
CSD     Canonic Signed Digit
VLSI    Very Large Scale Integration
DLMS    Delayed Least Mean Square
NMOS    N-Channel Metal Oxide Semiconductor
SM      Signed Magnitude
ARM     Advanced RISC Machine
TSP     Travelling Salesman Problem
HD      Hamming Distance
GA      Genetic Algorithm
DNA     Deoxyribonucleic Acid
Pc      Crossover Probability
Pm      Mutation Probability

CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
The evolution of human society has always been closely tied to effective
communication and the exchange of information, enabling human knowledge and
skills to pass from one generation to the next. The last three decades of the
twentieth century, in particular, are often termed the "information age". The
way in which information is transmitted, stored and processed has been
entirely changed by the availability of powerful and fast computers and by the
rapid technological advancement in telecommunications, fuelled by the growth
of the Internet and multimedia. One of the most important enabling
technologies in the development and enhancement of the communication
infrastructure has been signal processing. The field of signal processing
contains the algorithms and hardware that allow the processing of signals
produced by natural or artificial means. These signals include speech, audio,
video, images, satellite and weather data, etc. Processing of these signals
can involve data acquisition, data conversion, data coding, data compression,
transmission, display, etc. When these signals are represented in discrete
form and processed by computers or special-purpose digital hardware, this is
identified as the exciting and rapidly expanding field of Digital Signal
Processing (DSP) [Proakis et al (2004)].
In DSP, a filter is a device or process that removes some unwanted component
or feature from a signal, as shown in Fig 1.1. Filtering is a class of signal
processing whose defining feature is the complete or partial suppression of
some unwanted aspect of the signal. The filter basically functions to remove
unwanted parts of the signal, such as random noise, or to extract the
important information from a noisy signal, such as the frequency components
lying within a certain frequency range [Mitra (2005)]. A filter is also an
electrical network that alters the amplitude and/or phase characteristics of a
signal with respect to frequency. Ideally, a filter will neither add new
frequencies to the input signal nor change the component frequencies of that
signal, but it will change the relative amplitudes and/or phase relationships
of the various frequency components. Filters are often used in electronic
systems to emphasize signals in required frequency ranges and reject signals
that are in other frequency ranges.
Fig 1.1 Basic Filter (raw, unfiltered signal in; filtered signal out)

The primary functions of filters are one of the following [Smith (1997)]:
To limit a signal to a fixed frequency band, as in low-pass, high-pass, and band-pass filters.
To divide a signal into two or more sub-bands, as in filter banks, sub-band coders,
frequency multiplexers, and graphic equalizers.
To change some parts of the frequency spectrum of a signal, as in telephone
channel equalization and audio or video graphic equalizers.
To model the input-output relationship of a system such as telecommunication
channels, the human vocal tract, and music synthesizers.
Depending on the basis of the filter equation and structural implementation, filters
may be broadly classified into the following categories:
Linear filters versus nonlinear filters.
Time-invariant filters versus time-varying filters.
Adaptive filters versus non-adaptive filters.
Recursive versus non-recursive filters.
Direct form, cascade form, parallel form and lattice structures.

1.2 TYPES OF SIGNAL PROCESSING

Signal processing is categorised into two types:
Analog signal processing
Digital signal processing
1.2.1 Analog Signal Processing
An analog processor has a continuous analog signal at both its input x(t) and
its output s(t). Both x(t) and s(t) are functions of a continuous time
variable t and can take an infinite number of values, as shown in Fig 1.2. An
analog filter uses analog circuits made up of electronic components such as
resistors, capacitors and op-amps to obtain the required filtering effect.
Such analog filter circuits are widely used in applications such as noise
reduction, video signal enhancement, graphic equalizers in hi-fi systems, and
many other areas [Mitra (2006)]. At all stages of processing, the signal being
filtered is an electrical voltage or current which is the direct analogue of
the physical quantity (e.g. a sound or video signal or transducer output)
involved.

Fig 1.2 Block Diagram of Analog Signal Processing

Advantages:
Simple and well-established design methodologies
Fast and simple realization
Disadvantages:
Less stable and sensitive to temperature variations
Expensive to realize in large quantities
1.2.2 Digital Signal Processing
In digital signal processing, digital filters consist of a digital processor
that performs numerical calculations on sampled values of the signal. The
processor may be a general-purpose computer such as a PC, or a specialised
digital signal processor chip. Digital filters are used widely in signal
processing applications such as digital image processing, spectrum analysis,
and pattern recognition. Digital filters eliminate a number of problems
associated with their classical analog counterparts and are thus increasingly
preferred as replacements. The analog input signal is first sampled and then
digitized using an ADC (analog to digital converter). The resulting values, in
the form of binary numbers representing successive sampled values of the
input signal, are transferred to the processor, where the numerical
calculations are carried out. Fast DSP processors can handle complex
combinations of filters in parallel or cascade (series), which makes the
hardware requirements relatively simple and compact in comparison with
equivalent analog circuitry [Tan et al (2007)].

1.3 BASICS OF DIGITAL SIGNAL PROCESSING


A digital filter is a system that performs mathematical operations on a
sampled, discrete-time signal to reduce or enhance certain aspects of that
signal. This is in contrast to the other major type of electronic filter, the
analog filter, which is an electronic circuit operating on continuous-time
analog signals. An analog signal may be processed by a digital filter by first
being digitized and represented as a sequence of numbers, then manipulated
mathematically, and then reconstructed as a new analog signal [Mitra (2006)].
Digital filters are a very important part of DSP; in fact, their extraordinary
performance is one of the key reasons that DSP has become so popular. Filters
have two uses: signal separation and signal restoration. Signal separation is
needed when a signal has been contaminated with interference, noise, or other
signals. For example, imagine a device for measuring the electrical activity
of a baby's heart while still in the womb. The raw signal will likely be
corrupted by the breathing and heartbeat of the mother. A filter might be used
to separate these signals so that they can be individually analyzed. Signal
restoration is used when a signal has been distorted in some way. For example,
an audio recording made with poor equipment may be filtered to better
represent the sound as it actually occurred. Another example is the deblurring
of an image acquired with an improperly focused lens or a shaky camera
[Smith (1997)]. A digital system usually consists of an analog to digital
converter to sample the input signal, followed by a microprocessor and some
peripheral components such as memory to store data and filter coefficients, as
shown in Fig 1.3. Finally, a digital to analog converter completes the output
stage. Program instructions (software) running on the microprocessor implement
the digital filter by performing the necessary mathematical operations on the
numbers received from the ADC. In some high-performance applications, an ASIC
or FPGA is used instead of a general-purpose microprocessor, or a specialized
DSP with a specific parallel architecture for expediting operations such as
filtering [Mitra (2006)].


Fig1.3 Block Diagram of Digital Signal Processing

Digital filters may be more expensive than an equivalent analog filter due to
their increased complexity, but they make practical many designs that are
impractical or impossible as analog filters. Since digital filters use a
sampling process and discrete-time processing, they experience latency (the
difference in time between the input and the response), which is almost
irrelevant in analog filters. The cut-off frequency of the pass-band is the
frequency at which the transition from the pass-band to the transition region
occurs. The cut-off frequency of the stop-band is the frequency at which the
transition from the transition region to the stop-band occurs [Brapate (2007)].
Advantages
The following list gives some of the main advantages of digital over analog filters
[Tan et al (2007)].
Digital filters can be designed with an exactly linear phase
They do not suffer from the degradation mechanisms of passive and active
components of analogue filters
Digital filters have better stability, reproducibility and higher orders of
precision
It is possible to realise filters with very low cut-off frequencies
They can be realised as integrated circuits.
Fast DSP processors can handle complex combinations of filters in parallel or
cascade (series), making the hardware requirements relatively simple and
compact in comparison with the equivalent analog circuitry.
Digital filters are easily designed, tested and implemented on a general
purpose computer or workstation.


1.4 COMPARISON OF DIGITAL AND ANALOG PROCESSING


Table 1.1 shows the performance parameters of analog and digital signal
processing. It can be seen that digital circuits are more widely used than
analog circuits because of their better performance and flexibility.

Table 1.1: Comparison of Digital and Analog Signal Processing

Digital: Digital signals are discrete-time signals generated by digital modulation; denoted by square waves.
Analog: Analog signals are continuous signals which represent physical measurements; denoted by sine waves.

Digital: Memory is stored in the form of bits.
Analog: Memory is stored in the form of sine waves.

Digital: Draws only negligible power.
Analog: Draws large power.

Digital: Cost is high and not easily portable.
Analog: Low cost and portable.

1.5 TYPES OF DIGITAL FILTERS


Filters can be divided into several groups according to the requirement. The
two most important types of digital filters are:
FIR (finite impulse response)
IIR (infinite impulse response)

1.5.1 FIR Filters


One of the primary types of filter in DSP is the FIR filter. This filter does
not have any feedback, and for this reason its response is finite: the impulse
response sequence is of finite duration, i.e. it has a finite number of
non-zero terms.
In general, an FIR filter is described by the difference equation given below
[Smith (1997)]:

y(n) = Σ_{k=0}^{M-1} b_k x(n-k)

The output y(n) of the FIR filter is a function of the input signal x(n), as
shown in Fig 1.4. The response of this filter consists of a finite number of
samples M, so it is called a Finite Duration Impulse Response filter.
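The difference equation above can be evaluated directly, sample by sample. The sketch below is a plain (unoptimized) direct-form evaluation with illustrative coefficients, not the thesis's optimized ones; feeding it a unit impulse shows that the output is exactly the finite coefficient sequence, after which it is zero forever.

```python
# Direct evaluation of the FIR difference equation
# y(n) = sum over k from 0 to M-1 of b_k * x(n-k).
import numpy as np

def fir_filter(b, x):
    """Compute the FIR output sample by sample (non-recursive)."""
    M = len(b)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(M):
            if n - k >= 0:            # samples before n = 0 are taken as zero
                y[n] += b[k] * x[n - k]
    return y

b = [0.25, 0.5, 0.25]                 # example coefficients (hypothetical)
x = [1.0, 0.0, 0.0, 0.0]              # unit impulse input
y = fir_filter(b, x)
print(y)                              # impulse response = b, then zero: finite
```

Because there is no feedback term, the impulse response length can never exceed M, which is exactly why the filter is called "finite".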

Fig 1.4 Block Diagram of FIR filter

The basic characteristics of FIR filters are:


Linear phase characteristic
Stability
High filter order (more complex circuits)
FIR filter design essentially consists of two parts
Approximation problem
Realization problem
The approximation stage takes the specification and produces a transfer
function through four steps, as follows:
A desired or ideal response is chosen, usually in the frequency domain.
An allowed class of filters is chosen (e.g. the length N for an FIR filter).
A measure of the quality of approximation is chosen.
A method for finding the best approximation is chosen.
The realization part deals with choosing the structure to implement the
transfer function; this may be in the form of a circuit diagram or a program.
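The approximation steps listed above can be sketched for the window method used later in this thesis: choose an ideal low-pass response, restrict it to an allowed class (length-N FIR), and obtain realizable coefficients by windowing the ideal impulse response. The specification values fc=350, fs=1500, N=16 mirror those used in Chapter 4; the helper name below is an assumption of this sketch, not a standard routine.

```python
# Window-method FIR design sketch: ideal low-pass sinc response truncated
# to N taps by a Hamming window.
import numpy as np

def windowed_sinc_lowpass(fc, fs, N):
    """Length-N low-pass FIR coefficients via the windowed-sinc method."""
    n = np.arange(N)
    m = n - (N - 1) / 2              # center the ideal response on the taps
    wc = 2 * fc / fs                  # cutoff as a fraction of the Nyquist rate
    h_ideal = wc * np.sinc(wc * m)    # samples of the ideal (infinite) response
    window = np.hamming(N)            # the chosen window function
    return h_ideal * window           # realizable, finite coefficient set

h = windowed_sinc_lowpass(fc=350, fs=1500, N=16)
print(h.round(4))
```

Note that the resulting coefficients are symmetric about the center tap, which is what gives the windowed FIR design its linear phase.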

1.5.2 IIR Filter
An IIR filter, also known as a recursive filter [Smith (1997)] and shown in
Fig 1.5, has feedback from output to input; in general, its output sample is a
function of the previous output samples and the present and past input
samples, as described by the following equation:

y(n) = Σ_{k=1}^{N} a_k y(n-k) + Σ_{k=0}^{M} b_k x(n-k)

IIR filters have one or more non-zero feedback coefficients. That is, as a
result of the feedback term, if the filter has one or more poles, then once
the filter has been excited with an impulse there is always an output.
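The recursion above can be demonstrated with the simplest possible case: a one-pole filter with a single feedback coefficient (the values below are hypothetical). Each output sample feeds back into the next, so a single impulse produces an output that decays geometrically but never reaches exactly zero, which is the "infinite" in IIR.

```python
# Recursive evaluation of a one-pole IIR filter:
# y(n) = a1 * y(n-1) + b0 * x(n), a minimal instance of the IIR
# difference equation with one feedback term.
def iir_one_pole(a1, b0, x):
    y = []
    prev = 0.0                        # y(-1), assumed zero initial condition
    for xn in x:
        yn = a1 * prev + b0 * xn      # feedback term + input term
        y.append(yn)
        prev = yn                     # output feeds back into the next step
    return y

x = [1.0, 0.0, 0.0, 0.0, 0.0]         # unit impulse input
y = iir_one_pole(a1=0.5, b0=1.0, x=x)
print(y)                              # prints [1.0, 0.5, 0.25, 0.125, 0.0625]
```

With |a1| < 1 the response decays and the filter is stable; with |a1| >= 1 the same feedback makes the output grow without bound, illustrating why IIR filters can become unstable.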

Fig 1.5 Block Diagram of IIR Filter

The basic characteristics of IIR filter are:


Non-linear phase characteristic
Resulting digital filter has the potential to become unstable

Low filter order (less complex circuits)


1.5.3 Comparison between FIR and IIR Filters
The comparative performance of FIR and IIR filters is shown in Table 1.2. The
table illustrates that FIR filters are more commonly used than IIR filters
because they are more versatile for designing complex structures and give
better results.

Table 1.2: Comparison of FIR and IIR Filters

FIR: FIR filters are non-recursive.
IIR: IIR filters are recursive.

FIR: The impulse response eventually reaches zero.
IIR: The impulse response keeps ringing indefinitely.

FIR: The phase response is linear.
IIR: The phase response is non-linear.

FIR: They can be realized efficiently in hardware.
IIR: Their realization in hardware is very difficult.

FIR: They contain only zeros (no feedback poles).
IIR: They contain both poles and zeros.

FIR: They are always stable.
IIR: They are not always stable.

1.6 COMPUTATIONAL COMPLEXITY OF FIR FILTER

It is well known that FIR filters have some desirable features, such as
stability, low coefficient sensitivity and, if the coefficients are symmetric,
a linear phase response. The drawback of an FIR filter is its relatively high
computational cost due to the large number of multipliers involved. The
complexity of a digital FIR filter is inversely proportional to its transition
bandwidth [Kaiser (1974)]. There are several ways in which a filter can have
different computational complexity. For example, the order of a filter is more
or less proportional to the number of operations, so choosing a low-order
filter reduces the computation time. For discrete filters the computational
complexity is more or less proportional to the number of filter coefficients.
If the filter has many coefficients, for example in the case of
multidimensional signals such as tomography data, it may be worthwhile to
reduce the number of coefficients by removing those which are sufficiently
close to zero.
During FIR filtering, the coefficient memory and data memory data buses
provide the successive coefficients and data values for the weighted-sum
computation. The power dissipation on the coefficient memory data bus hence
depends on the successive coefficient values, and the power dissipation on the
data memory data bus depends on the successive data values. The data memory
data bus power dissipation cannot be controlled during FIR filter synthesis.
The coefficient memory data bus power, however, can very much be minimized by
optimizing the filter coefficients so as to reduce the distance between
successive coefficient values, and also by reducing the total number of signal
toggles in opposite directions between successive coefficients. The
coefficients and the input data samples form the inputs to the multiplier
during FIR filtering. The multiplier power dissipation thus depends on the
number of toggles and also on the number of 1s in these inputs. Coefficient
optimization for reducing the coefficient memory data bus power therefore also
reduces the multiplier power. Higher power reduction can be achieved by
focusing on the less significant bits of the coefficients during minimization.
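The quantity minimized in this thesis, the number of 1s in the coefficient words, is easy to compute once the coefficients are quantized. The sketch below assumes a hypothetical 8-bit signed-magnitude format (counting 1s in the magnitude bits only); the coefficient values are illustrative, not the thesis's designed filter.

```python
# Count the 1s in fixed-point coefficient words: the cost measure that the
# GA-based optimization in this thesis seeks to reduce.
def count_ones(coeffs, bits=8):
    total = 0
    scale = 2 ** (bits - 1) - 1            # full-scale magnitude for the format
    for c in coeffs:
        q = int(round(abs(c) * scale))     # quantized magnitude word
        total += bin(q).count("1")         # population count of the word
    return total

coeffs = [0.125, -0.25, 0.5, 0.5, -0.25, 0.125]
print(count_ones(coeffs))                  # fewer 1s -> fewer multiplier transitions
```

A coefficient set whose quantized words contain fewer 1s causes fewer energy-consuming transitions in the MAC unit, which is why this count serves as a proxy for multiplier power.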
1.6.1 Power Dissipation Sources
In CMOS, the total power dissipated by a circuit is obtained by adding three
components [Yeap (1998)], [Sakurai (2002)]:

P_avg = P_dynamic + P_static + P_leakage

The two main sources of power dissipation are:
Static dissipation
Dynamic dissipation
Dynamic dissipation occurs due to short circuits (switching transients) as
well as the charging and discharging of the load capacitance. This power is
further divided into two components:

P_dynamic = P_switching + P_short-circuit

P_switching arises when the load capacitance C_L of a CMOS circuit is charged
through the PMOS transistor, as shown in Fig 1.6, in order to make the output
voltage transition from 0 to the supply voltage V_dd. Due to this voltage
transition, power is dissipated; it is determined from the product of V_dd and
the transient current I_c [Chandrakasan et al (1995)], [Veendrick (1984)]:

I_c = C_L dV_out/dt

P_short-circuit is the other component of dynamic power dissipation. At a
certain point during the switching transient, both the NMOS and PMOS
transistors conduct simultaneously, which results in a short circuit between
V_dd and ground.
Static dissipation occurs when leakage current is drawn from the supply
voltage. The leakage current is in the nA range and contributes to the overall
power.
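A small numerical illustration of the dominant dynamic term: the standard model P_switching = α · C_L · V_dd² · f, where α is the switching activity factor. All parameter values below are hypothetical examples, not measurements from this thesis.

```python
# Dynamic switching power: P = alpha * C_L * Vdd^2 * f.
def switching_power(alpha, c_load, vdd, freq):
    """Average power dissipated charging/discharging the load capacitance."""
    return alpha * c_load * vdd ** 2 * freq

# Example: 20% activity, 10 pF effective capacitance, 1.8 V supply, 100 MHz.
p = switching_power(alpha=0.2, c_load=10e-12, vdd=1.8, freq=100e6)
print(f"{p * 1e3:.3f} mW")            # prints 0.648 mW
```

The quadratic dependence on V_dd is why voltage scaling is the most effective low-power lever, while reducing α (the signal toggling targeted by this thesis) gives a further linear saving.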

Fig 1.6 CMOS Inverter

1.6.2 Physical Capacitance


Dynamic power consumption is directly proportional to the physical capacitance
being switched. Devices and interconnect are the two primary sources of
physical capacitance. Capacitance can be minimized by using small devices,
less logic, short wires, etc. However, this reduces the current-driving
ability of the transistors and slows the operation, so a designer who is free
to scale the supply voltage will be less concerned with the physical
capacitance and its side effects.
1.6.3 Switching Activity
A large amount of physical capacitance is present inside a chip, but if there
is no switching activity then no dynamic power is consumed. Switching activity
consists of two components. The first is the data rate, which determines how
often, on average, new data arrives at each node. The second is the data
activity, which corresponds to how many energy-consuming transitions are
expected to be triggered by the arrival of each new piece of data.
1.6.4 Leakage Power

Two sources result in leakage power: first, current flowing through
reverse-biased diode junctions, and second, current flowing through
non-conducting transistors. The leakage current depends exponentially on the
threshold voltage and is proportional to the leakage area, as shown in
Fig 1.7. In sub-micron technologies (e.g. 0.6 μm), the power consumed by the
leakage current can be as high as the power consumed by switching activity.

Fig 1.7 Leakage Current Types: (a) Reverse Biased Diode Current,
(b) Sub-threshold Leakage Current

In order to increase the energy efficiency of signal processing circuits and meet the required computing power, designers have developed various design methodologies that can be applied in custom, application-specific integrated circuits. These techniques have been successful in increasing energy efficiency and reducing computational complexity when designing an FIR filter. However, they also suffer from drawbacks: they provide limited functionality, which means they are less flexible.

1.7 LOW POWER DESIGN TECHNIQUES


The objective of this dissertation is to come up with digital filters that are suitable for low power applications. The design variables of CMOS circuits which result in dynamic power consumption have been identified in FIR filters. The levels at which this problem can be addressed in filter design are discussed in detail below: system, algorithm, architecture, circuit and technology level.
1.7.1 System level
A system is basically made up of both software and hardware, each of which affects the power consumption. System design basically includes scheduling, strategy, hardware platform selection and the partitioning of functionality between hardware and software. These decisions have a great impact on the power consumption, and hence low power techniques applied at this level are very effective in reducing it.
If, for example, an instruction-level power model is available for a processor, software power optimization can be performed [Tiwari et al (1994)]. It is observed that frequent use of the cache and faster code are likely to reduce the power consumption of a system.
Two techniques used for low power consumption at the system level are clock gating, as shown in Fig 1.8, and power down. Hardware units that are not working are shut down, i.e. put into sleep mode, so that power can be saved. The clock drivers, which consume 30-40% of the overall power, are gated to reduce switching activity.

Fig 1.8 Clock Gating

1.7.2 Algorithm level


The selection of the algorithm has a great effect on the power consumption. The goal is to select the algorithm with the lowest power consumption that still satisfies the filter specification. The computational part and the storage part determine the cost of an algorithm; the number of iterations and the cost of communication are its complexity measures, and reducing these measures is the key issue in algorithm selection.
One of the most important low power techniques at the algorithm level is algorithmic transformation [Rabaey (1996)]. This technique reduces complexity and exploits concurrency, which automatically reduces the number of operations and hence the power consumption.
In Fig 1.9, the loop unrolling technique [Chandrakasan et al (1995(a))] [Chandrakasan et al (1995(b))] is shown; this transformation aims to reduce the power consumption by up to 20% and also enhances the speed.


Fig 1.9 (a) Original Signal Flow Graph. (b) Unrolled Signal Flow Graph.
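The unrolling idea of Fig 1.9 can be sketched in software terms: the unrolled version processes two samples per iteration, halving the loop overhead while producing the same result. This is a generic illustration of the transformation, not the exact signal flow graph of the figure:

```python
def accumulate(xs):
    """Straightforward loop: one sample per iteration."""
    acc = 0
    for x in xs:
        acc += x
    return acc

def accumulate_unrolled(xs):
    """Loop unrolled by a factor of 2: two samples per iteration."""
    acc = 0
    n = len(xs) - len(xs) % 2
    for i in range(0, n, 2):
        acc += xs[i] + xs[i + 1]   # two operations per loop test
    if n < len(xs):                # handle an odd leftover sample
        acc += xs[-1]
    return acc

data = [1, 2, 3, 4, 5]
assert accumulate(data) == accumulate_unrolled(data) == 15
```

In hardware, the benefit is that the unrolled datapath exposes concurrency, which can then be traded for a lower supply voltage.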

1.7.3 Architecture level


The selection of the algorithm determines the architecture of the system. Voltage scaling is an efficient way to reduce the power consumption: when the supply voltage is reduced, the power consumption is automatically reduced, but the drawback is that the gate delays increase. To overcome this gate delay, parallelism and pipelining [Chandrakasan et al (1992)] are two low power techniques. A data path which determines the larger of C and (A+B) is shown in Fig 1.10.

Fig 1.10 Original Data Path

It is made of an adder and a comparator. The original clock frequency is 40 MHz [Chandrakasan et al (1995(b))]. In the parallel technique shown in Fig 1.11, the throughput is maintained while the supply voltage is reduced; a parallel system with twice the amount of resources is shown. The 40 MHz clock frequency is reduced to half, that is 20 MHz, which allows the supply voltage to be scaled down from 5 V to 2.9 V, resulting in a reduction of the power consumption. In the pipelining method, a pipelining buffer/register is added after the adder, which increases the throughput, as shown in Fig 1.12. If Tadd is equal to Tcomp, the throughput increases by a factor of 2, and as a result the supply voltage can again be scaled down to 2.9 V.
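Using P ∝ C·Vdd²·f, the saving from the parallel scheme can be checked numerically. Ignoring the capacitance overhead of the extra routing (an idealizing assumption), doubling the hardware, halving the clock and scaling the supply from 5 V to 2.9 V gives roughly a 3x power reduction:

```python
def relative_power(c_scale, vdd_scale, f_scale):
    """Power ratio P_new / P_old under P proportional to C * Vdd^2 * f."""
    return c_scale * vdd_scale ** 2 * f_scale

# Parallel datapath: 2x capacitance, Vdd 5 V -> 2.9 V, clock 40 -> 20 MHz.
ratio = relative_power(c_scale=2.0, vdd_scale=2.9 / 5.0, f_scale=0.5)
print(f"P_parallel / P_original = {ratio:.2f}")  # ~0.34
```

The pipelined scheme gives a similar ratio, since it also restores throughput at the reduced supply voltage, but with register overhead instead of duplicated logic.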

Fig 1.11 Parallel Implementation

Fig 1.12 Pipelining Implementation

1.7.4 Circuit level


In CMOS circuits, part of the dynamic power consumption results from spurious transitions (glitches), which consume between 10-40% of the switching power. In order to reduce these transitions, the delays of the signal paths arriving at a gate should be made equal. This can be done by buffer insertion, which increases the capacitance load but reduces the transitions; this technique is known as path balancing. Transistor sizing affects both the power consumption and the delay. Consider Fig 1.13, in which input A, close to the output of the two-input NAND gate, consumes less power than input B, which is near the ground.
Pin ordering means assigning the most frequently switching input signals to the pins closer to the output node. However, the switching activity factors for the different pins must be known, which automatically limits the use of pin ordering [Yeap (1998)]. In this way, the power consumption is reduced without any complexity cost.

Fig 1.13 A Two Input NAND Gate

1.7.5 Technology level


The main optimization that can be done at this level is voltage scaling. Scaling down the supply voltage is very important for improving energy efficiency. Unfortunately, a speed penalty is paid for reducing Vdd, with the delay increasing as Vdd approaches the threshold voltage of the device. The relationship between the gate delay td and Vdd is given by
td ∝ 1/(Vdd - Vt)^2

The main objective is to reduce the power while keeping the overall throughput of the system fixed.
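The delay penalty in the relation above can be quantified with a small numeric sketch. The threshold voltage of 0.7 V used here is an illustrative assumption; with it, scaling Vdd from 5 V to 2.9 V stretches the gate delay by almost a factor of four:

```python
def delay_ratio(vdd_new, vdd_old, vt):
    """Gate delay ratio under td proportional to 1 / (Vdd - Vt)^2."""
    return ((vdd_old - vt) / (vdd_new - vt)) ** 2

r = delay_ratio(vdd_new=2.9, vdd_old=5.0, vt=0.7)
print(f"delay grows by a factor of {r:.1f}")  # ~3.8
```

This is the penalty that the parallelism and pipelining techniques of the previous section are designed to absorb.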

1.8 MOTIVATION
A finite impulse response (FIR) filter is implemented as a series of multiply and accumulate operations on a programmable Digital Signal Processor (DSP). The multiply and accumulate (MAC) unit of a DSP experiences a large amount of switching activity due to signal transitions, which results in higher power consumption and also increases the computational cost of the filter. In order to reduce the computational cost, fewer adders and multipliers should be used, and this can only be achieved by optimization techniques. So the Genetic Algorithm

technique is used to overcome this computational problem in designing FIR filters, as this algorithm is quite simple.
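The MAC-based FIR computation described above amounts to y[n] = Σ h[k]·x[n-k]. A minimal direct-form sketch in Python, with arbitrary illustrative coefficients, makes the per-output cost of one multiply and one add per tap explicit:

```python
def fir_filter(h, x):
    """Direct-form FIR: each output sample is a series of multiply-accumulates."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):     # one MAC operation per tap
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# 3-tap filter with power-of-two coefficients (illustrative values)
h = [0.5, 0.25, 0.25]
print(fir_filter(h, [4.0, 4.0, 4.0, 4.0]))  # [2.0, 3.0, 4.0, 4.0]
```

Every non-zero tap costs one multiplication and one addition per output sample, which is exactly the cost the optimization techniques in this thesis aim to reduce.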

1.9 OBJECTIVES OF THE THESIS


To evaluate the computational complexity of window-technique based FIR filters.
To minimize the computational complexity using optimization techniques and to analyze the performance of different window based FIR filters.

1.10 OUTLINE OF THE THESIS


Chapter 1 presents the background of Digital Signal Processing: various types of digital filters, sources of power dissipation, a detailed description of low power technologies, and the motivation and objectives of the thesis.


Chapter 2 presents the problem formulation in the field of digital filters and a survey of related design methods which focus on filter coefficients with low complexity.
Chapter 3 presents the background of genetic algorithms. GAs are explained in depth, concentrating on the various methods of encoding, selection, reproduction and fitness functions, followed by the design of FIR filters using the window method, the GA based filter design technique and the solution methodology.
Chapter 4 presents the simulation results and a discussion of the comparative analysis of three FIR design windows.
Chapter 5 finally presents the conclusion and further work related to improving the GA based filter design technique.


CHAPTER 2
LITERATURE SURVEY
A general desire in any digital signal processing system design is
that the number of operations (additions and multiplications) needed to compute the
filter response is as low as possible. In certain applications, this desire is a strict
requirement, for example due to limited computational resources, limited power
resources, or limited time. The last limitation is typical in real-time applications.
There are several ways in which a filter can have different computational complexity.
For example, the order of a filter is more or less proportional to the number of
operations. This means that by choosing a low order filter, the computation time can
be reduced.
For discrete filters the computational complexity is more or less proportional to the number of filter coefficients. If the filter has many coefficients, for example in the case of multidimensional signals such as tomography data, it may be relevant to reduce the number of coefficients by removing those which are sufficiently close to zero. In multirate filters, the number of coefficients can be reduced by taking advantage of the signal's bandwidth limits: the input signal is down sampled (e.g. to its critical frequency) and up sampled after filtering.
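The coefficient-removal idea mentioned above can be sketched as follows; the threshold value is an arbitrary assumption for illustration:

```python
def prune_coefficients(h, threshold=1e-3):
    """Zero out taps whose magnitude falls below the threshold.

    Zeroed taps need no multiplication or addition, so the number of
    operations per output sample drops accordingly.
    """
    return [c if abs(c) >= threshold else 0.0 for c in h]

h = [0.5, 0.0004, -0.25, -0.0007, 0.125]
pruned = prune_coefficients(h)
ops_before = sum(1 for c in h if c != 0.0)
ops_after = sum(1 for c in pruned if c != 0.0)
print(pruned, ops_before, "->", ops_after)  # 5 -> 3 multiplications
```

In practice the threshold must be chosen so that the pruned filter still meets the pass-band and stop-band specifications.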

2.1 RELATED WORK


Lu et al [1997] proposed a method for designing low power consumption digital FIR
filter. In this method the digital filter is implemented in the form of cascade
arrangement of low order section. In the first section, optimization technique is used
for the designing of the digital filter which is then fixed. After this the second section
is added, which was designed in the way that it satisfies the first two cascade section
as per the overall requirements. This process continues until the required
specifications are obtained as well as the multi-section filter. In multi-section filters
the minimum number of sections which are used for the input current signal can be
switched by the use of a simple adaptation mechanism. This method results in minimum power consumption of the digital FIR filter.

Lin et al [1998] limited the search to the prototype class of filters obtained by using the Kaiser window. The single parameter, the cut-off frequency (fc), is optimized in the Kaiser window design; this further reduces the complexity of the designed filter by obtaining a new design parameter. It is concluded that the design complexity of other non-linear optimizations is much higher than that of the Kaiser window approach. On the other hand, it is well known that the stop-band attenuation is determined by the Kaiser window, so direct control can be given over the stop-band attenuation.
Rasidi et al [2011] proposed a method which uses a low power serial multiplier and adder, combining multipliers with shift-adders and folding transformation in a linear phase architecture, so as to reduce the overall dynamic power consumption of the FIR filter. The filter was implemented using Xilinx ISE on a Virtex IV FPGA, and the Xilinx XPower analyzer was used to analyze the power consumption; the minimum power achieved was 110 mW at 100 MHz for 8 taps and 8-bit coefficients.
Joshi et al [2010] gave a brief description of the structure and characteristics of Finite Impulse Response (FIR) filters. The FIR filters are designed efficiently using a MATLAB implementation and are also implemented on a field programmable gate array (FPGA). Using these tools the filter coefficients were prototyped; the resource requirements became very low and the power consumption was reduced.
Li [2011] discussed the use of the distributed arithmetic algorithm and its different structures. A 60th-order filter based on an FPGA was implemented. In this algorithm, multiplications are converted into look-up table accesses. The speed of the system was high and fewer resources were used, which makes these filters suitable for low power designs.
Rajput et al [2012] presented the design and implementation of a signed-unsigned modified Booth encoding (SUMBE) multiplier. The Baugh-Wooley multiplier performs signed number multiplication, while the Braun array multiplier performs unsigned number multiplication; providing separate high speed multiplier units for both processes increases the power dissipation. So a modified
Booth encoder circuit which generates the product in parallel was used. Since both kinds of multiplication are done by the same multiplier in the modified system, it reduces the cost and the power dissipation, which results in low computational complexity.
Anantha et al [1992] discussed certain considerations which are taken into account in low power filter design, including the technology, the logic style and the logic implementation. Power dissipation is contributed to by factors such as transitions which occur due to critical race conditions and leakage currents in the circuit. In order to get rid of this problem, a pass-gate logic family with modified threshold is used; it is found to be the best performer for the design of low power FIR digital filters.
Samueli [1989] presented an optimization algorithm for improving FIR filter coefficients in which additional non-zero elements are added to the Canonic Signed Digit (CSD) set in order to compensate for the non-uniform distribution of the CSD set. This algorithm consists of two stages: stage one defines the search methodology for the optimum scale factor, and the second stage is a bivariate local search in the neighbourhood set. This results in an increase in computational complexity from the additional CSD digits, but it improves the frequency response. These techniques can be used with other filters to reduce the complexity.
Dusan et al [1981] discussed the comparison between an optimal and a sub-optimal algorithm used for finite word length design and showed simulation results for FIR filter coefficients whose length varies from 15 to 35. The conclusion was that when the computer resources required for the optimal method are not available, it is worthwhile to apply the local search method to the filter with rounded coefficients.
Darren et al [1995] presented low power implementation approaches with less hardware overhead than traditional FIR filter implementations. Parallel or block processing with duplication of hardware results in a reduction of power consumption: parallel processing with block size L allows the critical path to be charged over a period L times longer compared to a sequential implementation, which leads to low power consumption.

Chetana et al [1995] illustrated that gate-pipelined MAC3 circuits can be clocked at very high speed because of the simplicity of the pipeline stages; however, this results in high power dissipation in the circuit. In comparison, a half-bit pipelined MAC2 was designed which achieves the same performance characteristics as MAC3 but has less area and power dissipation.
Sankarayya et al [1996] discussed a new set of algorithms for the realization of low power FIR filters that use various orders of differences between the coefficients for computing the convolution with the input data. The results of this computation can be stored and reused. Fewer computations are necessary per convolution when difference coefficients are used than when the coefficients are used directly. It was shown that this reduction in computation also reduces the net energy dissipation.
Erdogan et al [1996] presented a new multiplication technique for application to single-multiplier CMOS based DSP processors. Reduction of the switching activity within the multiplier section results in low power consumption. This scheme uses the transpose direct form FIR filter structure. The reduction has been demonstrated with two examples having different word lengths and filter orders, achieving up to 63% power reduction in the FIR filter.
Hezar et al [1996] presented an efficient design procedure for filters whose coefficients are restricted to the ternary set {-1, 0, +1}, implemented as a cascade in a multiplication-free architecture. The programming algorithm minimizes the errors which occur from restricting the coefficients to the ternary set. The power reduction in VLSI was much better than in other current efforts which seek to implement such designs in VLSI.
Horrock et al [1996] discussed an overview of the methodologies that have been used in recent years for low power DSP, including minimization of power at the architecture level and the algorithm level. The paper also lists low power consumption techniques.
An ultra low power delayed least mean square (DLMS) adaptive filter which operates in the sub-threshold region for hearing aid applications has also been discussed. The sub-threshold operation is achieved by a parallel architecture with pseudo-NMOS logic style. This architecture operates at a low clock rate with reduced power while maintaining the throughput [IEEE transactions on VLSI (2003)]. The DLMS filter operates at a frequency of 22 kHz using a 400 mV supply to achieve a 91% improvement in power compared to the CMOS style.
Low power block-based filtering cores have been studied and specially designed for low power filter implementations [Erdogan et al (2003)]. This system uses an algorithmic flow for designing low power filters and uses both 2's complement and sign-magnitude (SM) number representations. It was concluded that power consumption was reduced by 49% compared to conventional filtering cores, with an area overhead of 5%.
Another scheme was described for the implementation of low power cores for hearing aid applications [Zwyssig et al (2001)]. In this method, two power saving approaches were investigated: the first uses a macro-component framework in which cores are assembled on an easily identifiable, hierarchical plug-in basis, and the second uses a system-on-chip strategy. These techniques operate at a low frequency and also result in less computational cost for the FIR filter in the DSP processor. An ARM based system-on-chip platform, in which the core was embedded, was used for design and testing. Using this method, power handling was made possible by power management.
A coefficient decomposition algorithm was used for the design of low power FIR filters, in which the algorithm decomposes the individual coefficients into two sub-components [Erdogan et al (1998)]. A heuristic approach was used for the decomposition, which divides each given coefficient. One part produced by the division is then implemented using a single shift operation, leaving the other part with a reduced word length. This results in a reduction of the switched capacitance and gives a 63% power saving.
Mehendale et al [1996] presented algorithmic and architectural transforms for the design of low power filters, implemented both as hardware macros and as software on a programmable DSP. These transforms address the reduction of power in the program memory, the data buses and the multiplier. The paper also discusses architectural extensions to support some of these transformations. The transforms reduce the computational cost and achieve power reduction.
Young et al [2004] discussed a low power canonic signed digit (CSD) filter and a high speed structure using vertical common sub-expressions. The horizontal common sub-expression method is used in conventional linear phase CSD filters because of their inherently symmetrical filter coefficients; in the vertical method, equal significant bits of adjacent filter coefficients are shared. The method is more efficient when the bit precision of the implementation is lower, and the computational cost also decreases.
Suckley [1991] discussed a genetic algorithm for FIR filter design that produces filter realizations very close to the minimum computational complexity. The method is automatic and efficient. Although minimum computational complexity cannot be guaranteed, the optimization method produces acceptable run times, and an efficient design procedure was needed for this work on genetic algorithm applications. The genetic algorithm is commonly used because of its simplicity.
Merakos et al [1997] presented a novel high level transformation for the implementation of low power FIR filters. The new idea is the reordering of the coefficients of the filter, aiming at minimization of the switching activity. As a measure of the switching activity, the Hamming Distance (HD) between successive coefficients of the filter, stored in a memory, is used. The transformation can be incorporated both in application specific architectures and in general purpose programmable architectures. The reordering of the N coefficients for HD optimization can be formulated as a Travelling Salesman Problem (TSP), a well known NP-complete problem, and a novel heuristic algorithm for a fast and accurate solution to this problem was proposed. The experimental results show that the proposed technique leads to significant power savings in terms of switching activity reduction as well as computation cost.
Soni et al [2011] proposed that the Exponential window provides a better side-lobe roll-off ratio than the Kaiser window, which is useful for applications such as filter design, beam forming and speech processing, and presented a design of digital non-recursive Finite Impulse Response (FIR) filters using the Exponential window. The far-end stop-band attenuation is the most significant parameter when the signal to be filtered has a great concentration of spectral energy. In sub-band coding, a filter is used to separate out various frequency bands for independent processing; in the case of speech, for example, the far-end rejection of the energy in the stop-band should be high in order to minimize the energy leakage from one band to another. Therefore, the filter should be designed in such a way that it provides good far-end stop-band attenuation (the amplitude of the last ripple in the stop-band). A digital filter designed with the Kaiser window has better far-end stop-band attenuation than filters designed with other previously well known adjustable windows such as Dolph-Chebyshev and Saramaki, which are special cases of Ultraspherical windows, but a digital filter with even higher far-end stop-band attenuation than the Kaiser window would be useful. The non-recursive digital FIR filter designed with the Exponential window provides better far-end stop-band attenuation than a filter designed with the well known Kaiser window, which is the advantage of the Exponential window design; hence the computational complexity becomes smaller.
Multipliers play an important role in the design of digital FIR filters. A novel design technique for deriving highly efficient multipliers which operate on a limited range of multiplier values [Turner et al (2004)] was discussed in the literature. Using this technique, Xilinx Virtex field programmable gate array (FPGA) implementations for a poly-phase filter and a discrete cosine transform were derived with area reductions of 31%-70% and speed increases of 5%-35% compared to designs using general-purpose multipliers. This design gives better results than the other fixed coefficient methods.
Algorithmic methods focus on the design of filter coefficients with lower complexity rather than on the optimisation of hardware structures. This in turn leads to lower implementation complexity of the filtering structures. There are two categories of algorithms to solve an approximation problem for FIR filters with powers-of-two coefficients (PWR2): exact and approximate. Exact algorithms guarantee the optimal filter design, i.e. a minimum order of the filter for a given specification; examples of exact algorithms are exhaustive search (which examines all possibilities) and branch-and-bound algorithms [Lim et al (1983)]. Approximate algorithms do not guarantee the optimality of the design, although they can deliver near-optimal designs in less time than exact algorithms. The majority of algorithms for multiplier-less FIR filter design belong to the category of approximate algorithms.
Yunlong et al [2011] proposed a simpler method for designing an FIR filter compared to other existing methods; it is the simplest except for the rectangular window method. The filter transition bandwidth is smaller than 4.65/N for a filter of order N. For the same filter specifications, the filter order obtained by using the new method is much smaller than that obtained using the Kaiser window, if the minimum stop-band attenuation is in the range of 39.5 dB to 48.5 dB and the corresponding maximum pass-band ripple is from 0.35 dB to 0.18 dB.


CHAPTER 3
PROPOSED METHOD
3.1 GENETIC ALGORITHM
GA is a search technique used in computing to find true or approximate solutions to optimization and search problems. GAs are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). GAs differ from classical optimization and search methods in several respects: rather than focusing on a single solution, they operate on a population of candidate solutions.
3.1.1 Background
GA is part of evolutionary computing, which is a fast developing area of artificial intelligence. It was inspired by Darwin's theory of evolution.
GA is a stochastic search method which is used for finding an optimal solution to the evaluation function of an optimization problem [Pittman et al (2000)]. GA was proposed by Holland in the early seventies [Holland et al (1975)] as a computer program which mimics the natural evolutionary process. GA was then extended to functional optimization by De Jong [Jong (1980)], and Goldberg presented a detailed mathematical model of a GA [Goldberg (1989)].

3.2 BIOLOGICAL BACKGROUND


All living organisms consist of cells. In each cell there is a set of chromosomes, which are strings of DNA and serve as a model of the whole organism. A chromosome consists of genes, blocks of DNA, as shown in Fig 3.1. Each gene encodes a particular trait, e.g. the colour of the eyes. Possible settings of a trait (e.g. blue or brown eyes) are called alleles. Each gene has its own position in the chromosome, called its locus. The complete set of genetic material is called the genome, and a particular set of genes in the genome is called the genotype. The organism's phenotype, its development after birth, i.e. physical and mental characteristics such as eye colour or intelligence, is based on the genotype.


Fig 3.1 Representation of Chromosomes

3.3 BASIC CONCEPT


The GA is an optimization and search technique based on the principles of genetics and natural selection. GAs are inspired by Darwin's theory of survival of the fittest: a population composed of many individuals evolves under specified selection rules to a state which maximizes the fitness (i.e. minimizes the cost function). A GA combines survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In each generation, a new set of artificial creatures (strings) is created using bits and pieces of the fittest of the old; an occasional new part is tried for good measure. GAs are good at taking large, potentially huge search spaces and navigating them looking for the best combinations of things, solutions which we might otherwise not find in a lifetime. This optimization technique is different from most traditional methods in the sense that genetic algorithms need the design space to be converted into a genetic space [Cemes et al., (1993)]. So, genetic algorithms work with a coding of the variables.

3.4 WHY GA?


The appeal of GAs comes from their simplicity and elegance as robust search algorithms as well as from their power to discover good solutions rapidly for difficult high-dimensional problems. GAs are useful and efficient when the search space is large, complex or poorly understood, when domain knowledge is scarce or it is very difficult to encode it to narrow the search space, when no mathematical analysis is available, and when traditional search methods fail.


3.5 GENETIC ALGORITHM CYCLE


GA is the process which is based on the laws of natural selection and natural
genetics. A genetic algorithm consists of three main operators [Tang et al., (1996);
Harris & Ifeachor, (1988)]:
Selection
Genetic operation
Replacement
GA differs from classical optimization and search methodologies in several respects. Instead of focusing on only one solution, GA operates on a group of trial solutions in parallel, where a population of individuals is manipulated in each iteration cycle (generation); each individual is termed a chromosome and represents one candidate solution. Within the population, fit individuals survive to reproduce, and their genetic material is recombined to produce new offspring. As in nature, selection provides the necessary mechanism for the better individuals to survive. Each solution is provided with a fitness value which determines how good it is compared to the others in the population. The recombination process is then simulated through crossover, where data strings are exchanged between chromosomes. New genetic material is also introduced through mutation, which causes the alteration of a single bit in a string. The selection, crossover and mutation processes constitute a GA cycle or generation.
A genetic search evolution flow is presented in Fig 3.2. In the remainder of this section, the aspects associated with the fundamental steps of GA described above are presented, such as chromosome representation, encoding schemes, population initialization, fitness function, genetic operators, and selection methods.
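The selection, crossover and mutation cycle described above can be sketched as a tiny GA maximizing a toy fitness function. All parameter values here are illustrative assumptions, not the settings used later in this thesis:

```python
import random

def run_ga(fitness, n_bits=8, pop_size=20, generations=40,
           crossover_rate=0.8, mutation_rate=0.02, seed=1):
    """Minimal generational GA over bit-string chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(pop):
        # Tournament selection: the fitter of two random individuals survives.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop), select(pop)
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            # Mutation flips individual bits with small probability.
            child = [1 - g if rng.random() < mutation_rate else g for g in p1]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: number of 1-bits ("OneMax") — the optimum is all ones.
best = run_ga(fitness=sum)
print(best, sum(best))
```

Replacing the toy fitness with a measure of filter frequency-response error and coefficient cost turns this same skeleton into the filter design loop developed later in this chapter.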

3.6 WORKING PRINCIPLE


The working principle of GA can be explained by considering an unconstrained optimization problem:

Maximize f(x),  xiL ≤ xi ≤ xiU,  for i = 1, 2, ..., N    ...3.1

In order to minimize f(x) for f(x) > 0, the objective function is rewritten as:

Maximize 1/(1 + f(x))    ...3.2

If f(x) < 0, [-f(x)] is maximized instead of minimizing f(x). As a result, both maximization and minimization can be handled by GA.
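The transformation in Eq. 3.2 can be checked with a small numeric sketch: a smaller objective value maps to a larger fitness, so minimizing f is equivalent to maximizing 1/(1 + f) when f(x) > 0:

```python
def to_fitness(f_value):
    """Map an objective value to a fitness to be maximized (Eq. 3.2),
    assuming f(x) > 0."""
    return 1.0 / (1.0 + f_value)

# A smaller objective value yields a larger fitness:
assert to_fitness(0.5) > to_fitness(2.0)
print(to_fitness(0.0), to_fitness(1.0))  # 1.0 0.5
```

The transformation is monotone decreasing in f, so the ranking of solutions is exactly reversed, which is all that selection requires.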
Fig 3.2 Genetic Evolution Flow

3.6.1 Encoding
A GA does not manipulate solutions directly; candidate solutions must be encoded in a form which can be manipulated by the GA. This representation is known as the genetic or chromosome representation. The genetic representation is known as the genotype, and the physical appearance caused by the genes is known as the phenotype, as shown in Fig 3.3.

Fig 3.3 Representation of Phenotype and Genotype

3.6.1.1 Binary encoding
This encoding is commonly used because of its simplicity. In binary encoding, each chromosome is a string of the bits 0 and 1, as given in Fig 3.4. Binary encoding provides many possible chromosomes even with a small number of alleles.

Fig 3.4 Binary Encoding

3.6.1.2 Permutation encoding


This encoding is used in ordering problems, such as the Task Ordering Problem or the
Travelling Salesman Problem; Fig 3.5 shows an example. In permutation encoding, each
chromosome is a string of numbers that determines a position in a sequence.

Fig 3.5 Permutation Encoding

3.6.1.3 Ternary encoding


In this encoding scheme, a filter coefficient is represented by a string of ternary
digits, each of which corresponds to a power-of-two (POT) term. Consider a POT term
k*2^-m, with k in {-1, 0, +1}. If k is positive (+1), digit m of the string is 1; if k
is negative (-1), digit m of the string is -1; otherwise the digit is zero. As an
example, consider 0.421875 in ternary encoding, as given in Fig 3.6:

0.421875 = 2^-1 - 2^-4 - 2^-6, giving the ternary string 1 0 0 -1 0 -1

Fig 3.6 Ternary Encoding
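Under the convention above, a ternary POT string can be decoded back to its coefficient value by summing signed powers of two, and the number of non-zero digits is exactly the quantity the later optimization minimizes. A minimal sketch (the function names are illustrative, not from the thesis):

```python
def pot_decode(digits):
    """Decode a ternary power-of-two string: the digit d at position m
    (1-based) contributes d * 2**-m, with d in {-1, 0, +1}."""
    return sum(d * 2.0 ** -(m + 1) for m, d in enumerate(digits))

def nonzero_digits(digits):
    """Number of non-zero POT terms, i.e. the shift-and-add cost."""
    return sum(1 for d in digits if d != 0)
```

For the worked example, the string 1 0 0 -1 0 -1 decodes to 2^-1 - 2^-4 - 2^-6 = 0.421875 with three non-zero terms.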

3.6.2 Population
Population is set by randomizing the initial sets of solutions. The size of the
population may vary but usually it is fixed to a certain value. Still it is common that
the population is purely generational. This means that the next generation is
constructed by the offsprings, except individual which are preserved only if elitism
operators are used. The population is purely randomized.
The population p t at the generation t can be denoted as a set of chromosomes as

pt xt (1),.xt (2).......xt ( N )

...3.3
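A purely random initial population of this kind can be sketched as follows (bit-string chromosomes; the sizes and the `seed` parameter are illustrative assumptions, not from the thesis):

```python
import random

def init_population(pop_size, n_bits, seed=None):
    """Create a purely random generational population: pop_size
    chromosomes, each a string of n_bits binary genes."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop_size)]
```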

3.6.3 Fitness Function


In order to drive the search of the GA, the fitness of each individual in the
population is evaluated using a fitness function.
The fitness function ranks all candidate phenotype solutions by quality; it is
essentially an objective or cost function. The best solutions are selected and the
rest are rejected, as shown in Fig 3.7. The fitness function depends on the
environment and on the system application that undergoes the GA process. Given a
population p^t at generation t, a GA iteration starts by evaluating the fitness set

F^t = {f^t(1), f^t(2), ..., f^t(N)}                                  ...3.4

where the objective function values are associated with the chromosomes x^t(k),
k = 1, 2, ..., N. The genetic operators and selection are then applied to produce
the population p^(t+1) for the next generation.

3.7 GENETIC OPERATORS


Evolution from one generation to the next is simulated by preserving, rearranging or
altering the genetic material contained in the strings of fit individuals. These
functionalities are provided by the genetic operators. The basic GA operators are:
Selection
Crossover
Mutation
Crossover and mutation are probabilistic operations whose occurrence frequencies
are controlled by the predefined probabilities Pc and Pm respectively. A high
frequency of occurrence (typically 80-90%) is assigned to crossover, as it plays
the major role in the GA, and a low frequency (typically 5-10%) is assigned to
mutation.

Fig 3.7 Fitness Function

3.7.1 Selection
Selection is the process of deciding which individuals are allowed to contribute to
the next generation in the genetic process. The selected individuals survive to
reproduce; this is the equivalent of survival of the fittest. The selection method
used in this work is the Roulette Wheel selection method.
3.7.1.1 Roulette wheel method
Parents are selected according to their fitness values: the better the chromosomes
are, the higher their chances of being selected as parents. The circumference of the
roulette wheel is divided into segments, one per string, proportional to the fitness
values, as seen in Fig 3.8. The wheel is spun n times; each time, the segment under
the wheel pointer is selected, and segments with higher fitness values have a higher
probability of selection. First, ranks are assigned to the population, and then
fitness values are assigned according to rank.

For example, the individual with the lowest fitness value is ranked number 1 and the
individual with the highest fitness value is ranked number N. The following algorithm
describes this process.

Fig 3.8 Roulette Wheel

The i-th string in the population is selected with a probability proportional to its
fitness value f_i. Since the population size is usually kept fixed, the probabilities
of the individual strings being selected must sum to one. The probability of the i-th
string being selected is

p_i = F_i / (F_1 + F_2 + ... + F_n)

where n is the population size.


Step 1: Calculate the fitness f_i of the k parent chromosomes C_i, 1 <= i <= k.
Step 2: Compute the selection probability of the i-th string,
P_i = f_i / (f_1 + f_2 + ... + f_k)
Step 3: Compute the cumulative probability Pc_j by adding the individual
probabilities,
Pc_j = p_1 + p_2 + ... + p_j, j = 1, 2, ..., k, so that 0 <= Pc_j <= 1. Set L = 1.
Step 4: Generate a random number r, 0 <= r <= 1. Set i = 1 and Pc_0 = 0.
Step 5: If Pc_i >= r and Pc_(i-1) < r, chromosome C_i is selected for the mating
pool; else set i = i + 1 and go to Step 5.
Step 6: Set L = L + 1.
Step 7: If L > k, stop; else go to Step 4.
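The cumulative-probability procedure above can be sketched in code as follows (a minimal illustration assuming non-negative fitness values; one call corresponds to one spin of the wheel):

```python
import random

def roulette_select(fitness, rng=random):
    """Select one index with probability proportional to fitness:
    spin once, then walk the cumulative distribution (Steps 2-5)."""
    total = sum(fitness)
    pick = rng.random() * total          # point on the wheel circumference
    cumulative = 0.0
    for i, f in enumerate(fitness):
        cumulative += f
        if pick <= cumulative:
            return i
    return len(fitness) - 1              # guard against rounding error
```

Spinning the wheel many times confirms that fitter strings are chosen proportionally more often.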
3.7.2 Crossover
After the reproduction phase is completed, the population is enriched with better
individuals. Reproduction duplicates good strings but does not create new ones. The
crossover operator is therefore applied to the mating pool in the hope that better
strings can be created. The crossover operator aims at searching the parameter space;
in addition, the search is made in a way that maximally preserves the information
stored in the parent strings, because these parents are instances of good strings
selected during reproduction. Like its counterpart in nature, crossover produces new
individuals that carry parts of both parents' genetic material. Crossover is a
recombination operator that proceeds in three steps: first, a pair of individual
strings is selected at random from the mating pool; then a cross-site is selected at
random along the string length; and finally the position values following the
cross-site are swapped between the two strings. The types of crossover that exist in
the GA are given in Fig 3.9.
3.7.2.1 Crossover rate
The crossover rate, usually denoted Pc, is the probability of crossover in the GA and
varies from 0 to 1. It is calculated as the ratio of the number of pairs to be
crossed to the total population. Typical crossover rates range from 0.5 to 1 for a
population size of 30 to 200. With random cross-sites, the children produced may or
may not contain a combination of the best substrings of the parents, depending on
whether the crossing site falls in an appropriate place. If good strings are created
by crossover, the reproduction operator will generate more copies of them in the next
mating pool; if not, they will not survive long, because reproduction will again
select better strings in subsequent generations, as in Fig 3.10.
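A single-point version of the operator described above can be sketched as follows (an illustrative helper under the stated Pc convention, not the thesis code):

```python
import random

def one_point_crossover(parent1, parent2, pc=0.9, rng=random):
    """Swap the tails of two equal-length strings after a random
    cross-site, with crossover probability pc."""
    if rng.random() >= pc or len(parent1) < 2:
        return parent1[:], parent2[:]        # no crossover: copy parents
    site = rng.randint(1, len(parent1) - 1)  # cross-site inside the string
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2
```

Note that the two children together contain exactly the genetic material of the two parents, only rearranged around the cross-site.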

Fig 3.9 One Point, Two Point and Uniform Crossover

3.7.3 Mutation
After crossover, the strings are subjected to mutation. Mutation of a bit involves
flipping it, changing 0 to 1 and vice versa, with a small mutation probability Pm, as
shown in Fig 3.11. Bit-wise mutation is processed bit by bit by flipping a coin with
probability Pm, which is simulated as follows: a number between 0 and 1 is chosen at
random; if the random number is smaller than Pm, the outcome of the coin flip is
true, otherwise it is false. If the outcome is true for a bit, that bit is flipped;
otherwise it is left unchanged. The bits of a string are mutated independently, that
is, the mutation of one bit does not affect the probability of mutation of the
others. The mutation operator introduces new genetic structures into the population
by randomly modifying some of its building blocks. It helps the search algorithm
escape from local minima, since the modification is not related to any previous
genetic structure of the population, and it creates structures representing other
sections of the search space. A simple genetic algorithm treats mutation only as a
secondary operator, with the role of restoring lost genetic material.
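The bit-wise coin-flipping scheme just described can be sketched as:

```python
import random

def mutate(bits, pm=0.01, rng=random):
    """Flip each bit independently with probability pm (the simulated
    coin flip): 0 becomes 1 and vice versa."""
    return [1 - b if rng.random() < pm else b for b in bits]
```

At pm = 0 the string is returned untouched, and at pm = 1 every bit is flipped; useful rates lie far closer to zero.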

Fig 3.10 Flowchart of Crossover

Suppose, for example, that all the strings in a population have converged to a zero
at a given position; crossover cannot regenerate a one at that position, while a
mutation can. Mutation is thus an insurance policy against the irreversible loss of
genetic material, and it is also used to maintain diversity in the population.

Fig 3.11 Mutation Operation

3.7.3.1 Mutation rate
The mutation rate is the probability of mutation, used to calculate the number of
bits to be mutated. The mutation operator preserves diversity in the population,
which is also very important for the search. Mutation probabilities are small in
natural populations, which leads us to conclude that mutation is appropriately
considered a secondary mechanism of GA adaptation. Typically, a simple GA uses a
population size of 30 to 200 with a mutation rate varying from 0.001 to 0.5. The
flowchart in Fig 3.12 describes the working of mutation.
3.7.4 Termination of GA
Because the GA is a stochastic search method, it is very difficult to formally specify
convergence criteria. As the fitness value of a population may remain constant for a
number of generations before a superior individual is found, the application of
conventional termination criteria becomes problematic. A common practice is to
terminate the GA after a pre-specified number of generations and then test the quality
of the best members of the population against the problem definition. The GA may
be restarted or a fresh search initiated, if no acceptable solutions are found.

3.8 FIR FILTER DESIGN


The design of a digital filter involves following five steps [Infeachor et al (2002)].
Filter Specification: This may include stating the type of filter, for example
a low-pass filter, the desired amplitude and/or phase responses and their
tolerances, the sampling frequency, and the word length of the input data.
Filter Coefficient Calculation: The coefficients of the transfer function H(z)
that satisfy the given specifications are determined in this step. The choice of
coefficient calculation method is influenced by several factors, the most
important being the critical requirements, i.e. the specification. The window,
optimal and frequency sampling methods are the most commonly used.
Realization: This involves converting the transfer function into a suitable
filter network or structure as shown in the Fig 3.13.
Analysis of Finite Word Length Effects: This step analyses the effect of
quantizing the filter coefficients and input data, as well as the effect of
carrying out the filtering operation using fixed word lengths, on the filter
performance.

Fig 3.12 Flowchart of Mutation


Implementation: This involves producing the software code and/or hardware and
performing the actual filtering.

Fig 3.13 Flowchart of Filter Design

3.8.1 FIR Filter Specifications


The requirement specifications include:
Signal characteristics.
The characteristics of the filter.
The manner of implementation.
Other design constraints (e.g. cost).
All of these requirements are application dependent, as shown in Fig 3.14. The
characteristics of digital filters are often specified in the frequency domain. For
frequency-selective filters, such as low-pass and band-pass filters, the
specifications are often in the form of tolerance schemes.

In the pass-band, the magnitude response has a peak deviation of δp, and in the
stop-band it has a maximum deviation of δs. The difference between ωp and ωs gives
the transition width of the filter, and the transition band determines how sharp the
filter is; the magnitude response decreases monotonically from the pass-band to the
stop-band in this region. The following are the key parameters of interest:
(i) δp: peak pass-band deviation (or ripple).
(ii) δs: stop-band deviation.
(iii) ωs: stop-band edge frequency.
(iv) ωp: pass-band edge frequency.
(v) Fs: sampling frequency.
Thus the minimum stop-band attenuation As and the peak pass-band ripple Ap, in
decibels, are given as

As (stop-band attenuation) = -20 log10(δs)
Ap (pass-band ripple) = 20 log10(1 + δp)

Another important parameter is the filter length N, which defines the number of
filter coefficients.
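As a numerical check of these decibel formulas, a stop-band deviation δs = 0.01 corresponds to 40 dB of attenuation. A small sketch of the conversion (the function names are illustrative):

```python
import math

def stopband_attenuation_db(delta_s):
    """As = -20 log10(delta_s): minimum stop-band attenuation in dB."""
    return -20 * math.log10(delta_s)

def passband_ripple_db(delta_p):
    """Ap = 20 log10(1 + delta_p): peak pass-band ripple in dB."""
    return 20 * math.log10(1 + delta_p)
```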

Fig 3.14 Magnitude Frequency Response Specification of Low Pass Filter


3.8.2 Computation of FIR Coefficient


The objective of most FIR coefficient calculation methods is to obtain coefficient
values such that the resulting filter meets the design specifications, such as the
amplitude-frequency response and throughput requirements. The most commonly used
methods are the window, optimal and frequency sampling methods. All three can lead to
linear-phase FIR filters [Phoung (2009)].

3.8.2.1 Window method


In this method, the frequency response of a filter, H_D(ω), and the corresponding
impulse response, h_D(n), are related by the inverse Fourier transform
[Prokias et al (2004)]:

h_D(n) = (1/2π) ∫ H_D(ω) e^(jωn) dω                                  ...3.5

Fig 3.15 Ideal Frequency Response of a Low-pass Filter.

If we know H_D(ω), we can obtain h_D(n) by evaluating the inverse Fourier transform
in this equation. Fig 3.15 shows the ideal frequency response of a low-pass filter,
where ωc is the cut-off frequency and the frequency scale is normalised (T = 1).
Letting the response be unity from -ωc to ωc and zero elsewhere, the impulse response
is given by:

h_D(n) = (1/2π) ∫ (from -ωc to ωc) e^(jωn) dω                        ...3.6
       = 2 fc sin(n ωc) / (n ωc),   n ≠ 0
       = 2 fc,                      n = 0

The ideal infinite impulse response is truncated by using a window. When the window
is multiplied by the ideal impulse response, all the coefficients within the window
are retained and all those outside the window are discarded.
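The truncate-and-window procedure can be sketched as below: a minimal illustration using the normalised cut-off fc/fs and the Hamming window defined later in this section (this is an assumed sketch, not the thesis MATLAB code):

```python
import math

def ideal_lowpass(fc, n_taps):
    """Ideal low-pass impulse response, centred at (n_taps - 1)/2,
    with fc the cut-off frequency normalised by fs."""
    centre = (n_taps - 1) / 2.0
    h = []
    for n in range(n_taps):
        m = n - centre
        if m == 0:
            h.append(2 * fc)                                   # n = 0 term
        else:
            h.append(math.sin(2 * math.pi * fc * m) / (math.pi * m))
    return h

def hamming_window(n_taps):
    """w(n) = 0.54 - 0.46 cos(2*pi*n/M), M = n_taps - 1."""
    M = n_taps - 1
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / M)
            for n in range(n_taps)]

def fir_lowpass(fc, n_taps):
    """Truncate the ideal response with the window: keep the n_taps
    coefficients inside the window, discard the rest."""
    return [h * w for h, w in zip(ideal_lowpass(fc, n_taps),
                                  hamming_window(n_taps))]
```

For fc = 350/1500 and N = 16 this yields a symmetric (linear-phase) coefficient set of the kind reported for the Hamming-window cases in Chapter 4.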
Table 3.1: Summary of Important Features of Common Window Functions

Window        Transition Width    Minimum Stop-Band Attenuation
Rectangular   4π/N                -21 dB
Hamming       8π/N                -53 dB
Hanning       8π/N                -44 dB

Rectangular window:

w(n) = 1,   0 <= n <= M
     = 0,   otherwise

Fig 3.16 Magnitude Response of Rectangular Window


Hamming window:

w(n) = 0.54 - 0.46 cos(2πn/M),   0 <= n <= M
     = 0,                        otherwise

Fig 3.17 Magnitude Response of Hamming Window

Hanning Window:

0.5 0.5 cos2 n M , 0 n M


wn
otherwise
0,

Fig 3.18 Magnitude Response of Hanning Window


3.9 METHODOLOGY TO MINIMIZE THE NUMBER OF 1S IN FIR FILTER COEFFICIENTS

The design of FIR filters is successfully achieved by the GA. The problem is posed as
the minimization of the error between the ideal frequency response and the desired
frequency response, subject to the design specifications in terms of pass-band
ripple, stop-band attenuation, linear phase and power consumption. A further
objective is added: minimizing the number of 1s in the coefficients of the designed
filter. Since the number of 1s determines the switching activity, which increases the
power dissipation, it should be minimized. This problem is solved using the GA.
3.9.1 Problem Formulation
The least mean square error is generally used to define the fitness of the
chromosomes (filter coefficients); the best results are achieved by the least square
error. The transfer function of an FIR filter is

H(z, a) = Σ (n = 0 to N) a_n z^-n

where a is the vector of filter coefficients [a_0, a_1, ..., a_N] and N is the order
of the filter. The frequency response is obtained by substituting z = e^(jωT):

H(ω, a) = Σ (n = 0 to N) a_n e^(-jωnT)

where ω and T are the angular frequency and the sampling period respectively.

3.9.2 Least Mean Square (LMS) Error


The LMS objective is defined as the root of the summed squared error over a discrete
frequency domain, obtained by subtracting the magnitude of the designed filter from
that of the ideal filter:

Error = [ Σ ( |H_I(ω)| - |H_D(ω, a)| )^2 ]^(1/2)

where H_D(ω, a) is the designed filter response and H_I(ω) is the ideal frequency
response.
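The error measure above can be evaluated numerically on a discrete frequency grid, as in this sketch (the grid and the function names are illustrative; T = 1 is assumed):

```python
import cmath
import math

def magnitude_response(coeffs, w):
    """|H(w)| for H(w) = sum over n of a_n * e^(-j*w*n), with T = 1."""
    return abs(sum(a * cmath.exp(-1j * w * n)
                   for n, a in enumerate(coeffs)))

def lms_error(coeffs, ideal_mags, grid):
    """Root of the summed squared magnitude error over the grid."""
    return math.sqrt(sum((m_ideal - magnitude_response(coeffs, w)) ** 2
                         for m_ideal, w in zip(ideal_mags, grid)))
```

By construction the error is zero when the designed magnitude matches the ideal one at every grid point, and grows as the responses diverge.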

3.9.3 Solution Methodology
The intent of this work is to optimize the coefficients of the FIR filter so as to
minimize the number of 1s while satisfying the desired characteristics in terms of
stop-band attenuation and pass-band ripple. This section lists the steps of the
algorithm that achieves this objective.
3.9.3.1 Steps of the algorithm for the minimization of the number of 1s using GA
Step 1: Compute the filter coefficients h_I(n) and the frequency response H_I of the
ideal FIR filter for 0 <= n <= N-1.
Step 2: Calculate the number of 1s in the FIR filter coefficients h_I(n) using
ternary encoding.
Step 3: Set the number of chromosomes (k), the mutation rate Pm, the crossover rate
Pc and the stopping criterion.
Step 4: Populate k sets of possible designed solutions with symmetric coefficients
H_D(n), 0 <= i <= k-1 and 0 <= n <= N-1.
Step 5: Compute the frequency response of the coefficient chromosomes H_D in the
population.
Step 6: Calculate the number of 1s in each coefficient chromosome.
Step 7: Evaluate the fitness of the chromosomes.
Step 8: Apply Roulette wheel selection.
Step 9: Apply the crossover operator at the desired rate.
Step 10: Mutate at the desired rate.
Step 11: Evaluate the fitness of the chromosomes again.
Step 12: If the stopping criterion is met, store the chromosomes according to
fitness; else go to Step 8.
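Steps 2 and 6 both require counting the 1s in a coefficient set. Under a simple sign-magnitude fixed-point quantization (an illustrative assumption here; the thesis uses its own ternary and 32-bit quantization schemes), the count can be sketched as:

```python
def count_ones(coeffs, frac_bits=8):
    """Total number of 1-bits in the sign-magnitude fixed-point
    representation of the coefficients; fewer 1s means fewer
    shift-and-add operations and less switching activity."""
    total = 0
    for c in coeffs:
        q = int(round(abs(c) * (1 << frac_bits)))  # quantized magnitude
        total += bin(q).count("1")
    return total
```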


CHAPTER 4
RESULTS AND DISCUSSIONS
This chapter presents the results of low-power, low-computational-complexity FIR
filter design using the GA, implemented in the MATLAB environment. The filters vary
in terms of cut-off frequency (fc), sampling frequency (fs) and number of
coefficients (N), and are designed using three windows:
Rectangular Window
Hamming Window
Hanning Window
Minimization of the number of 1s using the GA is formulated as a multi-objective
algorithm which reduces the mean square error between the ideal filter and the
desired FIR filter as well as the number of 1s in the coefficients. The coefficient
values are quantized to 32-bit representations, which form the initial sets of
coefficients for the optimization, and the results obtained with the GA are
presented.
The minimization of computational complexity using the genetic optimization
technique is formulated as a multi-objective minimization problem which minimizes
both the number of 1s and the mean square error between the ideal filter and the
desired FIR filter. To generate solutions, the weighted method is used: it converts
the multi-objective problem into a single-objective one by assigning appropriate
weights to the objectives. Solutions are generated by trading off the objectives,
and the procedure continues until the best combination of solutions has been
achieved. The best solution is then selected on the basis of fitness, the reduction
in computational complexity corresponding to the minimum square error. The algorithm
is applied to FIR filters designed with all three windows.
Below are the response graphs of the ideal and the desired filters, which show the
reduction in computational cost obtained by decreasing the number of 1s using the
GA. The graphs show the linear phase response of the FIR filters, which are designed
by evaluating the window techniques. Among the three windows, the best results are
obtained with the Hamming window, which shows a smaller attenuation deviation than
the others; moreover, the percentage reduction in the least square error is about
42% in the case of the Hamming window, compared with the other two windows.
Fig 4.1 to 4.24 show the results of the proposed method applied to optimize the
number of 1s in FIR filters with different specifications and windows.


4.1 FILTER RESPONSE WITH OPTIMIZED FILTER COEFFICIENTS

4.1.1 RECTANGULAR WINDOW
Case 1: Specifications

Window Type: Rectangular
Sampling frequency (fs): 1500
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: -0.0424, -0.0051, 0.0566, 0.0219, -0.0831, -0.0637, 0.1717, 0.4260,
0.4260, 0.1717, -0.0637, -0.0831, 0.0219, 0.0566, -0.0051, -0.0424
Actual number of 1s: 38
Optimized coefficients: -0.0391, -0.0078, 0.0547, 0, 0.0078, -0.0625, 0.1484, 0.4141,
0.4141, 0.1484, -0.0625, 0.0078, 0, 0.0547, -0.0078, -0.0391
Reduced number of 1s: 30

Fig 4.1 Low Pass Filter Ideal Response with Specification fc=350, fs=1500, N=16 and Rectangular Window

Fig 4.2 Filter Response with Optimized Filter Coefficients fc=350, fs=1500, N=16 and Rectangular Window

Case 2: Specifications

Window Type: Rectangular
Sampling frequency (fs): 2000
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: 0.0392, 0.0372, -0.0135, -0.0688, -0.0591, 0.0487, 0.2116, 0.3326,
0.3326, 0.2116, 0.0487, -0.0591, -0.0688, -0.0135, 0.0372, 0.0392
Actual number of 1s: 38
Optimized coefficients: 0.0313, 0.0547, -0.0078, -0.0938, -0.0156, 0.0391, 0.2422, 0.2422,
0.2422, 0.2422, 0.0391, -0.0156, -0.0038, -0.0078, 0.0547, 0.0313
Reduced number of 1s: 34

Fig 4.3 Low Pass Filter Ideal Response with Specification fc=350, fs=2000, N=16 and Rectangular Window

Fig 4.4 Filter Response with Optimized Filter Coefficients fc=350, fs=2000, N=16 and Rectangular Window


Case 3: Specifications

Window Type: Rectangular
Sampling frequency (fs): 2000
Cut off frequency (fc): 500
Number of coefficients (N): 16
Original coefficients: -0.0300, -0.0346, 0.0409, 0.0500, -0.0643, -0.0900, 0.1501, 0.4502,
0.4502, 0.1501, -0.0900, -0.0643, 0.0500, 0.0409, -0.0346, -0.0300
Actual number of 1s: 40
Optimized coefficients: -0.1094, -0.0156, 0.0469, 0.0625, -0.0078, -0.0938, 0.1172, 0.5000,
0.5000, 0.1172, -0.0938, -0.0078, 0.0625, 0.0469, -0.0156, -0.1094
Reduced number of 1s: 28

Fig 4.5 Low Pass Filter Ideal Response with Specification fc=500, fs=2000, N=16 and Rectangular Window

Fig 4.6 Filter Response with Optimized Filter Coefficients fc=500, fs=2000, N=16 and Rectangular Window

Case 4: Specifications

Window Type: Rectangular
Sampling frequency (fs): 3000
Cut off frequency (fc): 500
Number of coefficients (N): 16
Original coefficients: 0.0424, 0.0245, -0.0289, -0.0707, -0.0455, 0.0637, 0.2122, 0.3183,
0.3183, 0.2122, 0.0637, -0.0455, -0.0707, -0.0289, 0.0245, 0.0424
Actual number of 1s: 40
Optimized coefficients: 0.0234, 0.0156, -0.0156, -0.0313, -0.0078, 0.0078, 0.2344, 0.3281,
0.3281, 0.2344, 0.0078, -0.0078, -0.0313, -0.0156, 0.0156, 0.0234
Reduced number of 1s: 36

Fig 4.7 Low Pass Filter Ideal Response with Specification fc=500, fs=3000, N=16 and Rectangular Window

Fig 4.8 Filter Response with Optimized Filter Coefficients fc=500, fs=3000, N=16 and Rectangular Window


4.1.2 HAMMING WINDOW
Case 1: Specifications

Window Type: Hamming
Sampling frequency (fs): 1500
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: -0.0034, -0.0006, 0.0131, 0.0087, -0.0489, -0.0491, 0.1566, 0.4217,
0.4217, 0.1566, -0.0491, -0.0489, 0.0087, 0.0131, -0.0006, -0.0034
Actual number of 1s: 26
Optimized coefficients: 0.0078, 0.0078, 0.0078, -0.0234, 0.0078, 0.0078, -0.1797, -0.4609,
-0.4609, -0.1797, 0.0078, 0.0078, -0.0234, 0.0078, 0.0078, 0.0078
Reduced number of 1s: 24

Fig 4.9 Low Pass Filter Ideal Response with Specification fc=350, fs=1500, N=16 and Hamming Window

Fig 4.10 Filter Response with Optimized Filter Coefficients fc=350, fs=1500, N=16 and Hamming Window

Case 2: Specifications

Window Type: Hamming
Sampling frequency (fs): 2000
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: 0.0031, 0.0045, -0.0031, -0.0274, -0.0347, 0.0375, 0.1931, 0.3239,
0.3239, 0.1931, 0.0375, -0.0347, -0.0274, -0.0031, 0.0045, 0.0031
Actual number of 1s: 32
Optimized coefficients: 0.0078, 0.0078, -0.0234, 0.0078, 0.0078, 0.0078, 0.1797, 0.3047,
0.3047, 0.1797, 0.0078, 0.0078, 0.0078, -0.0234, 0.0078, 0.0078
Reduced number of 1s: 26

Fig 4.11 Low Pass Filter Ideal Response with Specification fc=350, fs=2000, N=16 and Hamming Window

Fig 4.12 Filter Response with Optimized Filter Coefficients fc=350, fs=2000, N=16 and Hamming Window


Case 3: Specifications

Window Type: Hamming
Sampling frequency (fs): 2000
Cut off frequency (fc): 500
Number of coefficients (N): 16
Original coefficients: -0.0024, -0.0041, 0.0005, -0.0199, -0.0378, -0.0693, -0.1369, 0.4456,
0.4456, -0.1369, -0.0693, -0.0378, -0.0199, 0.0095, -0.0041, -0.0024
Actual number of 1s: 28
Optimized coefficients: 0.0078, 0.0078, 0.0078, 0.0078, 0.0078, -0.0234, 0.1172, 0.4922,
0.4922, 0.1172, -0.0234, 0.0078, 0.0078, 0.0078, 0.0078, 0.0078
Reduced number of 1s: 16

Fig 4.13 Low Pass Filter Ideal Response with Specification fc=500, fs=2000, N=16 and Hamming Window

Fig 4.14 Filter Response with Optimized Filter Coefficients fc=500, fs=2000, N=16 and Hamming Window

Case 4: Specifications

Window Type: Hamming
Sampling frequency (fs): 3000
Cut off frequency (fc): 500
Number of coefficients (N): 16
Original coefficients: -0.0024, -0.0041, 0.0095, 0.0199, -0.0378, -0.0693, 0.1369, 0.4456,
0.4456, 0.1369, -0.0693, -0.0378, 0.0199, 0.0095, -0.0041, -0.0024
Actual number of 1s: 32
Optimized coefficients: -0.0469, -0.0234, 0.0547, 0.0078, -0.0547, -0.0625, 0.1172, 0.4844,
0.4844, 0.1172, -0.0625, -0.0547, 0.0078, 0.0547, -0.0234, -0.0469
Reduced number of 1s: 22

Fig 4.15 Low Pass Filter Ideal Response with Specification fc=500, fs=3000, N=16 and Hamming Window

Fig 4.16 Filter Response with Optimized Filter Coefficients fc=500, fs=3000, N=16 and Hamming Window


4.1.3 HANNING WINDOW
Case 1: Specifications

Window Type: Hanning
Sampling frequency (fs): 1500
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: 0, -0.0002, 0.0094, 0.0076, -0.0459, -0.0477, 0.1553, 0.4213,
0.4213, 0.1553, -0.0477, -0.0459, 0.0076, 0.0094, -0.0002, 0
Actual number of 1s: 28
Optimized coefficients: 0.0078, 0, 0.0078, 0.0391, -0.0078, -0.0547, 0.1172, 0.4141,
0.4141, 0.1172, -0.0547, -0.0078, 0.0391, 0.0078, 0, 0.0078
Reduced number of 1s: 22

Fig 4.17 Low Pass Filter Ideal Response with Specification fc=350, fs=1500, N=16 and Hanning Window

Fig 4.18 Filter Response with Optimized Filter Coefficients fc=350, fs=1500, N=16 and Hanning Window

Case 2: Specifications

Window Type: Hanning
Sampling frequency (fs): 2000
Cut off frequency (fc): 350
Number of coefficients (N): 16
Original coefficients: 0, 0.0016, -0.0022, -0.0283, -0.0326, 0.0365, 0.1914, 0.3290,
0.3290, 0.1914, 0.0365, -0.0326, -0.0283, -0.0022, 0.0016, 0
Actual number of 1s: 36
Optimized coefficients: 0.0078, -0.0234, -0.0078, -0.0078, -0.0078, 0.0547, 0.2109, 0.3203,
0.3203, 0.2019, 0.0547, -0.0078, -0.0078, -0.0078, -0.0234, 0.0078
Reduced number of 1s: 30

Fig 4.19 Low Pass Filter Ideal Response with Specification fc=350, fs=2000, N=16 and Hanning Window

Fig 4.20 Filter Response with Optimized Filter Coefficients fc=350, fs=2000, N=16 and Hanning Window


Case 3: Specifications

Window Type: Hanning
Sampling frequency (fs): 2000
Cut off frequency (fc): 500
Number of coefficients (N): 16
Original coefficients: 0, -0.0015, 0.0068, 0.0173, -0.0355, -0.0675, 0.1357, 0.4452,
0.4452, 0.1375, -0.0675, -0.0355, 0.0173, 0.0068, -0.0015, 0
Actual number of 1s: 40
Optimized coefficients: -0.0625, -0.0078, 0.0078, 0.0547, -0.0156, -0.1172, 0.0391, 0.4375,
0.4375, 0.0391, -0.1172, -0.0156, 0.0547, 0.0078, -0.0078, 0.0625
Reduced number of 1s: 30

Fig 4.21 Low Pass Filter Ideal Response with Specification fc=500, fs=2000, N=16 and Hanning Window (magnitude in dB and phase in degrees versus normalized frequency, ×π rad/sample)


Fig 4.22 Filter Response with Optimized Filter Coefficients with fc=500, fs=2000, N=16 and Hanning Window (magnitude in dB and phase in degrees versus normalized frequency, ×π rad/sample)

Case 4: Specifications

Window Type: Hanning
Sampling frequency (fs): 3000 Hz
Cut-off frequency (fc): 500 Hz
Number of coefficients (N): 16
Original coefficients: 0, 0.0011, -0.0048, -0.0244, -0.0251, 0.0477, 0.1919, 0.3148, 0.3148, 0.1919, 0.0477, -0.0251, -0.0244, -0.0048, 0.0011, 0
Actual number of 1s: 32
Optimized coefficients: -0.0625, 0, 0.0078, 0.0156, -0.0313, 0.0078, 0.2344, 0.3750, 0.3750, 0.2344, 0.0078, -0.0313, 0.0156, 0.0078, 0, -0.0625
Reduced number of 1s: 30
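The "number of 1s" metric used throughout these cases counts set bits in the binary representation of the quantized coefficients. The sketch below is illustrative only: it assumes 8-bit values with 7 fractional bits (step 1/128 = 0.0078125, which matches the granularity of the optimized coefficients above) counted in sign-magnitude form. Since the thesis does not state its exact representation (sign-magnitude versus two's complement, word length), the result need not match the tabulated counts.

```python
# Case 4 optimized coefficients from the table above
optimized = [-0.0625, 0, 0.0078, 0.0156, -0.0313, 0.0078, 0.2344, 0.3750,
             0.3750, 0.2344, 0.0078, -0.0313, 0.0156, 0.0078, 0, -0.0625]

def ones_in_coeffs(coeffs, frac_bits=7):
    total = 0
    for c in coeffs:
        q = round(abs(c) * (1 << frac_bits))  # quantized magnitude (assumption)
        total += bin(q).count("1")            # population count of the word
    return total

print(ones_in_coeffs(optimized))  # → 22 under this assumed representation
```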


Fig 4.23 Low Pass Filter Ideal Response with Specification fc=500, fs=3000, N=16 and Hanning Window (magnitude in dB and phase in degrees versus normalized frequency, ×π rad/sample)

Fig 4.24 Filter Response with Optimized Filter Coefficients with fc=500, fs=3000, N=16 and Hanning Window (magnitude in dB and phase in degrees versus normalized frequency, ×π rad/sample)



The results for the FIR filters designed with the three window types, in terms of error and percentage reduction in the number of 1s after applying the genetic algorithm, are summarised in Table 4.1 below.

Table 4.1: Comparison of Simulation Results for Various Windows

Window Type   fc (Hz)   fs (Hz)   N    Original 1s   Optimized 1s   Error      % Reduction in No. of 1s
Rectangular   350       1500      16   38            30             0.6518     21.05%
Rectangular   350       2000      16   38            34             0.8396     10.52%
Rectangular   500       2000      16   40            28             0.832937   30%
Rectangular   500       3000      16   40            36             0.6617     10%
Hamming       350       1500      16   26            24             0.629      7.69%
Hamming       350       2000      16   32            24             0.5718     25%
Hamming       500       2000      16   28            16             0.673      42.85%
Hamming       500       3000      16   32            22             0.5367     31.25%
Hanning       350       1500      16   28            22             0.5096     21.42%
Hanning       350       2000      16   36            30             0.3898     16.66%
Hanning       500       2000      16   40            24             0.3146     40%
Hanning       500       3000      16   32            30             0.2496     6.25%
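The last column of the comparison is the relative reduction in the number of 1s, computed as (original − optimized)/original. A quick worked check against the first Rectangular row:

```python
# % reduction in No. of 1s = (original - optimized) / original
orig_ones, opt_ones = 38, 30          # first Rectangular row of Table 4.1
reduction = 100 * (orig_ones - opt_ones) / orig_ones
print(f"{reduction:.2f}%")            # → 21.05%
```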


CHAPTER 5
CONCLUSION AND FUTURE WORK
5.1 CONCLUSION
In the present work, error-minimization algorithms are presented for the low-power, low-complexity realization of FIR filters on programmable DSPs. An analysis shows that the number of multipliers and adders in a filter drives its computational cost, and that signal toggling between values is the main cause of power dissipation. An optimization technique is applied to address this problem.
The optimization algorithm is presented in detail; its aim is to minimize signal toggling, that is, the number of 1s in the coefficient representation. The technique adopted for this minimization is the genetic algorithm. A GA can be implemented as a computer simulation in which a population of abstract representations (the chromosomes, or genotypes) of candidate solutions to the optimization problem evolves toward better solutions. GAs have emerged as a powerful technique for searching high-dimensional spaces, and they can solve problems with little prior knowledge of the problem being solved.
The error reduction for the three low-pass filter cases shows that the Hamming window reduces the filter error by 25%. After the GA, signal toggling is reduced by nearly 50% in the case of the Hamming window, which further lowers the system complexity and power dissipation. It is therefore concluded that the Hamming window gives better results than the other windows.
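The evolutionary loop described above can be sketched as follows. This is an illustrative toy implementation, not the thesis's actual code: the quantization step (1/128), population size, ±1 LSB mutation, elitist selection and the 0.01 penalty weight on the ones count are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
SCALE = 128   # assumed quantization: 7 fractional bits, step 1/128
N = 16

# Reference design: windowed-sinc low pass (fc=350 Hz, fs=2000 Hz, Hanning)
n = np.arange(N)
m = n - (N - 1) / 2                       # never zero for even N
wc = 2 * np.pi * 350 / 2000
h_ref = np.sin(wc * m) / (np.pi * m) * 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))

w = np.linspace(0, np.pi, 256)
E = np.exp(-1j * np.outer(w, n))          # DTFT evaluation matrix
H_ref = E @ h_ref

def ones_count(q):
    # total 1s over the binary magnitudes (sign-magnitude assumption)
    return sum(bin(abs(int(v))).count("1") for v in q)

def fitness(q):
    err = np.max(np.abs(E @ (q / SCALE) - H_ref))   # response deviation
    return err + 0.01 * ones_count(q)               # assumed penalty weight

base = np.round(h_ref * SCALE).astype(int)          # quantized starting point
pop = [base] + [base + rng.integers(-1, 2, N) for _ in range(39)]

for _ in range(200):                                # evolve with elitism
    pop.sort(key=fitness)
    parents = pop[:10]                              # keep the fittest
    pop = parents + [p + rng.integers(-1, 2, N) for p in parents for _ in range(3)]

best = min(pop, key=fitness)
```

Because the fittest individuals are carried over unchanged each generation, the best solution found is never worse than the quantized starting point under this fitness.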

5.2 FUTURE WORK


The minimization of error and of signal toggling using fuzzy logic and artificial neural networks (ANNs) in combination with genetic algorithms is expected to receive more attention in the future. It is foreseen that more hybrid systems will be developed for signal processing applications, with GAs as important components of these developments.


REFERENCES

Anantha, P. et al. (1992). Low power CMOS digital design. IEEE Journal of Solid-State Circuits, 27(4): 473-484.
Barapate, R. A. (2007). Digital Signal Processing, Pune: Tech-Max Publication.
Chandrakasan, A. and Brodersen, R. W. (1995). Low Power Digital CMOS Design. Kluwer Publishers, 2-5.
Chandrakasan, A. et al. (1995). Optimizing power using transformations. IEEE Transactions on Computer-Aided Design, 14(1): 12-31.
Darren, N. et al. (1995). Low power digital filter architectures. IEEE Symposium on
Circuit and Systems , 1: 231-234.
Erdogan, A. T. and Arslan, T. (2003). Low power block based FIR filtering cores. Proceedings of the International Symposium on Circuits and Systems (ISCAS '03), 5: 341-344.
Erdogan, A. T. and Arslan, T. (1998). A coefficient segmentation algorithm for low power implementation of FIR filters. Electronics Letters, 34(19): 1817-1819.
Erdogan, A. T. and Arslan, T. (1996). Low power multiplication scheme for FIR filter implementation on single multiplier CMOS DSP processors. Electronics Letters, 32(21): 123-125.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. San Francisco: Addison-Wesley.


Hezar, R. and Madisetti, V. K. (1996). Low power digital filter implementation using ternary coefficients. IEEE Workshop on VLSI Signal Processing, 179-188.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.
Horrocks, D. H. et al. (1996). Low power design for DSP: methodologies and techniques. Microelectronics Journal (Elsevier), 731-744.
Ifeachor, E. C. and Jervis, B. W. (2002). Finite impulse response (FIR) filter design, in Digital Signal Processing: A Practical Approach. South Asia: Prentice Hall, 342-440.
Jang, Y. and Yang, S. (2004). Low power CSD linear phase FIR filter structure using vertical common sub-expression. Electronics Letters, 38(15): 777-779.
Jong, K. D. (1980). Hybrid methods using genetic algorithms for global optimization. IEEE Transactions on Systems, Man, and Cybernetics, 10: 566-574.


Joshi, S. and Ainapure, B. (2010). FPGA based FIR filter. International Journal of
Engineering Science and Technology, 2(12): 7320-7323.
Kaiser, J. F. (1974). Nonrecursive digital filter design using the I0-sinh window function. Proceedings of the IEEE International Symposium on Circuits and Systems, 20-30.
Kodek, D. and Steiglitz, K. (1981). Comparison of optimal and local search methods for designing finite word length FIR digital filters. IEEE Transactions on Circuits and Systems, 28(1).
Li, J. et al. (2011). Design and simulation of 60-order filter based on FPGA. In Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), IEEE: 113-115.
Lim, Y. C. and Parker, S. R. (1983). FIR filter design over a discrete powers-of-two coefficient space. IEEE Transactions on Acoustics, Speech, and Signal Processing, 31(3): 583-591.
Lin, Y. P. and Vaidyanathan, P. P. (1998). A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks. IEEE Signal Processing Letters, 5(6): 132-134.
Lu, W. S. et al. (1997). Sequential design of FIR digital filters for low-power DSP applications. Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers, IEEE: 701-704.
Mahesh, M. S. et al. (1996). Algorithmic and architectural transformations for low power realization of FIR filters. IEEE 11th Conference on VLSI Design, Proceedings: 12-17.
Merakos, P. K. et al. (1997). A novel transformation for reduction of switching activity in FIR filters implementation. IEEE 13th Conference on Digital Signal Processing, Proceedings: 653-656.
Mitra, S. K. (2005). Digital Signal Processing. New York: Tata McGraw-Hill.
Nagendra, C. et al. (1995). Low power consideration in the design of pipelined FIR
filters. IEEE Symposium on Low Power Electronics, 32-33.
Phuong, N. H. (2009). The FIR Filter Design: The Window Design Method.
Pittman, J. and Murthy, C. A. (2000). Fitting optimal piecewise linear functions using genetic algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22: 701-718.
Proakis, J. G. and Manolakis, D. G. (2004). Digital Signal Processing. Prentice-Hall India.
Rabaey, J. M. and Pedram, M. (1996). Low power design methodologies. Kluwer
Publishers, 2-11.
Rajput, R. P. and Swamy, M. N. S. (2012). High speed modified booth encoder
multiplier for signed and unsigned numbers. International Conference on Modelling
and Simulation, 649-654.
Rashidi, B. and Pourormazd, M. (2011). Design and implementation of low power
digital FIR filter based on low power multipliers and adders on Xilinx FPGA.
International Conference on Electronics Computer Technology, 18-22.
Sakurai, T. (2002). Low power and high speed VLSI design with low supply voltage
through cooperation between levels. International Symposium on Quality Electronic
Design. Proceedings, 18(21): 445-450.
Samueli, H. (1989). An improved search algorithm for the design of multiplierless FIR filters with powers-of-two coefficients. IEEE Transactions on Circuits and Systems, 36(7): 1044-1047.

Sankaraya, N. et al. (1996). Algorithms for low power realization of FIR filters using differential coefficients. IEEE 10th International Conference on VLSI Design, Proceedings: 370-375.
Smith, S.W. (1997). The scientist and engineer's guide to Digital Signal Processing.
San Diego: California Technical Publications.
Soni, V. et al. (2011). Application of exponential window to design a digital non-recursive FIR filter. International Conference on Communications and Signal Processing, 1015-1019.
Suckley, D. (1991). Genetic algorithm in the design of FIR filters. IEE Proceedings G (Circuits, Devices and Systems), 138: 234-238.
Tan, L. and Jiang, J. (2007). Digital Signal Processing: Fundamentals and
Applications, Amsterdam: Academic Press.
Tiwari, V. et al. (1994). Compilation techniques for low energy: an overview. IEEE Symposium on Low Power Electronics, San Diego, California, USA, Proceedings: 38-39.
Turner, R. H. and Woods, R .F.(2004). Highly efficient limited range multipliers for
LUT-based FPGA architectures. IEEE Transactions on Very Large Scale Integration
(VLSI) Systems, 12 (10):1113-1118.
Ultra-low-power DLMS adaptive filter for hearing aid applications. (2003). IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 11(6): 1058-1067.
Veendrick, H. J. M. (1984). Short-circuit dissipation of static CMOS circuitry and its impact on the design of buffer circuits. IEEE Journal of Solid-State Circuits, 19: 468-473.
Yeap, G K. (1998). Practical low power digital VLSI design. Kluwer Academic
Publisher, Norwell, Mass, 2-8.
Yunlong, W. et al. (2011). An extreme simple method for digital FIR filter design. In Proceedings of the Third International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), IEEE, 410-413.

Zwyssig, E. P. et al. (2001). Low power system on chip implementation scheme of digital filtering cores. IEEE Seminar on Low Power IC Design, Proceedings, 5: 1-9.
