
Implementation of heart beat monitoring using DWT

CONTENTS

Abstract

Chapter 1 INTRODUCTION
Introduction
Significant features of ECG waveform
Main features of this simulator
Principle: Fourier series
Calculations
How do we generate the periodic QRS portion of the ECG signal
How do we generate the periodic P-wave portion of the ECG signal

Chapter 2 ADAPTIVE NOISE CANCELLATION (ANC)
Introduction to ANC
Signal processing and adaptive filters
Filter implementation
Adaptive filters
Adaptive filters in the Filter Design Toolbox
Noise cancellation
Adaptive signal identification

Chapter 3 ALGORITHM FOR ADAPTIVE FILTER
Adaptive algorithm
LMS adaptive algorithm

Chapter 4 ADAPTIVE SYSTEM
General properties
System configuration and filter design
Design & simulation
System identification
Adaptive interference cancelling
Noise cancellation

Chapter 5 MATLAB
MATLAB introduction
Typical uses of MATLAB
Features of MATLAB
The MATLAB system
Development environment
The MATLAB mathematical function library
The MATLAB language
Graphics
MATLAB application program interface (API)
Starting MATLAB
MATLAB desktop
Implementations
Arithmetic operations
sum, transpose, and diag
Generating matrices
M-files
Graph components
Plotting tools
Editor/Debugger

Chapter 6 SIMULATION
Code
Results

APPENDIX – BIBLIOGRAPHY
CONCLUSION
Abstract:

Electrocardiography deals with the electrical activity of the heart. Monitored
by placing sensors at defined positions on the chest and limb extremities of the subject,
the electrocardiogram (ECG) is a record of the origin and propagation of the electric action
potential through cardiac muscle. It is considered a representative signal of cardiac
physiology, and analyzing these signals makes it possible to extract information about the
cardiovascular system.
Most of the methods used are linear, but it has been recognized that
nonlinear methods may be more suitable for analyzing signals that originate from
complex nonlinear living systems. Recent developments in nonlinear analysis have
provided various methods for the study of the complex cardiovascular system. It is now
generally recognized that many processes generated by biological systems can be
described effectively using the methods of nonlinear dynamics. The nonlinear
dynamical techniques are based on the concept of chaos, which was first introduced with
applications to complicated dynamical systems in meteorology. Since then, it has been
applied to medicine and biology.
The aim of the ECG simulator is to calculate the heart beat of ECG
waveforms. This monitoring system is developed in MATLAB using the discrete wavelet
transform (DWT) technique.
Chapter 1

Introduction:
The aim of the ECG simulator is to produce the typical ECG waveforms of
different leads and as many arrhythmias as possible. This ECG simulator is
MATLAB based and is able to produce a normal lead II ECG
waveform.

The use of a simulator has many advantages in the simulation of ECG
waveforms. The first is the saving of time, and another is avoiding the
difficulties of acquiring real ECG signals by invasive and noninvasive
methods. The ECG simulator enables us to analyze and study normal and
abnormal ECG waveforms without actually using an ECG machine. One
can simulate any given ECG waveform using the ECG simulator.

Significant features of ECG waveform:


A typical scalar electrocardiographic lead is shown in Fig. 1, where the
significant features of the waveform are the P, Q, R, S, and T waves, the
duration of each wave, and certain time intervals such as the P-R, S-T, and
Q-T intervals.
Fig 1. Typical ECG signal

Main features of this simulator:


• Any value of heart beat can be set
• Any value of the intervals between the peaks (e.g. the PR interval) can be set
• Any value of amplitude can be set for each of the peaks
• Fibrillation can be simulated
• Noise due to the electrodes can be simulated
• Heart pulse of the particular ECG wave form can be represented in a
separate graph

Principle:
Fourier series
Any periodic function which satisfies Dirichlet's conditions can be expressed
as a series of scaled sine and cosine terms at frequencies that are integer
multiples of the fundamental frequency.
f(x) = a0/2 + Σ (n=1 to ∞) an cos(nπx/l) + Σ (n=1 to ∞) bn sin(nπx/l)

a0 = (1/l) ∫T f(x) dx,  with T = 2l                        -- (1)

an = (1/l) ∫T f(x) cos(nπx/l) dx,  n = 1, 2, 3, ...        -- (2)

bn = (1/l) ∫T f(x) sin(nπx/l) dx,  n = 1, 2, 3, ...        -- (3)

(each integral is taken over one period T)

The ECG signal is periodic with fundamental frequency determined by the heart
beat. It also satisfies Dirichlet's conditions:
• It is single valued and finite in the given interval
• It is absolutely integrable
• It has a finite number of maxima and minima between finite intervals
• It has a finite number of discontinuities
Hence a Fourier series can be used to represent the ECG signal.

Calculations:
If we observe Figure 1, we may notice that a single period of an ECG signal is
a mixture of triangular and sinusoidal waveforms. Each significant feature
of the ECG signal can be represented by a shifted and scaled version of one of these
waveforms, as shown below.
• The QRS, Q and S portions of the ECG signal can be represented by triangular
waveforms
• The P, T and U portions can be represented by sinusoidal waveforms
Once we generate each of these portions, they can be added to obtain the
ECG signal.
Let us take the QRS waveform as the central one; all shifting takes place with
respect to this part of the signal.

How do we generate periodic QRS portion of ECG signal

Fig 2. Generating the QRS waveform


From equation (1), we have

f(x) = a - (abx/l),      0 < x < l/b
     = a + (abx/l),      -l/b < x < 0

a0 = (1/l) ∫T f(x) dx = (a/b)(2 - b)

an = (1/l) ∫T f(x) cos(nπx/l) dx = (2ab/(n²π²))(1 - cos(nπ/b))

bn = (1/l) ∫T f(x) sin(nπx/l) dx = 0 (because the waveform is an even function)

f(x) = a0/2 + Σ (n=1 to ∞) an cos(nπx/l)

How do we generate periodic p-wave portion of ECG signal

Fig 3. Generation of the P-wave


f(x) = a cos(πbx/(2l)),      -l/b < x < l/b

a0 = (1/l) ∫T a cos(πbx/(2l)) dx = (a/(2b))(2 - b)

an = (1/l) ∫T a cos(πbx/(2l)) cos(nπx/l) dx = (2ab/(n²π²))(1 - cos(nπ/b))

bn = (1/l) ∫T a cos(πbx/(2l)) sin(nπx/l) dx = 0 (because the waveform is an even function)

f(x) = a0/2 + Σ (n=1 to ∞) an cos(nπx/l)
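As a rough illustration of the method above, the following MATLAB sketch synthesizes the QRS portion from a truncated Fourier series of the triangular pulse and a P-like wave from the raised-cosine pulse, then adds the two. The amplitudes, the width parameters b, the harmonic count and the P-wave shift are illustrative assumptions, not values prescribed by this report.

% minimal sketch: Fourier-series synthesis of QRS and P portions (assumed parameters)
l = 0.5;                    % half period, so the heart period is T = 2*l = 1 s
x = -l : 0.001 : l;         % one period of the time axis
n = (1:100)';               % number of harmonics kept (assumption)

a_qrs = 1.0; b_qrs = 16;    % QRS amplitude and width parameter (assumed)
a0q = (a_qrs/b_qrs)*(2 - b_qrs);
anq = (2*b_qrs*a_qrs ./ (n.^2 * pi^2)) .* (1 - cos(n*pi/b_qrs));
qrs = a0q/2 + anq' * cos(n*pi*x/l);          % truncated Fourier sum

a_p = 0.2; b_p = 8;         % P-wave amplitude and width parameter (assumed)
a0p = (a_p/(2*b_p))*(2 - b_p);
anp = (2*b_p*a_p ./ (n.^2 * pi^2)) .* (1 - cos(n*pi/b_p));
pw  = a0p/2 + anp' * cos(n*pi*(x + 0.2)/l);  % shifted to the left of the QRS

plot(x, qrs + pw), title('QRS plus shifted P wave (sketch)')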
CHAPTER 2

ADAPTIVE NOISE CANCELLATION (ANC)

INTRODUCTION

In a general sense, adaptive filters are systems that vary through time
because the characteristics of their inputs may be varying. That is what separates them from
classical digital signal processing: the digital system itself changes through time. In a
sense, their convolution properties are evolving.

Therefore, adaptive filters must be non-linear because superposition does
not hold. But when their adjustments are held constant after adaptation, some of them
become linear, and belong under the well-named class of linear adaptive filters. We will be
working with those in this project.

Whenever there is a requirement to process signals in an environment of
unknown statistics, adaptive filters will more often than not do the job better than a fixed
filter. The best way to introduce adaptive filters is by example. Say you are talking on
your cell phone in your car, and the engine is producing unwanted noise that the cell
phone must filter out. When you change gears, the noise will be at a different
frequency, and you don't want to stop in the middle of your conversation and toy with the
electronics in your phone to adjust the band pass. The filter must do the job for you,
without your intervention. An adaptive filter can track that noise, follow its
characteristics, and knock it out so that you may have a good, clean conversation.

Signals are a very fundamental part of every telecommunication system,


and such systems abound with examples of signal processing techniques. In recent years
more and more attention has been given to the use of digital signal processing techniques
in telecommunication systems. For many applications information is now most
conveniently recorded, transmitted and stored in digital form. As a result, digital signal
processing is becoming an important modern tool. Typical reasons for signal processing
include: estimation of characteristic signal parameters, elimination or reduction of
unwanted interference and transformation of a signal into a form that is in some sense
more informative. Digital signal processing deals with the representation of signals as
ordered sequences of numbers and the processing of those sequences. Digital signals are
those for which both time and amplitude are discrete.

Before adopting digital signal processing we should understand why we
choose it over analog signal processing. The prime benefit of
digital signal processing over analog signal processing is flexibility. In general, a digital
processing system is more easily reconfigured as parameters of the problem change. In
fact, digital simulations are now frequently used to analyze designs in an attempt to
identify potentially costly design errors before the hardware is built. Applications for
digital signal processing currently exist in diverse fields such as acoustics, sonar, radar,
geophysics, communications and medicine.

A category of digital signal processing known as adaptive signal processing


is a relatively new area in which applications are increasing rapidly. Adaptive signal
processing evolved from techniques developed to enable the adaptive control of time-
varying systems. It has gained a lot of popularity due to the advances in digital
technology that have increased the computing capacities and broadened the scope of
digital signal processing. The key difference between classical signal processing
techniques and adaptive signal processing method is that in the latter we deal with time
varying digital systems. When adaptive filters are used to process non-stationary signals,
whose statistical properties vary in time, the required amount of a priori information is
often less than that required for processing via fixed digital filters. Other applications
besides noise cancellation include system identification, signal prediction, source
separation, channel equalization, and more.
SIGNAL PROCESSING AND ADAPTIVE FILTER

Digital Signal Processing (DSP) is used to transform and analyze


data and signals that are either inherently discrete or have been sampled from analogue
sources. With the availability of cheap but powerful general-purpose computers and
custom-designed DSP chips, digital signal processing has come to have a great impact on
many disciplines from electronic and mechanical engineering to economics and
meteorology. In the field of biomedical engineering, for example, digital filters are used
to remove unwanted 'noise' from electrocardiograms (ECG) while in the area of consumer
electronics DSP techniques have revolutionized the recording and playback of audio
material with the introduction of compact disk and digital audio tape technology. The
design of a conventional digital signal processor, or filter, requires a priori knowledge of
the statistics of the data to be processed. When this information is inadequate or when the
statistical characteristics of the input data are known to change with time, adaptive filters
[1, 22] are employed. Adaptive filters are employed in a great many areas of
telecommunications for such purposes as adaptive equalization, echo cancellation, speech
and image encoding, and noise and interference reduction. Adaptive filters have the
property of self-optimization. They consist, primarily, of a time varying filter,
characterized by a set of adjustable coefficients and a recursive algorithm which updates
these coefficients as further information concerning the statistics of the relevant signals is
acquired. A desired response d(n), related in some way to the input signal, is made
available to the adaptive filter. The characteristics of the adaptive filter are then modified
so that its output y^(n) resembles d(n) as closely as possible. The difference between the
desired and adaptive filter responses is termed the error and is defined as
e(n) = d(n) - y^(n).
Ideally, the adaptive process becomes one of driving the error e(n) towards zero. In
practice, however, this may not always be possible, and so an optimization criterion, such
as the mean square error, is employed instead.
Adaptive filters may be divided into recursive and non-recursive categories depending on
their inclusion of a feedback path. The response of non-recursive, or finite impulse-
response (FIR), filters is dependent upon only a finite number of previous values of the
input signal. Recursive, or infinite impulse-response (IIR), filters, however, have a
response which depends upon all previous input values, the output being calculated using
not only a finite number of previous input values directly, but also one or more previous
output values. Many real-world transfer functions require much more verbose
descriptions in FIR than in recursive form. The potentially greater computational
efficiency of recursive filters over their non-recursive counterparts is, however, tempered
by several shortcomings, the most important of which are that the filter is potentially
unstable and that there are no wholly satisfactory adaptation algorithms.
There are two main types of adaptive IIR filtering algorithms [17], which differ
in the formulation of the prediction error used to assess the appropriateness of the current
coefficient set during adaptation. In the equation-error approach the error is a linear
function of the coefficients. Consequently the mean square error is a quadratic function of
the coefficients and has a single global minimum and no local minima. This means that
simple gradient-based algorithms can be used for adaptation. However, in the presence of
noise (which is present in all real problems) equation-error based algorithms converge to
biased estimates of the filter coefficients [17]. The second approach, the output-error
formulation, adjusts the coefficients of the time-varying digital filter directly in recursive
form. The response of an output-error IIR filter is characterized by the recursive
difference equation

y^(n) = Σ bm x(n - m) + Σ am y^(n - m),

which depends not only upon delayed samples of the input x(n), but also
upon past output samples y^(n - m), m = 1, ..., N - 1. The output y^(n) is a nonlinear
function of the coefficients, since the delayed output signals themselves depend upon
previous coefficient values. Consequently, the mean square error is not a quadratic
function of the feedback coefficients (the a coefficients), though it is of the b coefficients,
and may have multiple local minima. Adaptive algorithms based on gradient-search
methods, such as the widely used LMS, may converge to sub-optimal estimates of the
filter coefficients if initialized within the basin of attraction of one of these local minima.
Evolutionary algorithms are a search method which can be resistant to being trapped in
local minima, so they provide a possible avenue for the successful adaptation of
output-error based recursive digital filters. The rest of this article describes the results of
exploring some avenues in this area.

FILTER IMPLEMENTATION

To carry out our project in real-time on a DSP board from Texas


Instruments, we set out with the goal of adaptively filtering a simple noise component out
of a voice signal. The goal of implementing our project onto a DSP board seemed at first
trivial; all we had to do was convert some MATLAB code into C and compile the code. It
turned out to be a non-trivial task.

When we began the project, nobody in the group had ever seen a DSP board
like the ones in the lab before, let alone used one. Our first hurdle was to
familiarize ourselves with the board's functionality.

Using the Elec434 web page as a basis for learning, we set out to master the
use of the DSP boards. Our first steps in understanding the boards involved
doing several of Dr. Choi's lab assignments for 434. Upon completing the third lab, we
understood how to implement a low-pass FIR filter. MATLAB is used to generate the
desired coefficients. These coefficients are then saved to a file in two's complement
representation in Q15 (16-bit fixed point) format.
When the board is run, the main function is used to initialize all variables,
and an infinite loop with no operations is run to keep the board running. However, the
board has several interrupts built in that will cause the board to perform various tasks.
One of these interrupts is used for inputting data from the CODEC. Our implementations
of filters involved using this interrupt (IRQ 11) to carry out our desired task.

Every time the interrupt is used, it means that there is an input sample ready
to be dealt with. In our code, we wrote our algorithms into this section of the board. For
most (non-adaptive) filters, this just meant shifting our array, taking a dot product, and
giving an output. Having carried out the first three labs, we were fully capable of creating
low-pass, high-pass, and band-pass filters in real-time.

Previous to this stage, we were working on the DSK boards in the lab, but
due to limitations on the board, we had to switch over to the EVM boards. The big
advantage of doing this was that the DSK boards only have one input on the CODECs,
while the EVM boards have two. We needed the second input if we were to have some
reference to our noise signal. Though it was only a small task, we had to make sure our
FIR filters worked on the EVM board. The EVM boards did offer one extra challenge
though - the gains on the two inputs are different, and therefore it was quite difficult
to match the decibel level between the two. This didn't affect our algorithm on the board
very much, but dealing with different sized gains on the inputs made it difficult not to
overflow the registers on the EVM. The overflow of registers made it difficult to tell
when our code had actually started working. In the end, we learned that extremely small
input signals are needed to keep the board from having overflow problems.

While testing our code, we attempted to use the oscilloscopes available in


the lab, but this proved to be another challenge unworthy of the benefits. The CODEC on
the DSP boards has a horrible resonance apparent around 5.5 MHz. While this does not
create a problem for hearing the output, one cannot view the output on an oscilloscope
without great difficulty. It proved to be a much better measure of our achievements to
simply listen to the output on a pair of the PC speakers in the lab.
The last task was to translate and troubleshoot the MATLAB code into C
friendly code. While most of the algorithm remained the same, a couple key points had to
be modified. First, since the algorithm now occurred in real-time with the use of
interrupts, the first "for loop" used in the MATLAB code was stricken out (since it is
inherent in the interrupt style of programming). Also, because of the number
representation on the board, the math had to be modified. Keeping the numbers in Q15
format was essential, and so the result of every operation on a number had to be modified
to accommodate the change. The most important case was multiplication: when two
numbers were multiplied, the result had to be right shifted 15 places to ensure the fixed
point representation was kept.
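To make that Q15 bookkeeping concrete, here is a hedged sketch in MATLAB (rather than the board's C, and with made-up values): multiplying two Q15 numbers yields a Q30 result, so a right shift by 15 restores the format.

% illustrative Q15 multiply: 0.5 * 0.25 in Q15 fixed point
a = int32(round(0.5 * 2^15));    % 0.5  -> 16384 in Q15
b = int32(round(0.25 * 2^15));   % 0.25 -> 8192 in Q15
p = bitshift(a * b, -15);        % Q30 product shifted back to Q15
double(p) / 2^15                 % prints 0.125, as expected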

Another key factor in the math was number representation. Since the
board uses a two's complement number to represent the signal sample, some of the
constants needed to be changed. Specifically, the value for mu now needed to be greater
than one (not less than), but still close to one. From our experiments, we found that a
value between 1 and 10 worked best, and we chose to use 2 for our demonstrations
because it kept the output voice signal the most clear. The tradeoff was that the
background noise now had a bit of a more vibrant pinging to it.

Our results proved to be satisfactory. Although there are no sound clips


available, the demonstration at the poster session clearly demonstrated the capability of
the board to handle simple reference noises. The board adaptively filtered sinusoids and
triangle waves very well. It worked okay on square waves, but not quite as nicely. On
white noise, however, the filter wasn't very good. We attributed this to the fact that our
two white noise signals (overlay and reference) were generated on two separate PC sound
cards, and, due to the randomness factor, were not very well correlated.
ADAPTIVE FILTERS

Adaptive Filters: Block Diagram

The block diagram below represents a general adaptive filter.


An adaptive FIR filter designs itself based on the characteristics of the input signal and a
signal that represents the desired behavior of the filter on the input. The filter does not
require any other frequency response specifications. The filter uses an adaptive algorithm
to reduce the error between the output signal y(n) and the desired signal d(n). When
performance criteria for e(n) have achieved their minimum values through the iterations of
the adapting algorithm, the adaptive filter is finished and its coefficients have converged
to a solution.
The output from the adaptive filter should approximate the desired signal
d(n). If the characteristics of the input (the filter environment) are changed, the filter
adapts by generating a new set of coefficients. Adaptive filter functions in the Filter
Design Toolbox implement the largest block in the figure, using an appropriate technique
for the adaptive algorithm. Inputs are x(n) and initial values for a and b.
Choosing an Adaptive Filter
Two main considerations are involved in choosing an adaptive filter: the job to do and
the filter algorithm to use. The suitability of an adaptive filter design is determined by
• Filter Consistency
Does filter performance degrade when the coefficients change slightly as a result of
quantization or the use of fixed-point arithmetic? Will excessive noise in the signal hurt
the performance of the filter?
• Filter Performance
Does the filter provide sufficient identification accuracy or fidelity, or does it provide
sufficient signal discrimination or noise cancellation?
• Tools
Do tools exist to make the filter development process easier? Better tools can make it
practical to use more complex adaptive algorithms.
• DSP Requirements
Can the filter perform its job within the constraints of the application? Does the processor
have sufficient memory, throughput, and time to use the proposed adaptive filtering
approach? Can you use more memory to reduce the throughput, or use a faster
signal processor?
Simulating filters using the functions in the Filter Design Toolbox is a good way to
investigate these issues.
Beginning with a least mean squares (LMS) filter, which is relatively
easy to implement, might provide sufficient information for your evaluation. This can
also form a basis from which you can study and compare more complex adaptive filters.
At some point, you must test your design with real data and evaluate its performance.
Adaptive Filters in the Filter Design Toolbox
Adaptive filters are implemented in the Filter Design Toolbox as adaptive filter
(adaptfilt) objects. This object-oriented approach to adaptive filtering involves two steps:
1. Use a filter constructor method to create the filter.
2. Use the filter method associated with adaptive filter objects to apply the filter to data.
The filter method for adaptive filter objects takes precedence over the filter function
in MATLAB when input arguments involve adaptive filter objects.
There are over two dozen adaptive filter constructor methods in the Filter Design
Toolbox. They fall into the following five categories:
• Least mean squares (LMS) based FIR adaptive filters
• Recursive least squares (RLS) based FIR adaptive filters
• Affine projection (AP) FIR adaptive filters
• FIR adaptive filters in the frequency domain (FD)
• Lattice based (L) FIR adaptive filters
For example, ha = adaptfilt.lms(length,step,leakage,coeffs,states);
constructs an FIR LMS adaptive filter object ha. The input arguments for each constructor
method are described in the documentation.
To apply the filter to a signal x with desired signal d, use y = filter(ha,x,d)
Note that in the context of adaptive filtering, Kalman filters are equivalent to RLS
filters. Other non adaptive Kalman filters are available in the Control System Toolbox.
Try:
edit afodemo
afodemo
System Identification
One common application of adaptive filters is to identify an unknown “black box”
system. Applications are as diverse as finding the response of an unknown
communications channel and finding the frequency response of an auditorium to design
for echo cancellation. In a system identification application, the unknown system is
placed in parallel with the adaptive filter. In this case the same input x(n) feeds both the
adaptive filter and the unknown system. When e(n) is small, the adaptive filter response is
close to the response of the unknown system.
Try:
edit filterid
filterid(1e2,0.5)
filterid (5e2,0.5)
filterid(1e3,0.5)
filterid(5e2,0.1)
filterid(5e2,0.5)
filterid(5e2,1)
Sketch the block diagram of an adaptive filter system to be used for system identification.
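For readers without the course's filterid file, a minimal sketch of the same experiment using the adaptfilt object introduced above might look as follows; the "unknown" system here is an arbitrary FIR filter chosen purely for illustration.

% system identification sketch (assumes the Filter Design Toolbox)
b  = fir1(31, 0.5);             % the "unknown" system (an assumption)
x  = randn(2000, 1);            % common input to both systems
d  = filter(b, 1, x);           % desired signal = unknown system output
ha = adaptfilt.lms(32, 0.01);   % 32-tap LMS adaptive filter
[y, e] = filter(ha, x, d);      % adapt; e(n) shrinks as ha approaches b
stem([b.' ha.Coefficients.'])   % compare true and identified coefficients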
Inverse System Identification
By placing the unknown system in series with an adaptive
filter, the filter becomes the inverse of the unknown system when e(n) becomes small.
The process requires a delay inserted in the desired signal d(n) path to keep the data at the
summation synchronized and the system causal. Without the delay element, the adapting
algorithm tries to match the output y(n) from the adaptive filter to an input x(n) that has
not yet reached the adaptive elements because it has to pass through the unknown system.
Including a delay equal to the delay caused by the unknown system prevents this
condition. Plain old telephone systems (POTS) commonly use inverse system
identification to compensate for the copper transmission medium. When you send data or
voice over telephone lines, the losses due to capacitances distributed along the wires
behave like a filter that rolls off at higher frequencies (or data rates). Adding an adaptive
filter with a response that is the inverse of the wire response, and adapting in real time,
compensates for the rolloff and other anomalies, increasing the available frequency range
and data rate for the telephone system. Sketch the block diagram of an adaptive filter
system to be used for inverse system identification.
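Continuing the hedged toolbox sketches, inverse identification only adds a delayed copy of the input as the desired signal; the channel, delay and step size below are illustrative assumptions.

% inverse system identification sketch (assumes the Filter Design Toolbox)
b   = fir1(12, 0.4);                 % the "unknown" channel (an assumption)
x   = randn(5000, 1);                % transmitted signal
r   = filter(b, 1, x);               % signal after the unknown channel
del = 16;                            % delay keeps the overall system causal
d   = [zeros(del,1); x(1:end-del)];  % delayed input is the desired signal
ha  = adaptfilt.lms(32, 0.005);
[y, e] = filter(ha, r, d);           % ha adapts toward the channel inverse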
Noise Cancellation
In noise cancellation, adaptive filters remove noise from a signal in real time. To do this,
a signal z(n) correlated with the noise is fed to the adaptive filter.
As long as the input to the filter is correlated to the unwanted noise accompanying the
desired signal, the adaptive filter adjusts its coefficients to reduce the value of the
difference between y(n) and d(n), removing the noise and resulting in a clean signal in
e(n). Notice that the error signal converges to the input data signal in this case, rather than
converging to zero.
Try:
edit babybeat
babybeat
Sketch the block diagram of an adaptive filter system to be used for noise cancellation.
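A corresponding noise-cancellation sketch (with signals invented for illustration) shows why the error output e(n), not y(n), is the cleaned signal.

% noise cancellation sketch: e(n) converges to the desired signal
n  = (0:3999)';
s  = sin(2*pi*0.01*n);               % desired signal (an assumption)
v  = 0.8*randn(4000, 1);             % noise source
d  = s + filter([0.6 0.3], 1, v);    % primary input: signal plus filtered noise
z  = v;                              % reference correlated with the noise
ha = adaptfilt.lms(16, 0.01);
[y, e] = filter(ha, z, d);           % y tracks the noise; e is the clean signal
plot(n, [d e]), legend('primary d(n)', 'output e(n)')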
Prediction
Predicting signals might seem to be an impossible task. Some limiting assumptions must
be made: the signal is assumed to be periodic and either steady or slowly varying
over time.
Accepting these assumptions, an adaptive filter attempts to predict future values of the
desired signal based on past values. The signal s(n) is delayed to create x(n), the input to
the adaptive filter. s(n) also drives the d(n) input. When s(n) is periodic and the filter is
long enough to remember previous values, this structure, with the delay in the input
signal, can perform the prediction.
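A hedged sketch of that predictor wiring, with the delay length and test signal invented for illustration:

% prediction sketch: the delayed signal feeds the filter, s(n) is the target
n   = (0:1999)';
s   = sin(2*pi*0.02*n) + 0.1*randn(2000, 1);  % periodic signal plus noise (assumed)
del = 5;                                      % prediction delay
x   = [zeros(del,1); s(1:end-del)];           % delayed version is the filter input
ha  = adaptfilt.lms(32, 0.005);
[y, e] = filter(ha, x, s);                    % y predicts s from its past values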
You might use this structure to remove a periodic component from stochastic noise
signals. Sketch the block diagram of an adaptive filter system to be used for prediction.

Multirate Filters
Multirate filters alter the sample rate of the input signal
during the filtering process. They are useful in both rate conversion and filter bank
applications. Like adaptive filters, multirate filters are implemented in the Filter Design
Toolbox as multirate filter (mfilt) objects. Again, this object-oriented approach to
filtering involves two steps:
1. Use a filter constructor method to create the filter.
2. Use the filter method associated with mfilt objects to apply the filter to data. The
filter method for mfilt objects (and all filter objects) takes precedence over the filter
function in MATLAB when input arguments involve mfilt objects.
There are approximately a dozen multirate filter constructor methods in the Filter Design
Toolbox.
For example,
hm = mfilt.firdecim(3)
creates an FIR decimator with a decimation factor of 3. The only input argument needed
is the decimation factor. Input arguments for each constructor method are described
in the documentation.
To apply the filter to a signal x, use
y = filter(hm,x)
Multirate filters can also be designed interactively in the FDATool. Click the sidebar
button Create a Multirate Filter.
Try:
edit multidemo
multidemo
Interactively design the filter in the demo in the FDATool.
(The material in this lab handout was put together by Paul Beliveau and derives
principally from the MathWorks training document "MATLAB for Signal Processing",
2006.)

Adaptive filters are self-learning systems. As an input signal
enters the filter, the coefficients adjust themselves to achieve a desired result. The result
might be, for example, the identification of an unknown system or the cancellation of
noise in a signal.
Adaptive Signal Identification


An adaptive filter consists of two distinct components:
A digital filter with adjustable coefficients
An adaptive algorithm that modifies the coefficients of the filter as the input changes.
Main objective: to produce an optimum estimate of a desired signal.
6.1.2 Algorithm
In Figure 3 two input signals, yk and xk, are applied to the adaptive filter simultaneously.

Here yk is the contaminated signal containing both the noise nk and the desired signal sk,
while the reference input xk is a measure of the noise.

The digital filter of the system is used to process xk, producing n^k, an estimate of the
noise. An FIR digital filter is a single-input single-output system. The FIR filter is used
here instead of an IIR one because of its simplicity and stability.
Figure 4: FIR Filter Structure


Here the tap weights are the coefficients to be adjusted. An estimate of the desired
signal is then obtained by subtracting the digital filter output n^k from the contaminated
signal yk.

The main objective in noise cancellation is to produce an optimum estimate of the noise
in the contaminated signal and hence an optimum estimate of the desired signal. The
error (the signal estimate) is fed back to the adaptive algorithm, which performs two tasks:
1. Desired signal estimation
2. Adjustment of the filter coefficients
6.1.3 Adaptive Algorithm
The adaptive filter algorithm is used to adjust the digital filter coefficients to minimize
the error signal according to some criterion, e.g., in the least squares sense. With
ek = sk + (nk - n^k), taking the square and the mean of the error signal gives

E[ek²] = E[sk²] + E[(nk - n^k)²] + 2E[sk(nk - n^k)]      -- (3)

The last term in eq. (3) becomes zero because the desired signal is uncorrelated with the
noise and the noise estimate.

After the cross term vanishes, the first term is the signal power and the second is the
remnant noise power. If n^k is an exact replica of nk, the output power contains only the
signal power; i.e., by adjusting the adaptive filter towards the optimum position, the
remnant noise power and hence the total output power are minimized. The desired signal
power remains unaffected by this adjustment since sk is uncorrelated with nk. Thus:

min E[ek²] = E[sk²] + min E[(nk - n^k)²]

This shows that minimizing the total power at the output of the canceller maximizes the
signal-to-noise ratio of the output. With an exact estimate of the noise the last term
becomes zero and the estimate of the desired signal becomes equal to the desired signal,
i.e., the output of the canceller becomes noise free. At this stage the adaptive filter
(ideally) turns off by setting its weights to zero. A number of adaptive algorithms are in
use, such as:
LMS
RLS
Kalman
We used LMS here in this system because of the following advantages:
It is more efficient because of easy computation and better storage requirements
It is numerically stable
6.1.3.1 LMS Adaptive Algorithm

Many adaptive algorithms can be viewed as an approximation of the discrete Wiener
filter. This filter produces an optimum estimate of the part of the contaminated signal that
is correlated with the input signal, which is then subtracted from the contaminated signal
to yield the error signal.
Therefore, from eq. (2):

ek = yk - wk'xk

where wk is the tap-weight vector and xk the reference input vector.

Instead of computing the noise weights in one go as above, the LMS coefficients are
adjusted from sample to sample in such a way as to minimize the MSE (mean square
error).
The LMS adaptive algorithm is based on the steepest descent algorithm, which updates
the weight vector from sample to sample:

wk+1 = wk - μ ∇k      -- (8)

The gradient vector ∇k, the cross-correlation P between the primary and the reference
inputs, and the autocorrelation R of the reference input are related as

∇k = -2P + 2R wk      -- (7)

For instantaneous estimates of the gradient vector, we can write

∇^k = -2 ek xk      -- (10)

Figure 5: LMS filter implementation

From eqs. (7) and (10), eq. (8) can be rewritten as

wk+1 = wk + 2μ ek xk      -- (11)

This is the Widrow-Hoff LMS algorithm.
The LMS algorithm of eq. (11) does not require prior knowledge of the signal statistics
(R and P); it uses instantaneous estimates of them instead. These estimates improve
gradually with time as the weights of the filter are adjusted by learning the signal
characteristics. In practice, however, wk never reaches the theoretical optimum of Wiener
theory, but fluctuates about it.
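The update in eq. (11) is only a few lines of code. Below is a minimal MATLAB sketch of it, written without the toolbox objects used earlier; the signals and step size are invented for illustration.

% raw LMS loop implementing w(k+1) = w(k) + 2*mu*e(k)*x(k)
N  = 4000; L = 16; mu = 0.005;
s  = sin(2*pi*0.01*(0:N-1)');        % desired signal (an assumption)
x  = randn(N, 1);                    % reference noise
d  = s + filter([0.5 0.25], 1, x);   % primary input: signal plus filtered noise
w  = zeros(L, 1); e = zeros(N, 1);
for k = L:N
    xk   = x(k:-1:k-L+1);            % most recent L reference samples
    e(k) = d(k) - w.'*xk;            % error = primary minus noise estimate
    w    = w + 2*mu*e(k)*xk;         % Widrow-Hoff update, eq. (11)
end
% e now approximates the clean signal s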

Chapter 4
ADAPTIVE SYSTEM

An adaptive system is one that is designed primarily for the purpose


of adaptive control and adaptive signal processing. Such a system usually has some or all
of the following characteristics:

Firstly they can automatically adapt in the face of changing


environments and changing system requirements.
Secondly they can be trained to perform specific filtering and
decision making tasks, i.e., they can be programmed by a training process. Because of
this adaptive systems do not require the elaborate synthesis procedures usually needed for
non-adaptive systems. Instead, they tend to be self designing.
Thirdly, they can extrapolate a model of behaviour to deal with new
situations after having been trained on a finite and often small number of training signals
or patterns.
Fourthly, to a limited extent they can repair themselves, i.e., adapt
around certain kinds of internal defects.
Fifthly, they are more complex and difficult to analyze than non-
adaptive systems, but they offer the possibility of substantially increased system
performance when input signal characteristics are unknown or time varying

General Properties

It is always better to know the properties before we can go and use a


system. This helps us to realize whether we can use it or not for a given application. The
essential and principal property of an adaptive system is its time-varying, self-adjusting
nature. This can be better understood by considering the following. Suppose a designer
has developed a system of fixed design that must meet a specific criterion, chosen as the
best according to the selected performance measure from an a priori restricted class of
designs, such as linear systems. In many instances, however, the complete range of
input conditions may not be known exactly, or the conditions may change from time to
time. In such circumstances, an adaptive system that continually seeks the optimum
within an allowed class of possibilities would give superior performance compared with
a system of fixed design.

From the above, a second property of the adaptive system is that it must be non-linear.
By this we mean that the principle of superposition does not hold. Certain forms of
adaptive systems become linear systems when their adjustments are held constant after
adaptation. They may be called linear adaptive systems. They are useful in one way
because they are easier to design than other forms of adaptive systems.

System Configuration and filter design

The adaptive filter can be used in different situations. It is worthwhile, when
certain conditions of the environment in which we are working are known, to make
use of these conditions in designing the adaptive filter. The main idea of an adaptive filter
is that it more or less tries to give us the original information by reducing noise.
DESIGN & SIMULATION

General Form of Adaptive Filters

Pictured here is the general structure of an adaptive filter. The main points to be noted
here are

1. The system takes in two inputs


2. The top box has one input
3. The bottom box has that same input plus an error input
4. The top box is being changed by the bottom box
5. What's the output?
If the bottom box is the brain, then the top box is the body. The brain is using what it
knows and putting it through an algorithm with which to control the body. The algorithm
is always of the form

coefficients(n+1) = coefficients(n) + update(e(n), x(n))

General Form of Algorithm for Updating Coefficients

The update functions are determined by you, the programmer. Examples commonly used
are LMS (Least Mean Squared), NLMS (Normed LMS), RMS (Root Mean Squared), and
ABC (just kidding). We will use LMS for this project.

The user will want an output. Depending on what the system is for, the output will be
either y(n), e(n), or the filter itself.

SYSTEM IDENTIFICATION

Adaptive Filter for FIR Filter Identification


The bottom two boxes are our adaptive system. It is figuring out what the top
box is. The top box is a Finite Impulse Response (FIR) filter programmed in MATLAB
with the magic command "filter(B,A,signal)." The filter is a "Direct Form II
Transposed" implementation of the standard difference equation:

a(1)*y(n) = [b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)] - [a(2)*y(n-1) + ... +
a(na+1)*y(n-na)]

We simulated this filter with the command "filter(rand(3,1), 1, signal)". This sets a(1) in
the difference equation to one. The first three B coefficients are given random numbers.
That's what our adaptive filter is going to figure out. The rest of the coefficients are set to
zero.
Now to put our boxes into action. As seen in the block diagram, a
random signal is fed into our system and the unknown filter. The signal is filtered
through the unknown filter, and through our initial guess of the unknown. The difference
of the outputs (plus some noise) e(n) is taken and put through our magical coefficient
updating algorithm to get our programmable digital filter closer to the unknown. As more
signal is fed through, our digital filter will start to mimic the unknown. It's learning! (see
the code archive for our MATLAB code)
Fig A

Fig A) This is a plot of the coefficients of our adaptive filter vs. time. You can see the
values converge to the unknown values. So now we know b(1)=1.65, b(2)=0.72,
b(3)=0.38. We have our system identified.

Fig B
Fig B) The first example had mu, the step size of our error correction, equal to 0.1. If we
make it a little larger, say 0.3, we get faster convergence as seen here. But if we make mu
too large, say 0.5, the system diverges.

Fig C

Fig C) If we have the noise turned up, the adaptive filter will oscillate more, but we can
still make out that it converges to an estimate of the B coefficients.
Fig D

D) We can also figure out any n-order FIR filter. Here's a 10th order filter.
ADAPTIVE INTERFERENCE CANCELLING

I have selected my problem for the project as follows. In a car we have the
audio system. There will be some noise coming from the engine when we try to play a
song. We can use an adaptive filter in order to reduce the noise. The system configuration
is shown in the figure 1.

In the figure, s(k) is the speech signal, n(k) is the noise signal which
comes from the engine vibrations, n'(k) is the reference signal that is used to
reduce the noise, Sys(k) is the primary input to the filter, i.e. the corrupt signal, H(z) is
the transfer function of the adaptive filter, Y(k) is the output of the adaptive filter, and
c(k) is the output signal, i.e. the difference between the corrupt signal and the output of
the adaptive filter. The reference input n'(k) is in some way correlated with the noise and
is uncorrelated with the original signal. If these conditions are violated, the adaptive filter
may cancel the signal s(k) in place of (or in addition to) the noise, or could fail to cancel
the noise. It can be shown that n(k) and n'(k) are correlated but n'(k) and s(k) are
uncorrelated, from the derivation given in appendix [1].
NOISE CANCELLATION

One of the most common practical applications of adaptive filters is noise
cancellation. We simulated the situation in which the adaptive filter needed to remove a
time-varying noise from a desired speech signal. This could be a model for a person
speaking on a cell phone in a car where his voice is corrupted by noise from the car's
engine.

Block Diagram for Noise Cancellation

We begin with two signals, the primary signal and the reference signal. The primary
signal contains both our desired voice signal and the noise signal from the car engine.
Primary: Noise + Voice

The reference signal is a tapped version of the noise in the primary signal, i.e.
it must be correlated to the noise that we are trying to eliminate. In the case that we are
trying to model, the primary signal may come from a microphone at the speaker's mouth
which picks up both the speech signal and a noise signal from the car engine. The
reference signal may come from another microphone that is placed away from the
speaker and closer to the car engine, so the reference noise will be similar to the noise in
primary but perhaps with a different phase and with some additional white noise added to
it.
Reference: Estimation of Noise

The LMS algorithm updates the filter coefficients to minimize the error between
the primary signal and the filtered noise. In the process of poring over some class
notes from ELEC 431, we found that it can be proven through some hard math that the
voice component of the primary signal is orthogonal to the reference noise. Thus the
minimum this error can reach is just our desired voice signal.
Output: Voice

We then experimented with varying different parameters. It turns out that the output we
get is very, very sensitive to mu. Apparently there is a precise method for finding the
optimal mu, involving the eigenvalues of the autocorrelation matrix of the reference
signal, but we used an educated trial and error technique.
Basically we found that mu affects how fast a response we were able to get; a larger mu
gives a faster response, but with a mu that is too large, the result will blow up.
Result with Big mu
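For reference, the usual rule of thumb from standard LMS theory (an assumption on our part, not a result of these experiments) bounds mu using the largest eigenvalue of the reference autocorrelation matrix, which is easy to estimate in MATLAB:

% hedged sketch: estimate a stability bound for the LMS step size
L = 16;                         % filter length (assumption)
u = randn(10000, 1);            % stand-in for the reference noise
r = xcorr(u, L-1, 'biased');    % autocorrelation estimates, lags -(L-1)..L-1
R = toeplitz(r(L:2*L-1));       % L-by-L autocorrelation matrix
mu_max = 2/max(eig(R))          % step sizes well below this tend to be stable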

We also experimented with several different filter lengths.

One question that is often asked is why we cannot simply subtract the reference noise
from the primary signal to obtain our desired voice signal. This method would work well
if the reference noise was exactly the same as the actual noise in primary.
Reference Noise = Actual Noise

However, if the reference noise is exactly 180 degrees out of phase with the
noise in primary, simple subtraction doubles the noise in the output. As can be seen in the
figures below, the output from the adaptive filter is still able to successfully cancel out
the noise.
Reference Noise 180 deg Out-of-Phase from Actual Noise

Filtered Output - It still works


The one important condition on the use of adaptive filters for noise
cancellation is that the noise can't be similar to the desired voice signal. If it is,
the error that the filter is trying to minimize has the potential to go to zero, i.e. the filter
also wipes out the voice signal. The figure below shows the output of the adaptive filter
when the reference noise used was a sinusoid of the same frequency as that of the voice
signal plus some white noise.

Filtered Output - No Good When Voice = Noise


Chapter 5
INTRODUCTION TO MATLAB

Matlab Introduction

MATLAB is a high-performance language for technical computing. It integrates
computation, visualization and programming in an easy-to-use environment.

MATLAB stands for matrix laboratory. It was written originally to provide easy access to
matrix software developed by the LINPACK (linear system package) and EISPACK
(eigensystem package) projects.

MATLAB is therefore built on a foundation of sophisticated matrix software in which the
basic element is a matrix that does not require pre-dimensioning.

Typical uses of MATLAB

1. Math and computation

2. Algorithm development

3. Data acquisition

4. Data analysis, exploration and visualization

5. Scientific and engineering graphics

The main features of MATLAB

1. Advanced algorithms for high-performance numerical computation, especially in the
field of matrix algebra

2. A large collection of predefined mathematical functions and the ability to define
one's own functions

3. Two- and three-dimensional graphics for plotting and displaying data

4. A complete online help system

5. A powerful, matrix- or vector-oriented, high-level programming language for
individual applications

6. Toolboxes available for solving advanced problems in several application areas

[Figure: Features and capabilities of MATLAB - the MATLAB programming language
with user-written and built-in functions; computation (linear algebra, signal processing,
quadrature, etc.); graphics (2-D and 3-D graphics, color and lighting, animation); external
interfaces to C and FORTRAN programs; and toolboxes (signal processing, image
processing, control systems, neural networks, communications, robust control, statistics).]


The MATLAB System

The MATLAB system consists of five main parts:

Development Environment.

This is the set of tools and facilities that help you use MATLAB functions and files.

Many of these tools are graphical user interfaces. It includes the MATLAB desktop and

Command Window, a command history, an editor and debugger, and browsers for

viewing help, the workspace, files, and the search path.

The MATLAB Mathematical Function Library.

This is a vast collection of computational algorithms ranging from elementary functions,

like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like

matrix inverse, matrix Eigen values, Bessel functions, and fast Fourier transforms.

The MATLAB Language.

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both

"programming in the small" to rapidly create quick and dirty throw-away programs, and

"programming in the large" to create large and complex application programs.

Graphics.

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well

as annotating and printing these graphs. It includes high-level functions for two-
dimensional and three-dimensional data visualization, image processing, animation, and

presentation graphics. It also includes low-level functions that allow you to fully

customize the appearance of graphics as well as to build complete graphical user

interfaces on your MATLAB applications.

The MATLAB Application Program Interface (API).

This is a library that allows you to write C and Fortran programs that interact with

MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking),

calling MATLAB as a computational engine, and for reading and writing MAT-files.

Starting MATLAB

On Windows platforms, start MATLAB by double-clicking the MATLAB shortcut icon

on your Windows desktop. On UNIX platforms, start MATLAB by typing matlab at the

operating system prompt. You can customize MATLAB startup. For example, you can

change the directory in which MATLAB starts or automatically execute MATLAB

statements in a script file named startup.m

MATLAB Desktop

When you start MATLAB, the MATLAB desktop appears, containing tools (graphical

user interfaces) for managing files, variables, and applications associated with MATLAB.

The following illustration shows the default desktop. You can customize the arrangement
of tools and documents to suit your needs. For more information, see the documentation
on the desktop tools.
Implementations

1. Arithmetic operations

Entering Matrices

The best way for you to get started with MATLAB is to learn how to handle

matrices. Start MATLAB and follow along with each example.

You can enter matrices into MATLAB in several different ways:

• Enter an explicit list of elements.

• Load matrices from external data files.

• Generate matrices using built-in functions.


• Create matrices with your own functions in M-files.

Start by entering Dürer’s matrix as a list of its elements. You only have to

follow a few basic conventions:

• Separate the elements of a row with blanks or commas.

• Use a semicolon, to indicate the end of each row.

• Surround the entire list of elements with square brackets, [ ].

To enter matrix, simply type in the Command Window

A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]

MATLAB displays the matrix you just entered:

A=

16 3 2 13

5 10 11 8

9 6 7 12

4 15 14 1

This matrix matches the numbers in the engraving. Once you have entered

the matrix, it is automatically remembered in the MATLAB workspace. You

can refer to it simply as A. Now that you have A in the workspace,

sum, transpose, and diag

You are probably already aware that the special properties of a magic square

have to do with the various ways of summing its elements. If you take the

sum along any row or column, or along either of the two main diagonals,

you will always get the same number. Let us verify that using MATLAB.

The first statement to try is

sum(A)
MATLAB replies with

ans =

34 34 34 34

When you do not specify an output variable, MATLAB uses the variable ans,

short for answer, to store the results of a calculation. You have computed a

row vector containing the sums of the columns of A. Sure enough, each of the

columns has the same sum, the magic sum, 34.

How about the row sums? MATLAB has a preference for working with the

columns of a matrix, so one way to get the row sums is to transpose the

matrix, compute the column sums of the transpose, and then transpose the

result. For an additional way that avoids the double transpose use the

dimension argument for the sum function.
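For example, using the dimension argument on the matrix A above,

sum(A,2)

produces a column vector of the row sums

ans =

34

34

34

34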

MATLAB has two transpose operators. The apostrophe operator (e.g., A')

performs a complex conjugate transposition. It flips a matrix about its main

diagonal, and also changes the sign of the imaginary component of any

complex elements of the matrix. The dot-apostrophe operator (e.g., A.'),

transposes without affecting the sign of complex elements. For matrices

containing all real elements, the two operators return the same result.

So

A'

produces

ans =

16 5 9 4

3 10 6 15
2 11 7 14

13 8 12 1

and

sum(A')'

produces a column vector containing the row sums

ans =

34

34

34

34

The sum of the elements on the main diagonal is obtained with the sum and

the diag functions:

diag(A)

produces

ans =

16

10

7

1

and

sum(diag(A))

produces

ans =

34
The other diagonal, the so-called antidiagonal, is not so important
mathematically, so MATLAB does not have a ready-made function for it.

But a function originally intended for use in graphics, fliplr, flips a matrix
from left to right:

sum(diag(fliplr(A)))

ans =

34

You have verified that the matrix in Dürer's engraving is indeed a magic
square and, in the process, have sampled a few MATLAB matrix operations.

Operators

Expressions use familiar arithmetic operators and precedence rules.

+ Addition

- Subtraction

* Multiplication

/ Division

\ Left division (described in “Matrices and Linear Algebra” in the

MATLAB documentation)

^ Power

' Complex conjugate transpose

( ) Specify evaluation order

Generating Matrices

MATLAB provides four functions that generate basic matrices.

zeros All zeros

ones All ones


rand Uniformly distributed random elements

randn Normally distributed random elements

Here are some examples:

Z = zeros(2,4)

Z =

0 0 0 0

0 0 0 0

F = 5*ones(3,3)

F =

5 5 5

5 5 5

5 5 5

N = fix(10*rand(1,10))

N =

9 2 6 4 8 7 4 0 8 4

R = randn(4,4)

R=

0.6353 0.0860 -0.3210 -1.2316

-0.6014 -2.0046 1.2366 1.0556

0.5512 -0.4931 -0.6313 -0.1132

-1.0998 0.4620 -2.3252 0.3792

M-Files

You can create your own matrices using M-files, which are text files containing
MATLAB code. Use the MATLAB Editor or another text editor to create a file
containing the same statements you would type at the MATLAB command
line. Save the file under a name that ends in .m.

For example, create a file containing these five lines:

A = [...

16.0 3.0 2.0 13.0

5.0 10.0 11.0 8.0

9.0 6.0 7.0 12.0

4.0 15.0 14.0 1.0 ];

Store the file under the name magik.m. Then the statement

magik

reads the file and creates a variable, A, containing our example matrix.

Graph Components

MATLAB displays graphs in a special window known as a figure. To create

a graph, you need to define a coordinate system. Therefore every graph is

placed within axes, which are contained by the figure.

The actual visual representation of the data is achieved with graphics objects

like lines and surfaces. These objects are drawn within the coordinate system

defined by the axes, which MATLAB automatically creates specifically to

accommodate the range of the data. The actual data is stored as properties of

the graphics objects.


Plotting Tools

Plotting tools are attached to figures and create an environment for creating
graphs. These tools enable you to do the following:

• Select from a wide variety of graph types

• Change the type of graph that represents a variable

• See and set the properties of graphics objects

• Annotate graphs with text, arrows, etc.

• Create and arrange subplots in the figure

• Drag and drop data into graphs

Display the plotting tools from the View menu or by clicking the plotting tools

icon in the figure toolbar, as shown in the following picture.


Editor/Debugger

Use the Editor/Debugger to create and debug M-files, which are programs you

write to run MATLAB functions. The Editor/Debugger provides a graphical

user interface for text editing, as well as for M-file debugging. To create or

edit an M-file use File > New or File > Open, or use the edit function.
Chapter 6 SIMULATION

Program
************************************************************************
% removal of 50 Hz noise (power line interference) from ECG signal using
% adaptive filtering
% creation of 50 Hz noise signal

clear all
Fs = 1000;
N = 6000;
l = [0:999]';
i = [0 : N-1]';
p = ecg(500).';                % one 500-sample synthetic ECG beat
p1 = repmat(p, 12, 1);         % 12 beats = 6000 samples, matching N
figure
plot(p1)
title('original ecg signal')
figure
plot(1.2*sin(2*pi*50*l/Fs))    % one second of the 50 Hz interference
title('noise signal')
% create the noisy signal
k = 1.2*sin(2*pi*50*i/Fs);
c = p1 + k;
figure
plot(c)
title('signal after adding noise')
h = adaptfilt.lms(15, 0.02);   % 15-tap LMS filter, step size 0.02
y = filter(h, c, p1);
fvtool(h)
figure
plot(y)
title('signal after removal of noise')

Simulation results:
% removal of 50 Hz noise (power line interference) from ECG signal using
% iirnotch filter

clear all
Fs = 1000;
N = 6000;
i = [0 : N-1]';
p = ecg(500).';
p1 = repmat(p, 12, 1);          % 12 beats = 6000 samples, matching N
figure
plot(p1)
title('original ecg signal')
figure
k = 1.2*sin(2*pi*50*i/Fs);
plot(k)
title('noise signal')
c = p1 + k;
figure
plot(c)
title('ecg signal after addition of noise')
wo = 50/(1000/2); bw = wo/35;   % notch at 50 Hz, normalized frequency
[b,a] = iirnotch(wo, bw);
y = filter(b, a, c);

fvtool(b, a)
figure
plot(y)
title('signal after filtering');
Simulation results:
% removal of 50 Hz noise (power line interference) from ECG signal using
% adaptive filtering algorithm

clear all

Fs = 1000;
N = 1000;
i = [0 : N-1]';
p = ecg(500).';
p1 = [p; p];
figure
plot(p1)
title('original signal')
figure
plot(sin(2*pi*50*i/Fs))
title('noise signal')
% create the initial signal
x = p1 + sin(2*pi*50*i/Fs);
figure
plot(x)
% create the reference signal of the adaptive filter
u = sin(2*pi*50*i/Fs);

% adaptive filter architecture
L = 20;
step_size = 0.005;
w = zeros(1, L);

% run the adaptive filter
e(L) = x(L);
for k = L : N
    regressor = flipud(u(k-L+1:k));    % most recent L reference samples
    w = w + step_size * regressor' * e(k);
    e(k+1) = x(k) - w * regressor;
end

% compute the spectrum of the initial signal and the filtered signal
f = [0 : Fs/N : Fs - Fs/N]';
F = abs(fft(x));
E = abs(fft(e));

% plot
figure;
subplot(411); plot(x); title('initial signal');
subplot(412); plot(e); title('initial signal after filtering');
subplot(413); plot(f, F(1:length(f))); title('spectrum of initial signal');
subplot(414); plot(f, E(1:length(f))); title('spectrum of initial signal after filtering');

Simulation results:
% removal of 50 Hz noise (power line interference) and 0.5 Hz noise
% (baseline wandering) from ECG signal using
% adaptive filtering algorithm

clear all

Fs = 1000;
N = 1000;
i = [0 : N-1]';
p = ecg(500).';
p1 = [p; p];
figure
plot(p1)
% create the initial signal
x = p1 + sin(2*pi*0.5*i/Fs) + sin(2*pi*50*i/Fs);
figure
plot(x)
% create the reference signal of the adaptive filter
u = sin(2*pi*0.5*i/Fs) + sin(2*pi*50*i/Fs);

% adaptive filter architecture
L = 20;
step_size = 0.005;
w = zeros(1, L);

% run the adaptive filter
e(L) = x(L);
for k = L : N
    regressor = flipud(u(k-L+1:k));    % most recent L reference samples
    w = w + step_size * regressor' * e(k);
    e(k+1) = x(k) - w * regressor;
end

% plot
figure;
subplot(411); plot(x); title('initial signal');
subplot(412); plot(e); title('initial signal after filtering');
***********************************************************

CONCLUSION

BIBLIOGRAPHY

REFERENCES
Byron JP, He J, Hu S (2005). "A Conceptual Brain Machine Interface System." First
International Conference on Neural Interface and Control Proceedings, Wuhan, China,
pp. 112-116.
Cuiwei L, Chongxun Z, Changfen T (1995). "Detection of ECG Characteristic Points
Using Wavelet Transform." IEEE Trans. on Biomed. Eng., 42(1): 21-28.
Gotman J (2002). "Automatic recognition of epileptic seizures in the EEG."
Electroencephalography and Clinical Neurophysiology, 54: 530-540.
Haykin S (1991). Adaptive Filter Theory. Prentice Hall, London.
Haykin S (1996). "Neural Networks Expand SP's Horizons." IEEE Signal Processing
Magazine, 13(2): 24-29.
Latka M, Ziemowit (2002). "Wavelet analysis of epileptic spikes." Wroclaw University
of Technology, Poland, Dec. 22, 2002, pp. 1-6.
Robert C (2002). "Electroencephalogram Processing using Neural Networks." Clinical
Neurophysiology, 113: 694-701.
Selvan S, Srinivasan R (2001). "Neural Networks-based Efficient Adaptive Filtering
Technique." Volume 82.
