
1. INTRODUCTION

The electroencephalogram (EEG) is a recording of the electric fields produced by neural currents within the brain, measured by placing electrodes on the scalp. The electrical dipoles of the eyes change with eye movements and blinks, producing a signal known as the electrooculogram (EOG). A fraction of the EOG contaminates the recorded brain activity, and these contaminating potentials are commonly referred to as ocular artifacts (OA). In routine data acquisition, OA usually dominate other electrophysiological contaminants (e.g. heart and muscle activity, head and body movements) as well as external interference from power sources. Devising a method for reliable removal of OA from EEG recordings therefore remains a major challenge. A variety of methods have been proposed for correcting ocular artifacts and are reviewed in the literature. One common strategy is artifact rejection: epochs contaminated with OA are simply discarded, but this is laborious and time consuming and often results in a considerable loss of data available for analysis. Eye-fixation methods, in which the subject is asked to close the eyes or fixate on a target, are often unrealistic. The most widely used removal methods are based on regression in the time or frequency domain. All regression methods, whether in the time or frequency domain, depend on having one or more regressing (EOG) channels. Both also share an inherent weakness: the spread of excitation between the eyes and the EEG electrodes is bidirectional, so regression-based artifact removal also eliminates neural potentials common to the reference electrodes and the other frontal electrodes. Another class of methods is based on a linear decomposition of the EEG and EOG leads into source components, identification of the artifactual components, and reconstruction of the EEG without them.

1.1 PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis (PCA) has been used to remove artifacts from the EEG and outperformed the regression-based methods. However, PCA cannot completely separate OA from EEG when both waveforms have similar voltage magnitudes. PCA decomposes the leads into uncorrelated, but not necessarily independent, spatially orthogonal components and thus cannot deal with higher-order statistical dependencies.

1.2 INDEPENDENT COMPONENT ANALYSIS
An alternative approach is independent component analysis (ICA), which was developed in the context of blind source separation to obtain components that are approximately independent. ICA has been used to correct ocular artifacts as well as artifacts generated by other sources. ICA is an extension of PCA that not only decorrelates the data but can also deal with higher-order statistical dependencies; however, ICA components lack the variance-maximization property of PCA components. ICA algorithms are superior to PCA in removing a wide variety of artifacts from the EEG, even when the amplitudes are comparable. Component-based artifact-removal procedures are not fully automated and require visual inspection to select the artifactual components to be removed. An ICA-based method for removing artifacts semi-automatically was presented by Delorme et al.; it automatically flags trials as potentially contaminated, but these trials are still examined and rejected manually via a graphical interface. The results of these studies do not establish the overall best approach for decomposing EEG sensor data into meaningful components, and the methods have not been completely validated by their authors. The estimated source signals (obtained from any ICA algorithm) should be as independent as possible (or as weakly dependent as possible) for good artifact removal, since only the estimated sources, whether selected by visual inspection or by an automated procedure, are classified as EEG or artifact. In practice, however, the components estimated by ICA algorithms are not tested for their actual independence and uniqueness.

1.3 TATJANA ZIKOV'S WAVELET-BASED DE-NOISING TECHNIQUE
Tatjana Zikov proposed a wavelet-based de-noising technique for removal of ocular artifacts from the EEG. This method relies neither on a reference EOG channel nor on visual inspection.

However, the threshold limit was estimated from uncontaminated baseline EEG recorded from the same subject. Various non-adaptive thresholding methods have used different threshold limits and thresholding functions for ocular artifact correction; an appropriate threshold limit calculated from statistical averages of the contaminated EEG signal, together with a thresholding function for OA removal, has been reported, which makes the algorithm data independent. However, that threshold limit is empirically selected, non-adaptive and context sensitive, and needs further investigation. A nonlinear time-scale adaptive de-noising system based on a wavelet shrinkage scheme has also been used for removing OA from EEG. The time-scale adaptive algorithm is based on Stein's Unbiased Risk Estimate (SURE) and uses a soft-like thresholding function that searches for optimal thresholds with gradient-based adaptive algorithms. De-noising EEG with this algorithm yields better results, in terms of ocular artifact reduction and recovery of the background EEG activity, than non-adaptive thresholding methods. However, because the wavelet-based EOG correction is applied to the entire length of the EEG signal, it thresholds both low-frequency and high-frequency components even in non-OA zones. This project describes a novel, robust method for eliminating electro-ocular contamination from EEG signals in which the Stationary Wavelet Transform (SWT) is used to de-noise the corrupted EEG.

1.4 STATISTICAL ANALYSIS
The statistical analysis of electrical recordings of brain activity by electroencephalography is a major problem in neuroscience. Cerebral signals have several origins, which makes their identification complex. Noise removal is therefore essential to ease data interpretation and representation and to recover a signal that faithfully reflects brain function. A common problem during clinical recording of the EEG is the ocular artifacts produced by eye blinks and movements of the eyeballs. It has been known for some time that the alpha rhythm of the EEG, the principal resting rhythm of the brain in awake adults, is directly influenced by visual stimuli. Auditory and mental arithmetic tasks with the eyes closed lead to strong alpha waves, which are suppressed when the eyes are opened. This property of the EEG has been used, largely ineffectively, to detect eye blinks and movements. The slow response of thresholding, the failure to detect fast eye blinks and the lack of an effective de-noising technique forced researchers to study the frequency characteristics of the EEG as well. Current independent component analysis (ICA) methods of artifact removal require a tedious visual classification of the components; methods have been proposed that automate this process and remove multiple types of artifact simultaneously. A number of methods deal with ocular artifacts in the EEG, and their relative merits have been compared. EEG data sets may contain specific components or events that help clinicians in diagnosis. These tend to be transient (localized in time), prominent over certain scalp regions (localized in space) and restricted to certain ranges of temporal and spatial frequencies (localized in scale). There has been a tremendous amount of activity and interest in applying wavelet analysis to such signals, in particular wavelet thresholding and shrinkage methods for removing additive noise from corrupted biomedical signals and images.

1.5 WAVELET ANALYSIS
Wavelet analysis provides flexible control over the resolution with which neuro-electric components and events are localized in time, space and scale. Building on the basic concepts of wavelet analysis, a method has been discussed that automatically identifies slowly varying ocular artifact zones and applies a wavelet-based adaptive thresholding algorithm only to those zones, avoiding the removal of background EEG information. The fundamental motivation behind these approaches is that the statistics of many real-world signals are substantially simplified when they are wavelet transformed. Wavelet transforms are used to analyze time-varying, non-stationary signals, and the EEG falls into this category. The ability of the wavelet transform to resolve the EEG accurately into specific time and frequency components has led to several analysis applications, one of which is de-noising. The wavelet transform of the noisy signal generates wavelet coefficients that represent the correlation between the noisy EEG and the wavelet function.

Depending on the choice of mother wavelet, the larger coefficients correspond to the noise-affected zones and provide an estimate of the noise; wavelet-based de-noising of the EEG signal has accordingly been proposed to correct for the presence of the ocular artifact. In this project, we propose a simple statistical, empirical de-noising formula for removing artifacts from EEG signals without using any reference signals. This formula greatly reduces complexity and computation time.
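The idea behind this statistical formula can be sketched as follows. This is a minimal, illustrative MATLAB sketch, not the exact project implementation: the simulated segment, the wavelet name, the decomposition depth and the constant k = 0.1 are assumptions chosen to mirror the code listed in Section 6.

% Minimal sketch: statistical thresholding of SWT approximation coefficients.
fs = 128;  t = (0:127)/fs;
x  = 20*sin(2*pi*10*t) + 80*exp(-((t-0.5).^2)/0.01);   % simulated EEG with a blink-like transient
[swa, swd] = swt(x, 4, 'coif3');        % 4-level stationary wavelet transform
a4 = swa(4, :);                          % level-4 approximation (slow, EOG-dominated activity)
T  = mean(a4) + 0.1*std(a4);             % empirical statistical threshold (k = 0.1 assumed)
a4(a4 > T) = T;                          % limit coefficients that exceed the threshold
swa(4, :) = a4;
clean = iswt(swa, swd, 'coif3');         % reconstruct the corrected EEG segment
plot(t, x, t, clean); legend('contaminated', 'corrected');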

2. ARTIFACTS

Ocular activity creates significant artifacts in the electroencephalogram (EEG). Epochs contaminated by ocular artifacts can be manually excised, but at the cost of intensive human labor and substantial data loss. Alternatively, correction procedures can distinguish brain electrical activity from ocular potentials using regression-based or component-based models. Traditional ocular artifact correction procedures use a regression-based approach. Regression analyses are used to compute propagation factors or transmission coefficients that define the amplitude relation between one or more electrooculogram (EOG) channels and each EEG channel; correction then involves subtracting the estimated proportion of the EOG from the EEG. One concern often raised about the regression approach is bidirectional contamination: if ocular potentials can contaminate EEG recordings, then brain electrical activity can also contaminate the EOG recordings. Therefore, subtracting a linear combination of the recorded EOG from the EEG may remove not only ocular artifacts but also interesting cerebral activity. To reduce the cerebral activity in the EOG, low-pass filtering of the EOG signal used to compute the regression coefficients has been suggested; however, low-pass filtering removes all high-frequency activity from the EOG signal, of both cerebral and ocular origin. A more recent filtering approach for regression-based correction uses Bayesian adaptive regression splines: a locally defined nonlinear filter removes high-frequency activity when the amplitude fluctuations are small and retains it when the amplitude fluctuations are large. Such adaptively filtered EOG essentially isolates the activity typically associated with ocular artifacts and removes cerebral activity, so adaptive filtering prior to regression correction may substantially reduce problems from bidirectional contamination. Another class of methods is based on decomposing the EEG and EOG signals into spatial components, identifying the artifactual components and reconstructing the EEG without them.
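To make the regression-based correction concrete, here is a minimal MATLAB sketch added for illustration (it is not part of the original report; the single-channel, zero-lag model and the simulated signals are assumptions):

% Regression-based EOG correction: estimate one propagation factor and subtract.
fs  = 256;  t = (0:fs-1)/fs;
eog = 100*exp(-((t-0.5).^2)/0.005);                   % simulated blink on the EOG channel
eeg = 10*sin(2*pi*10*t) + 0.3*eog + randn(size(t));   % EEG = brain activity + propagated blink + noise
b       = eog(:) \ eeg(:);                            % least-squares propagation factor
eegCorr = eeg - b*eog;                                % subtract the estimated ocular contribution
plot(t, eeg, t, eegCorr);
legend('contaminated EEG', 'regression-corrected EEG');

In practice one such coefficient is estimated per EEG channel (and per EOG channel), and, as noted above, any brain activity present in the EOG channel is subtracted along with the blink.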

3. DE-NOISING TECHNIQUES

3.1 A SIMPLE DE-NOISING TECHNIQUE
Suppose one has to measure a signal on which external noise is superimposed. Call the true EEG signal S(t) and the external noise n(t), so that the measured signal can be written as

x(t) = S(t) + n(t)        (2)

The only assumptions needed are that S(t) and n(t) are uncorrelated stationary processes, so that the measurement obeys equation (2). Thresholding is a technique used for signal and image de-noising. When we decompose a signal using the wavelet transform, we are left with a set of wavelet coefficients corresponding to the high-frequency sub-bands. These sub-bands contain the details of the data set; if the details are small enough, they can be omitted without substantially affecting its main features. De-noising of the EEG signal is carried out using different combinations of threshold limit, thresholding function and window size. The choice of threshold limit and thresholding function is a crucial step in the de-noising procedure, because it must not remove original signal coefficients and thereby lose critical information in the analyzed data.

3.2 WAVELETS FOR ANALYSING EEG SIGNALS
In statistical settings we are usually concerned with discretely sampled rather than continuous functions, so it is the discrete analogue, the Discrete Wavelet Transform (DWT), that is of primary interest. The wavelet transform [6] has emerged as one of the superior techniques for analyzing non-stationary signals such as the EEG. Its ability to transform a time-domain signal into a joint time and frequency localization helps in understanding the behavior of a signal. The DWT amounts to choosing subsets of the scales a and positions b of the mother wavelet ψ(t).

3.2.1 Discrete Wavelet Transform (DWT)
The DWT amounts to choosing subsets of the scales a and positions b of the mother wavelet ψ(t). Scales and positions are chosen as powers of two, called dyadic scales and positions, {a_j = 2^j ; b_(j,k) = 2^j k} (j and k integers). Equation (1) shows that it is possible to build a wavelet family from any admissible function by dilating ψ(t) by a factor 2^j and translating the resulting function on a grid whose interval is proportional to 2^j. Contracted (compressed) versions of the wavelet function match the high-frequency components, while dilated (stretched) versions match the low-frequency components. By correlating the original signal with wavelet functions of different sizes, the details of the signal can be obtained at several scales. These correlations with the different wavelet functions can be arranged in a hierarchical scheme called a multi-resolution decomposition. The multi-resolution decomposition algorithm separates the signal into details at different scales and a coarser representation of the signal called the approximation.

3.2.2 Stationary Wavelet Transform (SWT)
The basic DWT algorithm can be modified to give a Stationary Wavelet Transform (SWT) that no longer depends on the choice of origin. As a consequence of the subsampling operations in the pyramidal algorithm, the DWT does not preserve translation invariance: a translation of the original signal does not necessarily imply a translation of the corresponding wavelet coefficients. The SWT was introduced to preserve this property. Instead of subsampling, the SWT uses recursively dilated filters to halve the bandwidth from one level to the next. This decomposition scheme is shown in Fig. 3.1.

Fig 3.1 Wavelet Decomposition Scheme

4. WAVELET BASED DE-NOISING

4.1 WAVELET TRANSFORM
The wavelet transform converts a signal from the time domain into the time-frequency domain. If the noise occupies the same frequency bands as the original signal, conventional filtering approaches run into serious difficulty. It is therefore beneficial to use wavelet-based methods, in which the signal is decomposed into a number of frequency bands (scales), the transform coefficients are interpreted and processed scale by scale, and finally the inverse transform is performed. In the DWT, a time-scale representation of the digital signal is obtained using digital filtering techniques. The signal to be analyzed is passed through a low-pass filter G and a high-pass filter H. The output of the low-pass filter gives the approximation coefficients A1[n] and the output of the high-pass filter gives the detail coefficients D1[n]. The decomposition is then repeated on the approximation coefficients A1[n] to yield a new set of approximation coefficients A2[n] and detail coefficients D2[n], while the detail coefficients D1[n] from the previous level are retained as they are. This repeated decomposition of the approximation coefficients is called multiresolution analysis of the signal. The DWT is not a time-invariant transform. When wavelet decomposition is performed progressively, the length of the wavelet coefficient sequence at each scale becomes smaller, which affects the de-noising threshold. The stationary wavelet transform (SWT) overcomes these limitations of the DWT [16]: the signal is never subsampled, and instead the filters are upsampled at each level. Suppose we are given a signal s[n] of length N, where N = 2^J for some integer J. Let h1[n] and g1[n] be the low-pass and high-pass filters defined by an orthogonal wavelet. At the first level of the SWT, the input signal s[n] is convolved with h1[n] to obtain the approximation coefficients a1[n], and with g1[n] to obtain the detail coefficients d1[n].


Because no subsampling is performed, a1[n] and d1[n] are of length N instead of N/2 as in the DWT case. At the next level of the SWT, a1[n] is split into two parts using the same scheme, but with modified filters h2[n] and g2[n] obtained by dyadically upsampling h1[n] and g1[n]. This process is continued recursively: for j = 1, 2, ..., J0 − 1, where J0 < J,

a_(j+1)[n] = (a_j * h_(j+1))[n],   d_(j+1)[n] = (a_j * g_(j+1))[n],

where * denotes convolution, h_(j+1)[n] = UpSample(h_j[n]) and g_(j+1)[n] = UpSample(g_j[n]). Here UpSample(x[n]) is the upsampling operator that inserts a zero between every adjacent pair of elements of x[n].

4.2 WAVELET FAMILIES
There are a number of basis functions that can be used as the mother wavelet for the wavelet transform. Since the mother wavelet produces, through translation and scaling, all of the wavelet functions used in the transformation, it determines the characteristics of the resulting wavelet transform. The details of the particular application should therefore be taken into account, and the appropriate mother wavelet chosen, in order to use the wavelet transform effectively.
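As an aside (not from the original report), the shapes of several of these mother wavelets can be generated directly with the MATLAB Wavelet Toolbox; the wavelet names and iteration count below are illustrative choices:

% Plot a few commonly used orthogonal mother wavelets (compare Figure 4.1).
names = {'haar', 'db4', 'sym4', 'coif3'};
for k = 1:numel(names)
    [phi, psi, x] = wavefun(names{k}, 8);   % scaling function, wavelet and abscissae
    subplot(2, 2, k);
    plot(x, psi);
    title(names{k});
end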


Figure 4.1 Commonly used wavelet functions

Figure 4.1 illustrates some of the commonly used wavelet functions. The Haar wavelet is one of the oldest and simplest wavelets, so any discussion of wavelets starts with it. Daubechies wavelets are the most popular; they represent the foundations of wavelet signal processing and are used in numerous applications. They are also called maxflat wavelets, as their frequency responses have maximum flatness at frequencies 0 and π, a very desirable property in some applications. The Haar, Daubechies, Symlet and Coiflet wavelets are compactly supported orthogonal wavelets; together with the Meyer wavelets they are capable of perfect reconstruction. The Meyer, Morlet and Mexican Hat wavelets are symmetric in shape. Wavelets are chosen based on their shape and their ability to analyze the signal in a particular application.

4.3 WAVELET THRESHOLDING
Thresholding involves the reduction or complete removal of selected wavelet coefficients in order to separate out the noise within the signal. The thresholding method used in wavelet-based de-noising distinguishes between the insignificant coefficients, likely due to crosstalk noise, and the significant coefficients carrying important signal components.

5. WAVELET ANALYSIS

5.1 TRANSFORMATION
Mathematical transformations are applied to signals to obtain information that is not readily available in the raw signal. In the following discussion, a time-domain signal is treated as the raw signal, and a signal that has been transformed by any of the available mathematical transformations as the processed signal. There are many transformations that can be applied, of which the Fourier transform is probably by far the most popular. Most signals in practice are time-domain signals in their raw format: whatever the signal is measuring is a function of time. In other words, when we plot the signal, one axis is time (the independent variable) and the other (the dependent variable) is usually the amplitude, giving a time-amplitude representation. This representation is not always the best one for signal processing applications; in many cases, the most distinctive information is hidden in the frequency content of the signal. The frequency spectrum of a signal is the set of frequency (spectral) components it contains; it shows which frequencies exist in the signal. Intuitively, frequency has to do with the rate of change of something: if a variable changes rapidly we say it has high frequency, if it changes slowly we say it has low frequency, and if it does not change at all we say it has zero frequency. For example, the publication frequency of a daily newspaper is higher than that of a monthly magazine. Frequency is measured in cycles per second, or hertz (Hz). For example, mains electric power is 50 Hz in most of the world (60 Hz in the US); if you plot a 50 Hz electric current, it will be a sine wave passing through the same point 50 times in one second. Consider, for comparison, sine waves at 3 Hz, 10 Hz and 50 Hz.

So how do we measure frequency, or how do we find the frequency content of a signal? The answer is the Fourier transform (FT). If the FT of a time-domain signal is taken, the frequency-amplitude representation of that signal is obtained: a plot with one axis being frequency and the other amplitude, which tells us how much of each frequency exists in the signal. The frequency axis starts from zero and goes up to infinity, and for every frequency there is an amplitude value. For example, if we take the FT of the electric current used in our houses, we will have one spike at 50 Hz and nothing elsewhere, since that signal has only a 50 Hz frequency component. Few other signals, however, have an FT this simple; for most practical purposes, signals contain more than one frequency component. The following shows the FT of the 50 Hz signal:
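A short MATLAB sketch, added here for illustration (the sampling rate and duration are arbitrary choices), computes and plots such a spectrum:

% FT of a 50 Hz sine wave: a single spectral peak at 50 Hz.
fs = 1000;  t = 0:1/fs:1 - 1/fs;          % 1 s of data sampled at 1 kHz
x  = sin(2*pi*50*t);                      % 50 Hz signal
X  = abs(fft(x)) / numel(x);              % magnitude spectrum (two-sided)
f  = (0:numel(x)-1) * fs / numel(x);      % frequency axis in Hz
plot(f(1:numel(x)/2), X(1:numel(x)/2));   % show only the first, non-mirrored half
xlabel('Frequency (Hz)');  ylabel('Amplitude');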


The FT of the 50 Hz signal
One word of caution is in order at this point: when two plots of a spectrum are given, the bottom one often plots only the first half of the top one. For reasons that are not crucial here, the frequency spectrum of a real-valued signal is always symmetric. Since the symmetric part is an exact mirror image of the first part, it provides no additional information and is usually not shown; in the figures corresponding to the FT, only the first half of the symmetric spectrum is shown. Why do we need the frequency information? Often, information that cannot be readily seen in the time domain can be seen in the frequency domain. Consider an example from biological signals such as the ECG (electrocardiogram, a graphical recording of the heart's electrical activity) or the EEG. The typical shape of a healthy ECG is well known to cardiologists, and any significant deviation from that shape is usually considered a symptom of a pathological condition. The pathological condition, however, may not always be obvious in the original time-domain signal. Cardiologists traditionally analyze time-domain ECG signals recorded on strip charts, but newer computerized recorders and analyzers also use frequency information to decide whether a pathological condition exists: a pathological condition can sometimes be diagnosed more easily when the frequency content of the signal is analyzed. There are many other transforms used by engineers and mathematicians; the Hilbert transform, the short-time Fourier transform (more about this later), Wigner distributions and the Radon transform, along with our featured transformation, the wavelet transform, constitute only a small portion of the transforms at their disposal. Every transformation technique has its own area of application, with advantages and disadvantages, and the wavelet transform (WT) is no exception.

5.2 THE WAVELET TRANSFORM
The wavelet transform is a transform of this type: it provides a time-frequency representation. (Other transforms also give this information, such as the short-time Fourier transform and Wigner distributions.) Often a particular spectral component occurring at a particular instant is of interest, and in such cases it is very beneficial to know the time intervals in which these spectral components occur. For example, in EEG analysis the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus such as a flash of light; the latency is the time elapsed between the onset of the stimulus and the response). The wavelet transform provides time and frequency information simultaneously, hence giving a time-frequency representation of the signal. How the wavelet transform works is a different story, best explained after the short-time Fourier transform (STFT). The WT was developed as an alternative to the STFT; it suffices at this point to say that it was developed to overcome some of the resolution-related problems of the STFT.


Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.

Fig 5.1 Wavelet Analysis
Here is what this looks like in contrast with the time-based, frequency-based, and STFT views of a signal:

Fig 5.2 STFT Views of a Signal
You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see How to Connect Scale to Frequency?
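The scale-to-frequency link mentioned above can be explored numerically; the following is a small illustrative MATLAB sketch (the wavelet name, scales and sampling period are assumed values):

% Pseudo-frequencies corresponding to dyadic scales for a db4 wavelet.
delta  = 1/256;                            % sampling period in seconds (assumed 256 Hz sampling)
scales = 2.^(1:6);                         % dyadic scales 2, 4, ..., 64
freqs  = scal2frq(scales, 'db4', delta);   % approximate centre frequency of each scale, in Hz
disp([scales(:) freqs(:)]);                % larger scale -> lower pseudo-frequency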


What Can Wavelet Analysis Do?
One major advantage afforded by wavelets is the ability to perform local analysis, that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity, one so tiny as to be barely visible. Such a signal could easily be generated in the real world, perhaps by a power fluctuation or a noisy switch.

Fig 5.3 Sinusoidal Signal
A plot of the Fourier coefficients (as provided by the fft command) of this signal shows nothing particularly interesting: a flat spectrum with two peaks representing a single frequency. A plot of the wavelet coefficients, however, clearly shows the exact location in time of the discontinuity.

Fig 5.4 Fourier and Wavelet Coefficients of the Signal
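A hedged MATLAB sketch of this comparison (illustrative only; the discontinuity size, the 60 Hz frequency and the wavelet are arbitrary choices):

% A sine wave with a tiny discontinuity: invisible in the spectrum, obvious in the wavelet details.
fs = 1000;  t = 0:1/fs:1 - 1/fs;
x  = sin(2*pi*60*t);
x(500:end) = x(500:end) + 0.01;        % tiny jump half-way through the signal
subplot(3,1,1); plot(t, x);            title('signal');
subplot(3,1,2); plot(abs(fft(x)));     title('Fourier coefficient magnitudes');   % dominated by the 60 Hz peaks
[cA, cD] = dwt(x, 'db4');              % one-level wavelet decomposition
subplot(3,1,3); plot(cD);              title('level-1 detail coefficients');      % spike near the discontinuity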




Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques miss: trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of data than traditional techniques, wavelet analysis can often compress or de-noise a signal without appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets have already proven themselves an indispensable addition to the analyst's collection of tools and continue to enjoy burgeoning popularity today. Compare wavelets with sine waves, which are the basis of Fourier analysis. Sinusoids do not have limited duration: they extend from minus to plus infinity. And where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Fig 5.5 Comparison of Sine Wave with Wavelet
Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, you can see intuitively that signals with sharp changes might be better analyzed with an irregular wavelet than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It also makes sense that local features can be described better with wavelets that have local extent.

5.3 THE DISCRETE WAVELET TRANSFORM
The wavelet series is just a sampled version of the CWT, and its computation may consume a significant amount of time and resources, depending on the resolution required. The Discrete Wavelet Transform (DWT), which is based on sub-band coding, yields a fast computation of the wavelet transform; it is easy to implement and reduces the computation time and resources required. The foundations of the DWT go back to 1976, when techniques to decompose discrete-time signals were devised. Similar work in speech signal coding was named sub-band coding, and in 1983 a related technique called pyramidal coding was developed. Many later improvements to these coding schemes resulted in efficient multi-resolution analysis schemes. In the CWT, signals are analyzed using a set of basis functions related to each other by simple scaling and translation. In the DWT, a time-scale representation of the digital signal is obtained using digital filtering techniques: the signal to be analyzed is passed through filters with different cutoff frequencies at different scales.

5.4 MULTI-RESOLUTION ANALYSIS USING FILTER BANKS
Filters are among the most widely used signal processing functions. Wavelets can be realized by iteration of filters with rescaling. The resolution of the signal, a measure of the amount of detail information it contains, is determined by the filtering operations, and the scale is determined by upsampling and downsampling (subsampling) operations. The DWT is computed by successive low-pass and high-pass filtering of the discrete time-domain signal, as shown in Figure 5.6. This is called the Mallat algorithm or Mallat-tree decomposition. Its significance lies in the manner in which it connects the continuous-time multiresolution to discrete-time filters. In the figure, the signal is denoted by the sequence x[n], where n is an integer; the low-pass filter is denoted by G0 and the high-pass filter by H0. At each level, the high-pass filter produces the detail information d[n], while the low-pass filter associated with the scaling function produces the coarse approximation a[n].
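A minimal MATLAB sketch of one analysis stage of this filter bank, added for illustration (filter alignment and border handling are simplified relative to the toolbox implementation):

% One stage of the Mallat analysis bank: filter with G0/H0, then keep every second sample.
x = randn(1, 256);                    % example discrete-time signal
[LoD, HiD] = wfilters('db4');         % decomposition low-pass (G0) and high-pass (H0) filters
a = conv(x, LoD);  a = a(1:2:end);    % approximation: low-pass filter, then downsample by 2
d = conv(x, HiD);  d = d(1:2:end);    % detail: high-pass filter, then downsample by 2
% a can be fed back into the same stage to produce the next level of the tree.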

Figure 5.6 Three-level wavelet decomposition tree
At each decomposition level, the half-band filters produce signals spanning only half the frequency band. This doubles the frequency resolution, as the uncertainty in frequency is halved. In accordance with Nyquist's rule, if the original signal has a highest frequency of ω, requiring a sampling frequency of 2ω radians, it now has a highest frequency of ω/2 radians and can be sampled at a frequency of ω radians, discarding half the samples with no loss of information. This decimation by 2 halves the time resolution, as the entire signal is now represented by only half the number of samples. Thus, while the half-band low-pass filtering removes half of the frequencies and so halves the resolution, the decimation by 2 doubles the scale. With this approach, the time resolution becomes arbitrarily good at high frequencies, while the frequency resolution becomes arbitrarily good at low frequencies. The filtering and decimation process is continued until the desired level is reached; the maximum number of levels depends on the length of the signal. The DWT of the original signal is then obtained by concatenating all the coefficients a[n] and d[n], starting from the last level of decomposition.


Figure 5.7 Three-level wavelet reconstruction tree
Figure 5.7 shows the reconstruction of the original signal from the wavelet coefficients. Basically, reconstruction is the reverse of decomposition: the approximation and detail coefficients at every level are upsampled by two, passed through the low-pass and high-pass synthesis filters and then added. This process is continued through the same number of levels as in the decomposition to obtain the original signal. The Mallat algorithm works equally well if the analysis filters, G0 and H0, are exchanged with the synthesis filters, G1 and H1.
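As a hedged illustration of this decomposition and reconstruction round trip with the MATLAB Wavelet Toolbox (the wavelet, level and signal are example choices, not prescribed by the report):

% Multi-level DWT and reconstruction with the Mallat algorithm.
x      = randn(1, 512);              % example signal
[C, L] = wavedec(x, 3, 'db4');       % C concatenates [a3, d3, d2, d1]; L holds their lengths
a3     = appcoef(C, L, 'db4', 3);    % coarse approximation at level 3
d1     = detcoef(C, L, 1);           % finest-scale details
xr     = waverec(C, L, 'db4');       % reconstruction from all coefficients
max(abs(x - xr))                     % reconstruction error is at machine-precision level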

5.5 STATIONARY WAVELET TRANSFORM
The discrete stationary wavelet transform (SWT) is an undecimated version of the DWT. The main idea is to average several detail coefficients obtained by decomposing the input signal without downsampling; this can be interpreted as a repeated application of the standard DWT for different time shifts. The SWT is similar to the DWT except that the signal is never subsampled; instead, the filters are upsampled at each level of decomposition.
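A small MATLAB sketch of these two properties, undecimated output and shift behaviour (illustrative only; swt requires the signal length to be divisible by 2 raised to the number of levels):

% SWT keeps full-length coefficients at every level, unlike the decimated DWT.
x = randn(1, 256);                          % length divisible by 2^3
[swa, swd] = swt(x, 3, 'db4');              % 3-level SWT: each row has 256 samples
size(swd)                                   % 3 x 256: one full-length detail sequence per level
[cA, cD] = dwt(x, 'db4');                   % one DWT level: roughly half-length coefficients
numel(cD)

% Circularly shifting the input shifts the SWT coefficients by the same amount.
xs = [x(end-7:end) x(1:end-8)];             % circular shift of the signal by 8 samples
[swaS, swdS] = swt(xs, 3, 'db4');
max(abs(swdS(1, :) - [swd(1, end-7:end) swd(1, 1:end-8)]))   % small (ideally zero): translation invariance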


Figure 5.8 A 3-level SWT filter bank
Each level's filters are upsampled versions of the previous level's filters.

The SWT is an inherently redundant scheme, as each set of coefficients contains the same number of samples as the input, so a decomposition of N levels has a redundancy of 2N.

5.6 ONE-STAGE FILTERING: APPROXIMATIONS AND DETAILS
For many signals, the low-frequency content is the most important part: it is what gives the signal its identity. The high-frequency content, on the other hand, imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what is being said; if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal; the details are the low-scale, high-frequency components.


The filtering process at its most basic level looks like this:

Figure 5.9 Filtering Process
The original signal S passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000. These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There is a more subtle way to perform the decomposition using wavelets: by looking carefully at the computation, we may keep only one point out of two in each of the two sequences and still retain the complete information. This is the notion of downsampling. We produce two sequences called cA and cD.


Figure 5.10 Sampling
The process on the right, which includes downsampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it.
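A minimal MATLAB sketch of exactly this experiment (the frequency, noise level and wavelet are assumed example values):

% One-stage DWT of a noisy sinusoid: cA keeps the sine, cD collects mostly noise.
fs = 1000;  t = 0:1/fs:1 - 1/fs;
s  = sin(2*pi*5*t) + 0.2*randn(size(t));   % pure 5 Hz sinusoid plus high-frequency noise
[cA, cD] = dwt(s, 'db4');                  % single-level decomposition
subplot(3,1,1); plot(s);  title('noisy signal');
subplot(3,1,2); plot(cA); title('approximation cA (low frequency)');
subplot(3,1,3); plot(cD); title('detail cD (high frequency)');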

Figure 5.11 Schematic Diagram


5.7 MULTIPLE-LEVEL DECOMPOSITION
The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower-resolution components. This is called the wavelet decomposition tree.

Figure 5.12 Wavelet Decomposition Tree
Looking at a signal's wavelet decomposition tree can yield valuable information.

Figure 5.13 Wavelet Signal Decomposition Tree


5.8 NUMBER OF LEVELS
Since the analysis process is iterative, in theory it can be continued indefinitely. In reality, the decomposition can proceed only until the individual details consist of a single sample or pixel. In practice, you'll select a suitable number of levels based on the nature of the signal, or on a suitable criterion such as entropy.
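In the Wavelet Toolbox the maximum useful depth can be queried directly; a small illustrative sketch (the signal length and wavelet are example values):

% Maximum recommended decomposition level for a given signal length and wavelet.
N    = 1024;                   % number of samples
Lmax = wmaxlev(N, 'db4');      % deepest level for which decomposition is still meaningful
fprintf('Up to %d levels are recommended for %d samples with db4.\n', Lmax, N);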


6. CODE
function varargout = guimain(varargin)
% GUIMAIN M-file for guimain.fig
%   GUIMAIN, by itself, creates a new GUIMAIN or raises the existing singleton*.
%   H = GUIMAIN returns the handle to a new GUIMAIN or the handle to the existing singleton*.
%   GUIMAIN('CALLBACK',hObject,eventData,handles,...) calls the local function named CALLBACK
%   in GUIMAIN.M with the given input arguments.
%   GUIMAIN('Property','Value',...) creates a new GUIMAIN or raises the existing singleton*.
%   Starting from the left, property value pairs are applied to the GUI before
%   guimain_OpeningFcn gets called. An unrecognized property name or invalid value makes
%   property application stop. All inputs are passed to guimain_OpeningFcn via varargin.
%   *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one instance to run (singleton)".
%   See also: GUIDE, GUIDATA, GUIHANDLES
% Edit the above text to modify the response to help guimain
% Last Modified by GUIDE v2.5 10-Feb-2010 00:59:27

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @guimain_OpeningFcn, ...
                   'gui_OutputFcn',  @guimain_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before guimain is made visible.
function guimain_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to guimain (see VARARGIN)

% Choose default command line output for guimain
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes guimain wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = guimain_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT)
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Load one contaminated EEG channel from the spreadsheet and display it.
t = xlsread('data.xls', 1, 'A1:IV1');
axes(handles.axes1)
plot(t);
title(' EEG signal with Artifacts ');

%%%%%%%%%% de-noising %%%%%%%%%%%%%%%%%%%%%%
% Work on a 128-sample segment (the length must be divisible by 2^level for swt).
t = t(1:128);

% Stationary wavelet decompositions at levels 1 to 4 with the coif3 wavelet;
% the approximation (low-frequency) coefficients of each level are inspected.
[SWA1, SWD1] = swt(t, 1, 'coif3');
S1 = SWA1(1, 1:128);
figure; plot(S1); title('stationary wavelet coefficients at level-1')

[SWA2, SWD2] = swt(t, 2, 'coif3');
S2 = SWA2(2, 1:128);
figure; plot(S2); title('stationary wavelet coefficients at level-2')

[SWA3, SWD3] = swt(t, 3, 'coif3');
S3 = SWA3(3, 1:128);
figure; plot(S3); title('stationary wavelet coefficients at level-3')

[SWA4, SWD4] = swt(t, 4, 'coif3');
S4 = SWA4(4, 1:128);
figure; plot(S4); title('stationary wavelet coefficients at level-4')

%%% applying thresholds %%%%%%%%%%%%%%%%%%
% Statistical threshold at each level: T = mean + k*std of the approximation coefficients.
m1 = mean(S1); s1 = std(S1); T1 = m1 + 0.1*s1
E1 = zeros(size(S1));
for i = 1:length(S1)
    if S1(i) < T1
        E1(i) = S1(i);
    end
end
%%%%%%%%%%%%%%%%%%%%%
m2 = mean(S2); s2 = std(S2); T2 = m2 + 0.1*s2
E2 = zeros(size(S2));
for i = 1:length(S2)
    if S2(i) < T2
        E2(i) = S2(i);
    end
end
%%%%%%%%%%%%%%%%%%%%%%%
m3 = mean(S3); s3 = std(S3); T3 = m3 + 0.1*s3
E3 = zeros(size(S3));
for i = 1:length(S3)
    if S3(i) < T3
        E3(i) = S3(i);
    end
end
%%%%%%%%%%%%%%%%%%%
% Level-4 threshold (larger scaling factor); coefficients above the threshold go to gh.
m4 = mean(S4); s4 = std(S4); T4 = m3 + 1.2*s4
E4 = zeros(size(S4));
gh = zeros(size(S4));
for i = 1:length(S4)
    if S4(i) < T4
        E4(i) = S4(i);
    else
        gh(i) = S4(i);
    end
end

%%% Reconstruction of wavelet coefficients %%%
% The thresholded level-4 coefficients give the estimated EOG; the remainder gives the
% retained fast activity, and the difference of the two reconstructions is the corrected EEG.
X4 = iswt(E4, 'coif3');
figure, plot(t, 'k')
hold on
plot(X4, 'r')
hold off
h = legend('CONTAMINATED EEG', 'Estimated EOG');
set(h, 'Interpreter', 'none')

g4 = iswt(gh, 'coif3');
v4 = X4(1:length(g4)) - g4;
axes(handles.axes2)
plot(t, 'r');
hold on
plot(v4, 'k')
hold off
h = legend('CONTAMINATED EEG', 'CORRECTED EEG');

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
close all;


7. SIMULATION RESULTS AND GRAPHS

Plot: contaminated (input) EEG signal, amplitude versus sample index (about 140 samples).
Plots: contaminated EEG overlaid with the estimated EOG (four plots).
Plots: contaminated EEG overlaid with the corrected EEG (four plots).
Plot: power spectral density (PSD) of the contaminated EEG and of the corrected EEG.

8. CONCLUSIONS
The accuracy of the technique has been checked on several artifact-contaminated signals. In this project, a method to remove ocular artifacts using a new threshold formula and thresholding function has been presented. The method gives good results with little complexity and retains the original information contained in the EEG signal; power spectral density and correlation plots are used as performance metrics. We conclude that the proposed statistical method has low complexity and removes the artifacts easily with the help of wavelet decomposition. It is an efficient technique for improving the quality of EEG signals in biomedical analysis.

