
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 55, NO. 4, APRIL 2008

An ECG Signals Compression Method and Its Validation Using NNs

Catalina Monica Fira* and Liviu Goras, Senior Member, IEEE

Abstract—This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extreme extraction, adaptive hysteretic filtering, and Lempel–Ziv–Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multilayer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of applying principal component analysis (PCA) to cardiac pattern classification have been investigated as well. A new compression measure called the quality score, which takes into account both the reconstruction errors and the compression ratio, is proposed.
Index Terms—Biomedical signal processing, data compression, neural networks (NNs), signal processing.

I. INTRODUCTION

Compression methods have gained importance in recent years in many medical areas such as telemedicine and health monitoring, all of which imply the storage, processing, and transmission of large quantities of data. Compression methods can be classified into two main categories: lossless and lossy. Compression algorithms can be constructed through direct methods, linear transformations, and parametric methods.
Even though many compression algorithms have been reported so far in the literature, not so many are currently used
in monitoring systems and telemedicine. The most important
reason seems to be the fear that the recovery distortions produced by compression methods with loss of information might
lead to erroneous interpretations. The aim of this paper is to
propose a new low complexity compression method leading to
compression ratios better than 15:1 and to suggest a qualitative validation of the compression results through classifications
based on neural networks (NNs).
The following provides a summary of previous work investigating the problem of ECG compression.
Direct methods such as the turning point (TP) [1], amplitude zone time epoch coding (AZTEC) [2], coordinate reduction time encoding system (CORTES) [3], scan-along polygonal approximation (SAPA) [4], and entropy coding [5] are based on the extraction of a subset of significant samples.

Manuscript received February 27, 2007. Asterisk indicates corresponding author.


*C. M. Fira is with the Institute for Computer Science, Bd. Carol I 22A, Iasi
700505, Romania (e-mail: mfira@scs.etc.tuiasi.ro).
L. Goras is with the Faculty of Electronics and Telecommunications, Gh.
Asachi Technical University, Iasi 700505, Romania, and also with the Institute
for Computer Science, Iasi 700505, Romania (e-mail: lgoras@etc.tuiasi.ro).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TBME.2008.918465

The methods based on linear transformations use various linear transformations (Fourier, Walsh, cosine, Karhunen–Loève, wavelet, etc. [6]–[9]) to code a signal through the most significant coefficients of its representation with respect to a particular basis chosen by means of an error criterion.
Parametric methods, more recently reported in the literature, are combinations of direct and transformation techniques, typical examples being the beat codebook [10], artificial NNs [11], peak picking, and vector quantization [11].
The three important features of a compression algorithm are the compression measure, the reconstruction error, and the computational complexity, the first two being interdependent. The computational complexity is directly related to practical implementation considerations and needs to be as low as possible, especially for portable equipment [12].
The compression ratio (CR) is defined as the ratio between the number of bits needed to represent the original and the compressed signal. The compression efficiency of an algorithm can also be evaluated using the bit rate in bits per second (BPS), i.e., the number of bits in the compressed data divided by the original signal duration, and/or bits per sample (b/sample), i.e., the number of bits in the compressed data divided by the number of samples of the original signal.
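As a minimal illustration, the three measures can be computed as follows (a sketch in Python; the function and variable names are ours, not from the paper):

    def compression_measures(bits_original, bits_compressed, n_samples, duration_s):
        """Compression ratio, bit rate, and bits per sample, as defined above."""
        cr = bits_original / bits_compressed         # compression ratio (CR)
        bps = bits_compressed / duration_s           # bits per second (BPS)
        b_per_sample = bits_compressed / n_samples   # bits per sample
        return cr, bps, b_per_sample

For instance, a 10 000-sample record encoded with 11 bits per sample at 360 samples/s occupies 110 000 bits over roughly 27.8 s, which fixes bits_original, n_samples, and duration_s.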
For lossy compression techniques, the definition of the error
criterion to appreciate the distortion of the reconstructed signal
with respect to the original one is of paramount importance,
particularly for biomedical signals like the electrocardiogram
(ECG), where a slight loss or modification of information can
lead to wrong diagnostics. The measurement of these distortions is a difficult problem and it is only partially solved for
biomedical signals. In most ECG compression algorithms, the
percentage root-mean-square difference (PRD) measure defined
as

$$\mathrm{PRD}=\sqrt{\frac{\sum_{n=1}^{N}\left[x(n)-\tilde{x}(n)\right]^{2}}{\sum_{n=1}^{N}x^{2}(n)}}\times 100 \qquad (1)$$

is employed, where $x(n)$ is the original signal, $\tilde{x}(n)$ is the reconstructed signal, and $N$ is the length of the window over which the PRD is calculated. The normalized version of the PRD, the PRDN, which does not depend on the signal mean value, is defined as

$$\mathrm{PRDN}=\sqrt{\frac{\sum_{n=1}^{N}\left[x(n)-\tilde{x}(n)\right]^{2}}{\sum_{n=1}^{N}\left[x(n)-\bar{x}\right]^{2}}}\times 100 \qquad (2)$$

where $\bar{x}$ is the mean value of the original signal.
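For concreteness, (1) and (2) translate directly into NumPy (a sketch; x and x_rec are assumed to be 1-D arrays of equal length):

    import numpy as np

    def prd(x, x_rec):
        """Percentage RMS difference, Eq. (1)."""
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    def prdn(x, x_rec):
        """Normalized PRD, Eq. (2): insensitive to the signal mean."""
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum((x - x.mean()) ** 2))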


Fig. 1. Scheme of the proposed compression method.

Other measures such as the root-mean-square error (RMS) and the signal-to-noise ratio (SNR) are used as well [1]. In order to evaluate the relative preservation of the diagnostic information in the reconstructed signal compared to the original one, Zigel [13], [14] introduced a new measure, called weighted diagnostic distortion (WDD), which consists in comparing the P- and T-wave and QRS-complex features of the two ECG signals; however, this is not always easy to use.
In all cases, the final verdict regarding the fidelity and clinical acceptability of the reconstructed signal should be validated
through visual inspection by the cardiologist physician.
II. COMPRESSION METHOD
The proposed compression method [15] is a combination of signal processing techniques (resembling the peak-picking compression techniques [12], [16]) and techniques based on information transmission theory. It can be viewed as a cascade of two stages, as shown in Fig. 1. In the first stage, the essential information from the ECG signal is extracted, and in the second one, the resulting information is delta and Lempel–Ziv–Welch (LZW) coded.
A. Coding Method
The preprocessing stage consists of filtering with a sixth-degree Savitzky–Golay filter (SGF) using a constant 17-point window. SGFs, also called digital smoothing polynomial filters or least-squares smoothing filters, are typically used to smooth out a noisy signal whose frequency span (without noise) is large. Compared to finite-impulse response (FIR) filters, which are good at rejecting high-frequency noise, SGFs are more efficient in preserving the high-frequency components of the signal [17]. The parameters of the Savitzky–Golay filter have been adopted empirically after testing various polynomial degrees and window dimensions. It has been found that smaller polynomial degrees and larger windows lead to amplitude distortion of the R-waves, while higher polynomial degrees and smaller windows lead to insignificant filtering.
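With SciPy, this preprocessing step amounts to a single call (a sketch; the input file name is hypothetical, and the signal is assumed to be a 1-D array sampled at 360 Hz):

    import numpy as np
    from scipy.signal import savgol_filter

    ecg = np.loadtxt("ecg_record.txt")  # hypothetical raw ECG record
    # Sixth-degree polynomial over a constant 17-point window, as in the text.
    ecg_filtered = savgol_filter(ecg, window_length=17, polyorder=6)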
The next step consists in extracting and rounding the local minima and maxima values from the filtered ECG signal, which is equivalent to a nonuniform sampling followed by a quantization of both amplitude and position. We will call the resulting discrete signal with nonuniformly spaced samples the signal skeleton. Knowing the location and the amplitude of the local extrema, it is possible, in a first approximation, to reconstruct most parts of the ECG signal without loss of relevant information.
In order to improve the compression rate without significantly
increasing the distortion error, some of the skeleton samples are
discarded while a few others are added, as discussed in the following.

Thus, samples whose difference from the previous ones is less than a threshold TH are discarded. This is done in two steps, with an adaptive hysteretic filtering based on the statistics of the ECG signal.
The calculation of TH consists in the computation of a first threshold, denoted th1,

$$th_{1}=\mathrm{ST}\left(\left|s(i)-s(i-1)\right|\right) \qquad (3)$$

where ST denotes the standard deviation, here of the absolute amplitude differences |s(i) − s(i−1)|, and s(i) represents the amplitude of the i-th sample of the skeleton. The aim of this threshold is to select samples with a relatively small variation that do not convey relevant information (they can be considered noise) and that will determine the level of TH.
The standard deviation of the skeleton samples having amplitude variations |s(i) − s(i−1)| less than th1 is then calculated. The threshold TH is determined using the formula

$$TH=k\cdot \mathrm{ST}\left(\left|s(i)-s(i-1)\right| \,:\, \left|s(i)-s(i-1)\right|<th_{1}\right) \qquad (4)$$

where the most convenient values for k have been found to be between 1.5 and 2.5. Values outside the above interval have been tested as well. Those below 1.5 lead to a poor filtering of the extreme values representing noise, while values higher than 2.5 lead to distortion errors of the reconstructed signal, as will be shown later in Fig. 6.
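Under one plausible reading of (3) and (4), the adaptive threshold and the hysteretic discarding step can be sketched as follows (the exact formulas in the paper may differ in detail; at least one below-th1 difference is assumed to exist):

    import numpy as np

    def adaptive_threshold(amps, k=2.0):
        """th1: std of all skeleton amplitude differences; TH: k times the
        std of the differences below th1 (k between 1.5 and 2.5)."""
        dif = np.abs(np.diff(amps))
        th1 = dif.std()                 # Eq. (3), as described in the text
        small = dif[dif < th1]          # low-variation (noise-like) samples
        return k * small.std()          # Eq. (4)

    def hysteretic_filter(idx, amps, th):
        """Keep a skeleton sample only if it differs from the last kept one
        by at least TH."""
        keep_i, keep_a = [idx[0]], [amps[0]]
        for i, a in zip(idx[1:], amps[1:]):
            if abs(a - keep_a[-1]) >= th:
                keep_i.append(i)
                keep_a.append(a)
        return np.array(keep_i), np.array(keep_a)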
The reconstruction errors based on the skeleton obtained in the previous manner proved to be acceptable except for the zones of the QRS complexes, where adjacent skeleton samples are rather far from each other. The error can be further decreased by adding extra samples to the skeleton resulting after the application of the threshold TH. The location of the intermediary points which are added to the skeleton is determined through a third threshold, TH2, as follows: where the absolute value of the difference between two successive amplitudes is higher than TH2, a sample of the original signal taken in the middle of the distance between the skeleton samples is added to the skeleton.
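The enrichment step then inserts a mid-gap sample of the original signal wherever two successive skeleton amplitudes differ by more than TH2 (a sketch; th2 is taken as a given parameter here):

    def enrich(idx, amps, signal, th2):
        """Add original-signal samples midway between skeleton samples whose
        amplitude difference exceeds th2."""
        out_i, out_a = [idx[0]], [amps[0]]
        for i0, i1, a0, a1 in zip(idx[:-1], idx[1:], amps[:-1], amps[1:]):
            if abs(a1 - a0) > th2:
                m = (i0 + i1) // 2              # midpoint between the samples
                out_i.append(m)
                out_a.append(round(signal[m]))
            out_i.append(i1)
            out_a.append(a1)
        return out_i, out_a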
It has been found that a convenient formula for the value of the TH2 threshold is

(5)

where N is the length of the signal. Convenient values of the constants in (4) and (5) have been found empirically by giving them various values. It has been observed that these constants significantly affect the compression results based only on the described preprocessing method. For example, for one choice of the constants, a mean compression rate and a PRD of 4.55 and 0.77, respectively, have been obtained, while for another the compression rate and the PRD were 6.51 and 0.91, respectively.


Fig. 2. Original signal (continuous line) and skeleton after hysteretic filtering (including extra samples).

Fig. 3. Segmented ECG signal.

Fig. 2 represents a part of record no. 100 from the MIT-BIH Arrhythmia database¹ as well as the points of the enriched skeleton obtained using the procedure described before.
In the last two steps, the obtained skeleton is delta coded for both amplitudes and distances, and the result is LZW coded. LZW coding [18], [19] is a lossless dictionary-based compression algorithm which looks for repetitive sequences of data and uses them to build a dictionary.
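Both steps are standard; a compact sketch follows (assuming the delta-coded values have already been mapped to nonnegative byte values, a detail the paper does not specify):

    def delta_encode(v):
        """First value kept, then successive differences."""
        return [v[0]] + [b - a for a, b in zip(v[:-1], v[1:])]

    def lzw_encode(data):
        """Textbook LZW over a sequence of byte values."""
        table = {bytes([i]): i for i in range(256)}
        w, out = b"", []
        for s in bytes(data):
            c = w + bytes([s])
            if c in table:
                w = c                      # extend the current match
            else:
                out.append(table[w])       # emit code for the longest match
                table[c] = len(table)      # grow the dictionary
                w = bytes([s])
        if w:
            out.append(table[w])
        return out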
B. Decoding
The reconstruction of the ECG signal from its compressed version begins with the LZW decoding of the enriched skeleton, leading to two vectors representing the amplitudes and the positions of the skeleton lines. The ECG reconstruction is made through linear or cubic interpolation. Obviously, linear interpolation implies a smaller computational complexity at the price of a higher reconstruction error. Even though the reconstruction errors expressed in PRD and PRDN are smaller in the case of cubic interpolation, it has been found that the reconstruction distortions evaluated by means of the NN-based pattern classification method (to be further described in this paper) give comparable results, without significant differences in cardiac pattern classification, as expected from pure visual inspection.
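Reconstruction from the skeleton is then a one-line interpolation (a sketch; idx and amps stand for the decoded position and amplitude vectors):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def reconstruct(idx, amps, n_samples, kind="cubic"):
        """Rebuild the ECG on a uniform grid by linear or cubic interpolation."""
        t = np.arange(n_samples)
        if kind == "linear":
            return np.interp(t, idx, amps)
        return CubicSpline(idx, amps)(t)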
C. Evaluation of the Compression Method
As shown before, compression methods can be evaluated by
means of classical distortion measures like PRD, PRDN, RMS,
and SNR.
In the following, we propose an alternative estimation of the
reconstruction errors based on a multi-layer perceptron (MLP)
NN trained and tested initially with original heartbeat patterns
and then tested with reconstructed signals. The validation of
the method has been performed for eight classes of segmented
heartbeats for which the confusion matrix has been studied. The
eight envisaged classification classes are the most frequent types
of heartbeats, i.e., atrial premature beat, normal beat, left bundle
branch block beat, right bundle branch block beat, premature
ventricular contraction, fusion of ventricular and normal beat,
paced beat, fusion of paced and normal beat.
¹[Online]. Available: http://www.physionet.org/physiobank/database/mitdb/

The segmentation was done with respect to the R-wave localization using the wave detection method presented in [20].
The detection algorithm consists of the Pan–Tompkins preprocessing technique and a modified R-wave detection with low
computational complexity. Following the R-wave detection, the
segmentation has been done as follows: a pattern begins from
the middle of the RR interval between the previous and current
heartbeat and finishes at the middle of the next RR interval. A
sequence of the original ECG signal and its segmentation into
cardiac beats based on the R-waves localization is presented in
Fig. 3. Each segment contains the P-wave, the QRS complex
and the T-wave. Failed or false detections of the R-wave will result in wrong segmentations. This will not affect the compression rates but will lead to low recognition rates. Thus, the proposed qualitative method of compression evaluation is significantly influenced by the segmentation accuracy.
To evaluate the precision of the reconstructed signal, a trained MLP neural network has been used for classification. For training and testing, the original heartbeat patterns of the ECG signals (normal and seven cardiac pathologies) from the database and, respectively, the corresponding heartbeat patterns of the compressed signal have been used. As the segmented output patterns had different dimensions, for training the MLP neural network each pattern was resampled to 100 samples. This value has been chosen to decrease the dimension of the patterns while preserving the waveform. Even though the initial number of samples was about 300, no practical loss of information occurred through resampling (visual inspection made by the specialist physician). The 300-sample cardiac patterns have been decimated to 100 samples as follows. First, a resampling of the ECG has been achieved by introducing extra samples through linear interpolation, which were then passed through a low-pass FIR filter with a 90-Hz cutoff frequency, followed by decimation. The resampling of the ECG segments did not affect the waveform.
For training the MLP network, a back-propagation algorithm with gradient descent and cross-validation was used. A total of 7500 patterns have been used, of which 70% for training, 15% for the validation set, and 15% for testing.


TABLE I
AVERAGE RESULTS AFTER THE FEATURE EXTRACTION STAGE OF THE
COMPRESSION ALGORITHM FOR 24 RECORDS

TABLE II
AVERAGE FINAL RESULTS OF THE COMPRESSION ALGORITHM FOR 24 RECORDS

The patterns in the three sets were distinct. The following results have been obtained using records no. 100, 101, 103, 104, 106, 118, 119, 200, 201, 205, 207, 210, 213, 214, 217, and 219 for training, and records no. 102, 105, 107, 202, 203, 208, 209, 212, and 215 for testing and validation. From these last records,
the cardiac patterns have been divided into two data sets, validation set, and testing set. These sets have been obtained through
a random selection mechanism, each pattern being selected either in the testing set or in the validation set but never in both of
them.
The error function used was the mean square error and the
stop criterion was the least error for the validation set.
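A comparable setup in scikit-learn might look as follows (a sketch with placeholder data; note that MLPClassifier minimizes log-loss rather than the mean square error used in the paper, and early_stopping only approximates the validation-based stop criterion):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Placeholders for the 7500 segmented beats (100 samples, 8 classes).
    X = np.random.randn(7500, 100)
    y = np.random.randint(0, 8, size=7500)

    clf = MLPClassifier(hidden_layer_sizes=(50,),   # the 100-50-8 topology
                        solver="sgd",               # gradient-descent training
                        early_stopping=True,        # stop on validation error
                        validation_fraction=0.15,
                        max_iter=500)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))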
As an alternative to the previous method, the possibility of heartbeat classification using PCA applied to the patterns obtained from both the original and the compressed ECG has also been investigated, using an MLP trained and tested with the principal components corresponding to the most significant eigenvalues.
Fig. 4. Original and reconstructed ECG signals for record no. 217 (best case: CR = 34:1, BPS = 127, and PRD = 4.13%).

III. EXPERIMENTAL RESULTS


In order to validate the proposed classification algorithm
and to compare it with other classification methods, 24 records
consisting of the first 10 000 samples from the MIT-BIH
Arrhythmia database1 have been used. The ECG signals were
digitized through sampling at 360 samples/s, quantized and
encoded with 11 bits.
Even though, as shown before, linear interpolation has been
used as well, the results reported in this paper are based only on
cubic interpolation.
Since the variability of the signal around its baseline is what
should be preserved and not the baseline itself, the performance
measure used to reveal the accuracy of the algorithm was the
variance of the error with respect to the variance of the signal.
The average compression ratios and the average PRD and PRDN for the 24 data records show very low values, as shown in Table I.
After the first preprocessing stage, the highest compression ratio was achieved for record no. 217 and the lowest for record no. 232. The average values for the first stage were 7.41:1 for the compression ratio and 1.17% for the PRD, with extreme values of 0.29% and 4.13%. Only for five records have much higher values than the mean been obtained, due to the presence of noise.
Fig. 5. Original and reconstructed ECG signals for record no. 232 (worst case: CR = 6.37:1, BPS = 678, and PRD = 0.29%).

In the second stage, the two vectors (amplitude and location) representing the skeleton from the previous stage were delta coded and then compressed with LZW. For the location and amplitude vectors, the average compression rates were 2:1 and 3:1, respectively.
The global average compression rate obtained with the algorithm was 18.27:1 (see Table II). Since the LZW compression
is lossless, the PRD is conserved.
High-quality reconstructed signals were obtained, as shown in Figs. 4 and 5, where the best and worst cases are presented. Good reconstruction has been obtained in all cases, including record no. 232, for which both the compression ratio and the PRD were the lowest. After using the LZW algorithm, the highest and lowest compression ratios were achieved for records no. 217 and 232 (CR of 34:1 and 6.37:1, respectively). The global results for all 24 records are shown in Table III.
For all cases presented so far, the value of the parameter k used for the calculation of TH was equal to 2. The compression results for the case when k took values between 1.5 and 2.5 are shown in Table IV. However, a conclusion about compression quality cannot be drawn using only the values in Table IV. Visual inspection showed that the acceptable values


TABLE III
RESULTS OF THE COMPRESSION ALGORITHM FOR 24 RECORDS

Fig. 6. Original and reconstructed ECG signals for record no. 100 and k = 2.5.

TABLE V
COMPARISON BETWEEN THE PROPOSED METHOD AND OTHER COMPRESSION
ALGORITHMS FOR RECORD NO. 117

TABLE IV
RESULTS OF THE COMPRESSION ALGORITHM FOR SEVERAL VALUES OF k
IN RELATION (4)

for k should not surpass 2.3, in agreement with the small PRD and the cardiologist's opinion. From the example in Fig. 6, it is obvious that the results for k = 2.5 involve unacceptable distortions for the cardiologist.

The optimal value of k has been established in close agreement with the opinion of a cardiologist who visually inspected all 24 original and reconstructed records for various values of k between 1 and 3.
The compression evaluations found in the literature envisage either the reconstruction distortions or a quantitative description of the compression itself. To our knowledge, a measure taking both aspects into consideration has not been used so far. This is why we define a compression measure called the quality score (QS) as the ratio between the CR and the PRD. A high score represents a good compression. The QS may be very useful when it is difficult to choose the best compression method while taking into consideration the compromise between compression and reconstruction errors. As an example, it has been possible to compare three compressed records with close values of CR and PRD, i.e., records no. 117, 119, and 104, by means of their quality scores.


TABLE VI
COMPARISON BETWEEN THE PROPOSED METHOD AND OTHER COMPRESSION
ALGORITHMS FOR AVERAGE VALUES FOR 24 RECORDS

TABLE VII
RESULTS OF PATTERN CLASSIFICATION WITH VARIOUS MLP CONFIGURATIONS
(ORIGINAL PATTERNS)

It is implicitly assumed that the decoded ECG signals have been validated through visual inspection by the physician. For the 24 records analyzed, an average quality score (QS) of 20.73 was obtained, with a maximum value of 45.56 for record no. 100 and a minimum value of 7.69 for record no. 214 (see Table III).
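The quality score itself is a one-line computation; for instance, with the best-case figures of Fig. 4:

    def quality_score(cr, prd):
        """QS = CR / PRD (PRD in percent); higher is better."""
        return cr / prd

    print(quality_score(34.0, 4.13))   # record no. 217: QS of about 8.2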
The results obtained from the classification with ECG patterns depend on the MLP configuration. The results of the classification using segmented original ECGs are presented in Table VII. For the MLP 100-50-50-8 configuration, a classification rate of 92.25% has been obtained, compared to 91.54% obtained using the 100-50-8 configuration. The complexity of the two-hidden-layer NN and its disadvantages over the one-hidden-layer network, as well as the good classification results obtained with the latter (100-50-8), were the reasons for not using two-hidden-layer networks. The good classification results obtained with an NN trained and tested with original patterns certified the relevance of the cardiac beat classification method in the eight proposed classes. The confusion matrix (see Table VIII) obtained in the case of the 100-50-8 configuration presents a uniform repartition of the classification rate between the eight classes used.
Starting from the previous results, using the training of an
MLP (100-50-8 configuration) with original patterns and testing

TABLE VIII
CONFUSION MATRIX FOR ORIGINAL PATTERN CLASSIFICATION WITH A
100-50-8 MLP (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE
NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

TABLE IX
CONFUSION MATRIX FOR A 100-50-8 MLP TRAINED WITH ORIGINAL PATTERNS AND TESTED WITH RECONSTRUCTED SIGNALS (CLASSIFICATION ACCURACY 83.5%) (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

with patterns derived from the compressed signal, a classification ratio of 83.5% has been obtained. The confusion matrix
presented in Table IX proves the same uniform distribution of
the classification rate across the eight classes and validates the
compression method from a qualitative point of view.
In another set of experiments, made with the aim of decreasing the complexity of the MLP and the training time, PCA has been used. Even though PCA is a lossy compression method, its use for cardiac pattern analysis and compression, keeping the first principal components, does not involve significant distortion, which has been confirmed by the good results obtained for cardiac pattern classification. Preserving only the first 19 principal components resulting from applying PCA to the original pattern matrix, and using these components for training and testing the MLP, results similar to those obtained with the actual patterns have been obtained. The results are presented in Table X. Table XI presents the uniform distribution of the classification in the eight classes used. When PCA has been applied to patterns obtained from the compressed signal, it has been found that the first 30 principal components are necessary in order to reconstruct the patterns with good quality. In this case, the MLP configuration becomes 30-50-8.
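A sketch of this dimensionality reduction with scikit-learn (placeholder data; 19 components for original patterns, 30 for patterns from the compressed signal):

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.randn(7500, 100)   # placeholder beat-pattern matrix
    pca = PCA(n_components=19)       # use n_components=30 for reconstructed beats
    scores = pca.fit_transform(X)    # inputs for the 19-50-8 (or 30-50-8) MLP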

TABLE X
RESULTS REGARDING PATTERN CLASSIFICATION WITH VARIOUS MLP
CONFIGURATIONS USING PCA


TABLE XI
CONFUSION MATRIX FOR ORIGINAL PATTERN CLASSIFICATION WITH A 19-50-8 MLP USING PCA (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

IV. DISCUSSION

In the following, a short comparison of our findings with previously reported results is given. To the authors' knowledge, the number of papers dealing with ECG classification into a higher number of classes is rather small, most of them treating the problem of simultaneous detection or of the classification of a few pathologies. Among them, De Chazal [21] reports a classification accuracy of 97.4% for five classes (with a total of 15 pathologies), while Prasad [22] and Osowski [23], using wavelets and SVM, respectively, report 96%.

We may also remark that the proposed compression method has been validated through reconstruction error measurements using PRD, PRDN, RMS, SNR, and QS, as well as by the visual inspection of a cardiologist, with whom the value of k in the TH formula has been established based on the results in Table IV.

Since the objective of the method was not that of finding an optimal method for classifying heartbeats but that of verifying the compression quality using classifiers, we consider that classification rates of about 90% validate a good compression quality from the distortion point of view. We are aware that better classification results can be obtained using other NN architectures or classification algorithms.

A comparison with the classification results reported in the literature is rather difficult to make due, among others, to the databases used. The accuracy of the classification used for the validation of the compression method compares favorably with other compression techniques (JPEG2000 [6], wavelet [7], SPIHT [8], Djohn [9], Hilton [9], AZTEC [24], TP [24], CORTES [24], SAPA [24]), as seen from Table V, where the PRD, CR, and QS for record no. 117 are presented, and from Table VI, where the mean values of the same indices are given.
V. CONCLUSION


A new algorithm for ECG signal compression based on local extreme extraction, adaptive hysteretic filtering, and LZW coding has been presented.
The algorithm was tested for the compression of eight of the most frequent normal and pathological types of cardiac beats in ECG signals from the MIT-BIH database and has been validated using neural networks trained with original heartbeat patterns, including PCA-based variants, and tested with the reconstructed signals. The mean value of the CR for the 24 records analyzed was 18.27:1 and the mean PRD was 1.17%, all clinical information being preserved, as validated through visual inspection of all cases by the cardiologist. The method is fast and easy to implement.
A new compression measure called the quality score, which takes into account both the reconstruction errors and the compression ratio, has been proposed.
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers for
their help in improving the quality of this paper.
REFERENCES

[1] W. C. Mueller, "Arrhythmia detection program for an ambulatory ECG monitor," Biomed. Sci. Instrum., no. 14, pp. 81-85, 1978.
[2] J. R. Cox, F. M. Nolle, H. A. Fozzard, and C. G. Oliver, "AZTEC, a preprocessing program for real-time ECG rhythm analysis," IEEE Trans. Biomed. Eng., vol. BME-15, no. 4, pp. 128-129, Apr. 1968.
[3] J. P. Abenstein and W. J. Tompkins, "A new data reduction algorithm for real-time ECG analysis," IEEE Trans. Biomed. Eng., vol. BME-29, no. 1, pp. 43-48, Jan. 1982.
[4] M. Ishijima, S. B. Shin, G. H. Hostetter, and J. Sklansky, "Scan-along polygon approximation for data compression of electrocardiograms," IEEE Trans. Biomed. Eng., vol. BME-30, no. 11, pp. 723-729, Nov. 1983.
[5] D. A. Huffman, "A method for the construction of minimum-redundancy codes," Proc. IRE, vol. 40, no. 9, pp. 1098-1101, 1952.
[6] A. Bilgin, M. W. Marcellin, and M. I. Altbach, "Compression of electrocardiogram signals using JPEG2000," IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 833-840, Nov. 2003.
[7] A. Al-Shrouf, M. Abo-Zahhad, and S. M. Ahmed, "A novel compression algorithm for electrocardiogram signal based on the linear prediction of the wavelet coefficients," Digit. Signal Process., vol. 13, pp. 604-622, 2003.
[8] Z. Lu, D. Y. Kim, and W. A. Pearlman, "Wavelet compression of ECG signals by the set partitioning in hierarchical trees (SPIHT) algorithm," IEEE Trans. Biomed. Eng., vol. 47, no. 7, pp. 849-856, Jul. 2000.
[9] M. L. Hilton, "Wavelet and wavelet packet compression of electrocardiograms," IEEE Trans. Biomed. Eng., vol. 44, no. 5, pp. 394-402, May 1997.
[10] P. S. Hamilton, "Adaptive compression of the ambulatory electrocardiogram," Biomed. Instrum. Technol., vol. 27, no. 1, pp. 56-63, Jan. 1993.
[11] A. Cohen, P. M. Poluta, and R. Scott-Millar, "Compression of ECG signals using vector quantization," in Proc. IEEE-90 S. A. Symp. Commun. Signal Process., 1990, pp. 45-54.
[12] R. W. McCaughern, A. M. Rosie, and F. C. Monds, "Asynchronous data compression techniques," in Proc. Purdue Centennial Year Symp. Inf. Process., Apr. 1969, vol. 2, pp. 525-531.
[13] Y. Zigel, A. Cohen, and A. Katz, "ECG signal compression using analysis by synthesis coding," IEEE Trans. Biomed. Eng., vol. 47, no. 10, pp. 1308-1316, Oct. 2000.
[14] Y. Zigel, A. Cohen, and A. Katz, "The weighted diagnostic distortion (WDD) measure for ECG signal compression," IEEE Trans. Biomed. Eng., vol. 47, no. 11, pp. 1422-1430, Nov. 2000.

[15] M. N. Fira and L. Goras, "On a compression algorithm for ECG signals," presented at the 13th Eur. Signal Process. Conf. (EUSIPCO), Antalya, Turkey, Sep. 2005.
[16] E. A. Giakoumakis and G. Papakonstantinou, "An ECG data reduction algorithm," Comput. Cardiol., pp. 675-677, 1986.
[17] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing. Cambridge, U.K.: Cambridge Univ. Press, 1992, ch. 14, pp. 650-655.
[18] K. R. Rao and P. C. Yip, The Transform and Data Compression Handbook. Boca Raton, FL: CRC Press, 2001.
[19] M. Crochemore and T. Lecroq, "Text data compression algorithms," in Algorithms and Theory of Computation Handbook, M. J. Atallah, Ed. Boca Raton, FL: CRC Press, 1998.
[20] M. N. Fira and L. Goras, "The R-wave detection with low computation complexity based on the Pan-Tompkins algorithm," in Buletinul Institutului Politehnic din Iasi, Tomul L (LIV), Fasc. 3-4, Electrotehnica, Energetica, Electronica, 2004.
[21] P. de Chazal, M. O'Dwyer, and R. B. Reilly, "Automatic classification of heartbeats using ECG morphology and heartbeat interval features," IEEE Trans. Biomed. Eng., vol. 51, no. 7, pp. 1196-1206, Jul. 2004.
[22] G. K. Prasad and J. S. Sahambi, "Classification of ECG arrhythmias using multi-resolution analysis and neural networks," in Proc. Convergent Technol. Asia-Pacific Region (TENCON), 2003, vol. 21, pp. 15-17.
[23] S. Osowski, L. T. Hoai, and T. Markiewicz, "Support vector machine-based expert system for reliable heartbeat recognition," IEEE Trans. Biomed. Eng., vol. 51, no. 4, pp. 582-589, Apr. 2004.
[24] S. M. S. Jalaleddine, "ECG data compression techniques—A unified approach," IEEE Trans. Biomed. Eng., vol. 37, no. 4, pp. 329-343, Apr. 1990.

Catalina Monica Fira received the B.S. and M.S. degrees in biomedical engineering from the Gr. T. Popa University of Medicine and Pharmacy, Iasi, Romania, in 2001 and 2002, respectively, and the Ph.D. degree in electronics engineering from the Gh. Asachi Technical University, Iasi, Romania, in 2006.
She is now with the Institute for Theoretical Informatics of the Romanian Academy, Iasi, Romania. Her research interests include electrical heart activity analysis, biomedical signal processing, and neural networks.

Liviu Goras (M'92-SM'05) was born in Iasi, Romania, in 1948. He received the Diploma Engineer and Ph.D. degrees in electrical engineering from the Gh. Asachi Technical University (TU), Iasi, Romania, in 1971 and 1978, respectively.
Since 1973, he has successively been Assistant, Lecturer, and Associate Professor, and, since 1994, Professor with the Faculty of Electronics and Telecommunications, TU Iasi. From September 1994 to May 1995, he was on leave as a Senior Fulbright Scholar with the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley. His main research interests include nonlinear circuit and system theory, cellular neural networks, and signal processing. He is the main organizer of the International Symposium on Signals, Circuits and Systems (ISSCS), held in Iasi every two years since 1993.
Dr. Goras was the recipient of the IEEE Third Millennium Medal.
