
Time Frequency Analysis and Wavelet Transform Tutorial

Haar Transform and Its Applications


Pei-Yu Chao
D00945005

Abstract
The Haar transform is one of the simplest and most basic transformations from the
space/time domain to a local frequency domain, revealing the space/time-variant
spectrum. Its attractive features, including fast implementation and the ability to
analyse the local features of a signal, make it a potential candidate in modern
electrical and computer engineering applications, such as signal and image
compression. In this tutorial, the mathematics and applications of the Haar transform
are explored.

Chapter 1 Introduction
The Haar transform was proposed in 1910 by the Hungarian mathematician Alfred Haar
[1]. It is one of the earliest transforms proposed.

Conventionally, the Fourier transform has been used extensively to analyse the spectral
content of a signal. However, the Fourier transform cannot represent a non-stationary
signal adequately, whereas a time-frequency analysis function such as the Haar
transform is effective because it provides a simple approach for analysing the local
aspects of a signal.

The Haar transform uses the Haar functions as its basis. The Haar functions form an
orthonormal, rectangular family. Whereas the Fourier basis functions differ only in
frequency, the Haar functions vary in both scale and position.

The Haar transform is compact, dyadic and orthonormal. It serves as a prototype for
the wavelet transform and is closely related to the discrete Haar wavelet transform [3].

Chapter 2 The Haar Transform


2.1 The Haar Function
The family of $N$ Haar functions $h_k(t)$, $k = 0, 1, \ldots, N-1$, is defined on the
interval $0 \le t < 1$ [2]. The shape of the Haar function of index $k$ is determined
by two parameters, $p$ and $q$, where

$$k = 2^p + q - 1,$$

with $0 \le p \le \log_2 N - 1$; $q = 0$ or $1$ for $p = 0$ and $1 \le q \le 2^p$ for
$p \ne 0$; and $k$ is in the range $0 \le k \le N - 1$.

When $k = 0$, the Haar function is defined as the constant

$$h_0(t) = \frac{1}{\sqrt{N}};$$

when $k > 0$, the Haar function is defined as

$$h_k(t) = \frac{1}{\sqrt{N}}
\begin{cases}
2^{p/2}, & (q-1)/2^p \le t < (q-\tfrac{1}{2})/2^p \\
-2^{p/2}, & (q-\tfrac{1}{2})/2^p \le t < q/2^p \\
0, & \text{otherwise.}
\end{cases}$$

From the above equation, one can see that p determines the amplitude and width of
the non-zero part of the function, while q determines the position of the non-zero part
of the Haar function [2].
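
For illustration, the following is a minimal Python (NumPy) sketch of the Haar basis function defined above. The helper name haar_function and the sampling grid are chosen for illustration only.

import numpy as np

def haar_function(k, t, N):
    """Illustrative evaluation of the Haar basis function h_k(t) on [0, 1).

    k : index of the basis function, 0 <= k <= N - 1
    t : sample point(s) in [0, 1)
    N : number of basis functions (a power of two)
    """
    t = np.asarray(t, dtype=float)
    if k == 0:
        # h_0(t) is the constant 1/sqrt(N)
        return np.full_like(t, 1.0 / np.sqrt(N))
    # Decompose k = 2**p + q - 1 with p >= 0 and 1 <= q <= 2**p
    p = int(np.floor(np.log2(k)))
    q = k - 2**p + 1
    out = np.zeros_like(t)
    out[((q - 1) / 2**p <= t) & (t < (q - 0.5) / 2**p)] = 2**(p / 2)
    out[((q - 0.5) / 2**p <= t) & (t < q / 2**p)] = -2**(p / 2)
    return out / np.sqrt(N)

# Sample h_5(t) on an 8-point grid (k = 5 corresponds to p = 2, q = 2)
t = np.arange(8) / 8.0
print(haar_function(5, t, 8))

Sampling each $h_k$ on an $N$-point grid in this way reproduces the rows of the normalised Haar matrix discussed in the next section.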

2.2 The Haar Matrix


The discrete Haar functions form the basis of the Haar matrix $H$. The un-normalised
$2N$-point Haar matrix can be generated recursively as

$$H_{2N} = \begin{bmatrix} H_N \otimes [\,1 \;\; 1\,] \\ I_N \otimes [\,1 \; -1\,] \end{bmatrix},
\qquad H_1 = [\,1\,],$$

where $I_N$ is the $N \times N$ identity matrix and $\otimes$ is the Kronecker product.

The Kronecker product $A \otimes B$, where $A$ is an $m \times n$ matrix and $B$ is a
$p \times q$ matrix, is expressed as the $mp \times nq$ block matrix

$$A \otimes B = \begin{bmatrix}
a_{11}B & \cdots & a_{1n}B \\
\vdots & \ddots & \vdots \\
a_{m1}B & \cdots & a_{mn}B
\end{bmatrix}.$$

When each row of $H_N$ is normalised to unit length, row $k$ of $H_N$ is the Haar
function $h_k(t)$ sampled at $t = 0, 1/N, \ldots, (N-1)/N$.

The normalised Haar matrix is real and orthogonal, i.e.,

$$H = H^*, \qquad H^{-1} = H^T, \qquad \text{i.e.,} \qquad H H^T = I.$$

An un-normalised 8-point Haar matrix $H_8$ is shown below [3]:

$$H_8 = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{bmatrix}.$$

From the definition of the Haar matrix H, one can observe that, unlike the Fourier
transform matrix, H has only real elements (i.e., 1, -1 or 0) and is non-symmetric.

The first row of H measures the average value of the input vector, and the second row
measures a low-frequency component. The next two rows are sensitive to the first and
second halves of the input vector respectively, which correspond to moderate-frequency
components. The remaining four rows are sensitive to four sections of the input vector,
which correspond to high-frequency components. Fig. 1 shows the Haar function
corresponding to each row of H. Notice how the width and location of the Haar
functions change: the Haar functions with narrower width are responsible for analysing
the higher-frequency content of the input signal.

Fig. 1 Haar functions composing the 8-point Haar transform matrix [2].

The inverse of the $2^k$-point Haar matrix follows from its orthogonality: for the
normalised Haar matrix, $H^{-1} = H^T$. For the un-normalised matrix, each row of the
transpose must also be divided by its squared norm; for the un-normalised 8-point
Haar transform [3],

$$H_8^{-1} = H_8^T \,\mathrm{diag}\!\left(\tfrac{1}{8}, \tfrac{1}{8}, \tfrac{1}{4},
\tfrac{1}{4}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}\right).$$
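
To make the recursive Kronecker construction concrete, here is a small Python (NumPy) sketch; the helper name haar_matrix is illustrative. It builds the un-normalised matrix shown above and checks the orthogonality relations.

import numpy as np

def haar_matrix(N, normalized=False):
    """Illustrative recursive construction of the N x N Haar matrix
    (N a power of two) using Kronecker products, as sketched above."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        n = H.shape[0]
        top = np.kron(H, [1.0, 1.0])              # averaging (approximation) rows
        bottom = np.kron(np.eye(n), [1.0, -1.0])  # differencing (detail) rows
        H = np.vstack([top, bottom])
    if normalized:
        # Scale each row to unit length so that H @ H.T = I
        H = H / np.linalg.norm(H, axis=1, keepdims=True)
    return H

H8 = haar_matrix(8)                       # un-normalised 8-point Haar matrix
print(H8.astype(int))
print(np.round(H8 @ H8.T))                # diagonal: diag(8, 8, 4, 4, 2, 2, 2, 2)
Hn = haar_matrix(8, normalized=True)
print(np.allclose(Hn @ Hn.T, np.eye(8)))  # True: the normalised matrix is orthonormal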

2.3 The Haar Transform


The Haar transform $y$ of an $N$-point input function $x$ is the $N$-element vector

$$y = H x.$$

The Haar transform thus multiplies the input function by a Haar matrix containing
Haar functions of different widths at different locations. A worked numerical example
is given at the end of this section.

The Haar transform is performed in levels. At each level, the Haar transform
decomposes a discrete signal into two components, each half of its length: an
approximation (or trend) component and a detail (or fluctuation) component. The first-level
approximation $a^1 = (a_1, a_2, \ldots, a_{N/2})$ is defined as

$$a_m = \frac{f_{2m-1} + f_{2m}}{\sqrt{2}}, \qquad m = 1, 2, \ldots, N/2,$$

where $f$ is the input signal. The multiplication by $1/\sqrt{2}$ ensures that the
Haar transform preserves the energy of the signal. The values of $a_m$ represent
scaled averages of successive pairs of $f$ values.

The first-level detail $d^1 = (d_1, d_2, \ldots, d_{N/2})$ is defined as

$$d_m = \frac{f_{2m-1} - f_{2m}}{\sqrt{2}}, \qquad m = 1, 2, \ldots, N/2.$$

The values of $d_m$ represent scaled differences of successive pairs of $f$ values.

The first-level Haar transform is denoted as $f \mapsto (a^1 \mid d^1)$. The inverse of
this transformation can be achieved by

$$f = \left(\frac{a_1 + d_1}{\sqrt{2}}, \frac{a_1 - d_1}{\sqrt{2}}, \ldots,
\frac{a_{N/2} + d_{N/2}}{\sqrt{2}}, \frac{a_{N/2} - d_{N/2}}{\sqrt{2}}\right).$$

At each successive level of the Haar transform, the approximation and detail
components are calculated in the same way, except that they are calculated from the
previous approximation component only.

An example (from [4]): for the signal $f = (4, 6, 10, 12, 8, 6, 5, 5)$, the first-level
approximation and detail components are

$$a^1 = (5\sqrt{2}, 11\sqrt{2}, 7\sqrt{2}, 5\sqrt{2}), \qquad
d^1 = (-\sqrt{2}, -\sqrt{2}, \sqrt{2}, 0).$$
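
A minimal Python (NumPy) sketch of a single level of this transform and its inverse, assuming the trend/fluctuation convention above; the helper names haar_level and inverse_haar_level are illustrative.

import numpy as np

def haar_level(f):
    """One level of the Haar transform: trend a and fluctuation d,
    following the (f[2m-1] +/- f[2m]) / sqrt(2) convention used above."""
    f = np.asarray(f, dtype=float)
    a = (f[0::2] + f[1::2]) / np.sqrt(2)   # approximation (trend)
    d = (f[0::2] - f[1::2]) / np.sqrt(2)   # detail (fluctuation)
    return a, d

def inverse_haar_level(a, d):
    """Invert one level: interleave (a+d)/sqrt(2) and (a-d)/sqrt(2)."""
    f = np.empty(2 * len(a))
    f[0::2] = (a + d) / np.sqrt(2)
    f[1::2] = (a - d) / np.sqrt(2)
    return f

f = np.array([4.0, 6, 10, 12, 8, 6, 5, 5])
a1, d1 = haar_level(f)
print(a1)   # [ 7.07 15.56  9.90  7.07]  ~ (5*sqrt(2), 11*sqrt(2), 7*sqrt(2), 5*sqrt(2))
print(d1)   # [-1.41 -1.41  1.41  0.  ]  ~ (-sqrt(2), -sqrt(2), sqrt(2), 0)
print(np.allclose(inverse_haar_level(a1, d1), f))   # True: the transform is invertible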

Chapter 3 Application
3.1 Signal Compression
Let us define the energy $E$ of a signal $x$ as the sum of the squares of its values, i.e., $E_x = \sum_n x_n^2$.

For the example signal from Section 2.3, $f = (4, 6, 10, 12, 8, 6, 5, 5)$, we obtain
$E_f = 446$, $E_{a^1} = 440$ and $E_{d^1} = 6$.

From this example, we can see that the Haar transform preserves energy, i.e.,
$E_f = E_{a^1} + E_{d^1}$. Furthermore, we can see that the energy of the approximation
component is much higher than the energy of the detail component. In the first level of
transformation, the energy of the approximation $a^1$ is about 98.7% of the energy of
the signal $f$, and in the second level of transformation, the energy of the
approximation $a^2$ is about 89.7% of the energy of $f$.

This means that, after the first level of the Haar transform, 98.7% of the energy is
concentrated into a signal $a^1$ that is half the length of $f$, and after the second
level, 89.7% of the energy is concentrated into a signal $a^2$ that is a quarter of the
length of $f$.

This is called the compaction of energy [4], and it occurs whenever the magnitude
of the detail component is significantly smaller than that of the approximation
component. Thus, compression can be achieved without seriously affecting the
information content of the original signal.
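
The compaction of energy can be checked numerically. The following Python sketch, assuming the same trend/fluctuation convention as above, computes the energy fractions for the example signal; the helper names are illustrative.

import numpy as np

def energy(x):
    """Energy E of a signal: the sum of its squared values."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def haar_level(f):
    """One level of the Haar transform (trend and fluctuation), as above."""
    f = np.asarray(f, dtype=float)
    return (f[0::2] + f[1::2]) / np.sqrt(2), (f[0::2] - f[1::2]) / np.sqrt(2)

f = np.array([4.0, 6, 10, 12, 8, 6, 5, 5])
a1, d1 = haar_level(f)
a2, d2 = haar_level(a1)

print(energy(f), energy(a1) + energy(d1))   # 446.0 446.0 -- energy is preserved
print(energy(a1) / energy(f))               # ~0.987: first-level compaction
print(energy(a2) / energy(f))               # ~0.897: second-level compaction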

There are two basic categories of compression techniques [4]. The first category is
lossless compression. As the name suggests, the de-compressed signal is error-free.
Typical lossless methods are Huffman compression, LZW compression, arithmetic
compression and run-length compression.

The other type is lossy compression. Even though this type of compression produces
errors in the de-compressed signal, the error should only be marginal. The advantage
of lossy techniques is that a higher compression ratio can be achieved, compared to
lossless compression techniques. Compression based on thresholding the Haar
transform coefficients, as described below, is a type of lossy compression.

The steps involved in a simple signal compression are described in Fig. 2.

Fig. 2 Block diagram illustrating signal compression
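
A minimal Python (NumPy) sketch of the pipeline in Fig. 2 (multi-level transform, thresholding, inverse transform). The test signal, the threshold value and the helper names here are illustrative and are not taken from [4].

import numpy as np

def haar_multilevel(f, levels):
    """Multi-level Haar transform: repeatedly split the running trend."""
    c = np.asarray(f, dtype=float).copy()
    n = len(c)
    for _ in range(levels):
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2)   # trend of the current level
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2)   # fluctuation of the current level
        c[:n // 2], c[n // 2:n] = a, d
        n //= 2
    return c

def inverse_haar_multilevel(c, levels):
    """Undo the multi-level transform one level at a time."""
    c = np.asarray(c, dtype=float).copy()
    n = len(c) >> levels                          # length of the coarsest trend
    for _ in range(levels):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

# Compression as in Fig. 2: transform, discard small coefficients, invert.
f = np.sin(np.linspace(0, 4 * np.pi, 1024))       # a smooth test signal
c = haar_multilevel(f, levels=10)
threshold = 0.35                                   # illustrative threshold
c_kept = np.where(np.abs(c) >= threshold, c, 0.0)
f_hat = inverse_haar_multilevel(c_kept, levels=10)
print(np.count_nonzero(c_kept), "of", c.size, "coefficients retained")
print("max reconstruction error:", np.max(np.abs(f - f_hat)))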


This algorithm is applied to the signal shown in Fig. 3(a), and the outcome of a 10-level
Haar transform is shown in Fig. 3(b). A threshold of 0.3536 is chosen based on the
cumulative energy distribution of the Haar-transformed signal. Thereafter, the
compressed signal is obtained via the inverse Haar transform, as shown in Fig. 3(c).
The compressed signal is almost identical to the original signal, and the maximum
error calculated over all values of the approximated signal is very small [4].

Hence, a compression factor of 20 with minimal error is achieved.

Fig. 3 Signals during the steps of compression. (a) The original signal, (b) 10-level
Haar transform of the original signal, and (c) the compressed signal (inverse Haar
transform)

When the same algorithm is applied to the signal shown in Fig. 4(a), the performance
of the signal compression is poorer. The compressed signal, as shown in Fig. 4(c), has
higher error and a lower compression ratio (10:1).

Fig. 4 Signals during the steps of compression. (a) The original signal, (b) 12-level
Haar transform of the original signal, and (c) the compressed signal (inverse Haar
transform)

3.2 De-noising
When a signal is received after transmission over some distance, it is often distorted
by noise. De-noising is a process used to recover the noise-buried signal; for speech,
for example, it enhances the recognisability of the speech signal.

The steps involved in a simple de-noising process are described in Fig. 5. After the
Haar transform is performed, thresholding is applied, i.e., any value of the transformed
signal that lies below the noise threshold is set to 0. Thereafter, the inverse Haar
transform is performed to reveal the approximated signal.

Fig. 5 Block diagram illustrating de-noising
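
The de-noising pipeline in Fig. 5 differs from the compression pipeline in Fig. 2 only in how the threshold is chosen. A short sketch follows, reusing the haar_multilevel and inverse_haar_multilevel helpers from the compression sketch above; the test signal, noise level and threshold are illustrative.

# Reuses haar_multilevel / inverse_haar_multilevel from the sketch in Section 3.1.
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 1024))
noisy = clean + 0.1 * rng.standard_normal(1024)    # additive Gaussian noise

c = haar_multilevel(noisy, levels=10)
threshold = 0.25                                    # illustrative noise threshold
c_denoised = np.where(np.abs(c) >= threshold, c, 0.0)
denoised = inverse_haar_multilevel(c_denoised, levels=10)

print("noisy RMS error:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMS error:", np.sqrt(np.mean((denoised - clean) ** 2)))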


The two signals from Fig. 3(a) and Fig. 4(a) are distorted with additive noise, as
shown in Fig. 6(a) and Fig. 7(a). The de-noising process is applied to the noise-distorted
signals. It can be clearly observed that there are a large number of fluctuations
in the Haar-transformed signals, which are contributed by the random noise. After
thresholding and the inverse Haar transform, the de-noised signals are revealed, as
shown in Fig. 6(d) and Fig. 7(d).

Fig. 6 Signals during the steps of de-noising. (a) The original signal (noise-distorted),
(b) 10-level Haar transform of the original signal; the two horizontal lines represent
the noise threshold = 0.25. (c) The signal after thresholding, and (d) the de-noised
signal (inverse Haar transform)

Fig. 7 Signals during the steps of de-noising. (a) The original signal (noise-distorted),
(b) 12-level Haar transform of the original signal; the two horizontal lines represent
the noise threshold = 0.2. (c) The signal after thresholding, and (d) the de-noised
signal (inverse Haar transform)

It was found that Fig. 6(d) is a closer approximation of the original, non-noise-distorted
signal than Fig. 7(d). This is because, after the Haar transform, the energy of the signal
in Fig. 6(a) is concentrated into a few high-energy values, while the additive noise is
spread over many low-energy values. Therefore, it is possible to segregate the signal
component from the noise component. In the case shown in Fig. 7, the energy of the
signal is not concentrated into a few high-energy values, i.e., it is spread across several
values, and the noise contaminates those transformed signal values, which makes the
thresholding technique less effective.

3.3 Image Compression


The Haar transform can be used to compress an image of size $M \times N$, where both
$M$ and $N$ are multiples of two. Image compression is an extension of one-dimensional
signal compression. To illustrate the process, a simple example from [7] is summarised
below for a two-dimensional input signal matrix $S$.

Firstly, the first-level Haar transform is applied to the rows of the input signal $S$;
the first approximation and detail matrices of the rows are obtained.

Secondly, the first-level Haar transform is applied to the columns of the resultant
matrix; the first approximation and detail matrices of the columns are obtained.

The following notation is used:

A: approximation area, which includes information about the average of the image.

H: horizontal area, which includes information about the vertical edges/details in
the image.

V: vertical area, which includes information about the horizontal edges/details in
the image.

D: diagonal area, which includes information about the diagonal details, e.g.,
corners, in the image.

From Section 3.1 we know that, after the Haar transform, the approximation component
contains most of the energy. Hence, it is clear that excluding information from the
approximation area would result in the biggest distortion to the compressed image,
while excluding information from the diagonal area results in the least distortion to the
compressed image.
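
A minimal Python (NumPy) sketch of one level of the 2D transform described above, producing the A, H, V and D sub-bands (row transform followed by column transform). The helper name and the toy 4 x 4 matrix are illustrative; this is not the example matrix from [7].

import numpy as np

def haar_2d_level(S):
    """One level of the 2D Haar transform: rows first, then columns,
    giving the A (approximation), H, V and D sub-bands described above."""
    S = np.asarray(S, dtype=float)
    # Transform the rows
    L = (S[:, 0::2] + S[:, 1::2]) / np.sqrt(2)   # row trends
    R = (S[:, 0::2] - S[:, 1::2]) / np.sqrt(2)   # row fluctuations
    # Transform the columns of both halves
    A = (L[0::2, :] + L[1::2, :]) / np.sqrt(2)   # approximation
    V = (L[0::2, :] - L[1::2, :]) / np.sqrt(2)   # vertical area (horizontal edges)
    H = (R[0::2, :] + R[1::2, :]) / np.sqrt(2)   # horizontal area (vertical edges)
    D = (R[0::2, :] - R[1::2, :]) / np.sqrt(2)   # diagonal area
    return A, H, V, D

S = np.arange(16, dtype=float).reshape(4, 4)     # toy 4 x 4 "image"
A, H, V, D = haar_2d_level(S)
print(A)   # most of the energy ends up in the approximation sub-band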

The following figures were extracted from [7].

Fig. 8 (Top) Original image, (Left) 2-level Haar transform, (Right) Reconstructed image.

Fig. 9 (Top) Original image, (Left) 2-level Haar transform, (Right) Reconstructed image.

Fig. 10 (Top) Original image, (Left) 2-level Haar transform, (Right) Reconstructed image.

The error (MSE) of the reconstructed images is summarised in the table below [7]:

          Fig. 8     Fig. 9     Fig. 10
MSE       167.469    125.777    264.772

It is clear that as the complexity of the image increases (the Lena image is the most
complex of these three), the error of the reconstructed image becomes greater. Hence,
the performance of the Haar transform is limited.

Chapter 4 Conclusion
The background and derivation of the Haar transform are presented in the first half of
this tutorial, where the simplicity and localised property of the Haar transform can be
observed. The applications of the Haar transform are presented in the second half,
where the working process, examples and the performance of the Haar transform in
each of these applications are demonstrated. From the results shown in the application
section, it is clear that the Haar transform has its limitations, in that it may not be
suitable for processing certain types of signals. Nevertheless, the Haar transform is a
good time-variant spectral tool which can be used for applications that require high
memory efficiency.


References
[1] R.S. Stanković and B.J. Falkowski. The Haar wavelet transform: its status and
achievements. Computers and Electrical Engineering, Vol.29, No.1, pp.25-44,
January 2003.
[2] R. Wang. Haar Transform. Internet Web Address:
http://fourier.eng.hmc.edu/e161/lectures/Haar/index.html, December 04, 2008.


[3] J.J. Ding. Time-Frequency Analysis and Wavelet Transform. Lecture Notes,
National Taiwan University.
[4] J.S. Walker. A Primer on Wavelets and their Scientific Application. CRC Press
LLC, 1999.
[5] M. Alwakeel and Z. Shaaban. Face Recognition Based on Haar Wavelet
Transform and Principal Component Analysis via Levenberg-Marquardt
Backpropagation Neural Network. European Journal of Scientific Research,
Vol.42, No.1, pp.25-31, 2010.
[6] P. Porwik and A. Lisowska. The Haar-Wavelet Transform in Digital Image
Processing: Its Status and Achievements. Machine graphics & vision, Vol.13,
No.1-2, pp. 79-98, 2004.
[7] A. Bhardwaj and R. Ali. Image Compression Using Modified Fast Haar
Wavelet Transform. World Applied Science Journal, Vol.7, No.5, pp.647-653,
2009.

