
Time Frequency Distribution using Neural Networks

Imran Shafi*, Jamil Ahmad, Syed Ismail Shah and Faisal M Kashif†
imran.shafi@gmail.com, jamil@iqraisb.edu.pk, ismail@iqraisb.edu.pk, fmkahsif@mit.edu

Abstract

In this paper we present a method for obtaining a Time Frequency Distribution (TFD) of a signal whose frequency components vary with time. The method employs Neural Networks (NN) that are trained using the spectrograms of several training signals as input and, as targets, TFDs that are highly concentrated along the instantaneous frequencies of the individual components present in the signal. The trained neural network is then presented with the spectrograms of unknown signals, and highly concentrated TFDs are obtained.

1. Introduction

It has been shown that the bilinear distributions, including the spectrogram, result in a blurred version of the true time frequency distribution [1, 2]. Several methods have been proposed to remove this blurring effect [2, 3, 4]. Another area of research has been the design of TFDs concentrated along the instantaneous frequencies of the individual components present in the signal [5]. In both approaches the main aim has been to obtain a TFD Q(n, ω) that is free of undesired cross components and highly concentrated along the Instantaneous Frequency (IF). Certain constraints, such as the marginals, are also imposed in some cases [4]. However, the main aim has been to obtain a highly concentrated positive TFD. Another development in this area has been to filter out different portions of the signal using some kind of time-varying filter, compute the TFD of the individual components, and then combine these individual TFDs to obtain the combined TFD of the signal [6]. However, this requires knowledge of the components present in the signal.

In this paper the entropy [9] of Q(n, ω) is considered as a measure of concentration, given by:

E_Q = − Σ_{n=0}^{N−1} ∫_{−π}^{π} Q(n, ω) log₂ Q(n, ω) dω ≥ 0    (1.1)

The lower the entropy of a distribution, the more concentrated it is.

2. The Method

While a combination of spectrograms, or a spectrogram with adaptive window selection [10], may result in a TFD with a reduced blurring effect, the job is not finished, as some blurring still remains. The goal is to obtain a TFD that is free of any blurring effect. Furthermore, no knowledge of the components is assumed ahead of time. For this, the spectrograms of several known signals are used as input to train a neural network (Figures 2 and 3). As a target, a TFD is used that is constructed by adding the expected TFD of each of the individual components present in the signal. Figure 1 outlines the technique used to attain cleaner TFDs. The expected individual TFDs are constructed by considering the IF of each component present in the signal. Thus the target TFD is not only highly concentrated along the individual IFs, but the results obtained are also free of cross components.

The technique takes advantage of the learning capabilities of the NN to estimate a sharper version of the original spectrogram of the signals at hand. Even though good performance was expected from the model, the speed of image convergence was impressive under difficult processing conditions.

3. NN Architecture and Algorithm

In this paper binarized TFDs are considered, and the Levenberg-Marquardt [7] back-propagation feed-forward neural network architecture is used, with 5 neurons in the single hidden layer. The Levenberg-Marquardt training algorithm can train any network as long as its weight, net input, and transfer functions have derivative functions.
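The Levenberg-Marquardt step used by this kind of training, jj = jXᵀ·jX, je = jXᵀ·E, dX = −(jj + I·μ)⁻¹·je with μ adapted by MU_INC/MU_DEC, can be sketched on a toy curve-fitting problem. This is an illustrative Python/numpy sketch, not the paper's implementation (which presumably used MATLAB's trainlm); the MU_INC = 10 and MU_DEC = 0.1 values are assumptions matching common defaults.

```python
import numpy as np

def lm_fit(f, jac, x0, target, mu=1e-3, mu_inc=10.0, mu_dec=0.1, iters=50):
    """Fit parameters x so that f(x) ~ target with Levenberg-Marquardt."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        e = f(x) - target                 # error vector E
        jX = jac(x)                       # Jacobian of the error w.r.t. x
        jj = jX.T @ jX                    # (3.1)
        je = jX.T @ e                     # (3.2)
        while True:
            dx = -np.linalg.solve(jj + mu * np.eye(len(x)), je)  # (3.3)
            e_new = f(x + dx) - target
            if np.sum(e_new**2) < np.sum(e**2):
                x, mu = x + dx, mu * mu_dec   # improved: accept, decrease mu
                break
            mu *= mu_inc                      # worse: raise mu and retry
            if mu > 1e10:                     # no further improvement possible
                return x
    return x

# Toy problem: fit y = a*t + b; the Jacobian of the error is [t, 1].
t = np.linspace(0, 1, 20)
f = lambda p: p[0] * t + p[1]
jac = lambda p: np.stack([t, np.ones_like(t)], axis=1)
p = lm_fit(f, jac, [0.0, 0.0], target=3.0 * t + 2.0)
print(np.round(p, 3))  # close to [3. 2.]
```

The inner loop mirrors the μ adaptation described in this section: μ grows until a step reduces the error, then shrinks once the step is accepted.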

* Corresponding author

† Faisal M Kashif was with Iqra University Islamabad Campus, H-9 Islamabad, Pakistan. He is now with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge MA 02139, USA.
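The target-definition procedure of Section 3.1 — thresholding the grayscale spectrogram to a binary image, locating where the blur starts and ends in each row, and equalizing every row's blur to the shortest width found — can be sketched as follows. The threshold value and the toy "spectrogram" are assumptions for illustration; the paper does not state the threshold it used.

```python
import numpy as np

def equalize_blur(spec, threshold=0.5):
    """Binarize a spectrogram and shrink each row's blur to the minimum width."""
    binary = (spec >= threshold).astype(int)
    runs = []
    for row in binary:
        idx = np.flatnonzero(row)            # indices of the 1's (the blur)
        runs.append((idx[0], idx[-1]) if idx.size else None)
    widths = [b - a + 1 for r in runs if r is not None for a, b in [r]]
    w_min = min(widths)                      # shortest blur among all rows
    out = np.zeros_like(binary)
    for i, r in enumerate(runs):
        if r is None:
            continue
        a, b = r
        centre = (a + b) // 2                # keep the run centred at w_min wide
        start = max(0, centre - w_min // 2)
        out[i, start:start + w_min] = 1
    return out

# Toy 3-row "spectrogram" with blur widths 5, 3 and 4.
spec = np.zeros((3, 12))
spec[0, 2:7] = 1.0
spec[1, 5:8] = 1.0
spec[2, 6:10] = 1.0
out = equalize_blur(spec)
print(out.sum(axis=1))  # every row reduced to the minimum width, 3
```

The recorded start/end indices are what allow the equalized runs to be mapped back to their positions in the target time-frequency image.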
Back propagation is used to calculate the Jacobian jX of performance with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg-Marquardt, as under:

jj = jXᵀ · jX    (3.1)

je = jXᵀ · E    (3.2)

dX = −(jj + I·μ)⁻¹ · je    (3.3)

where E is the vector of all errors and I is the identity matrix. The adaptive value μ is increased by MU_INC until the change above results in a reduced performance value. The change is then made to the network, and μ is decreased by MU_DEC.

The parameter MEM_REDUC indicates how to trade memory against speed when calculating the Jacobian jX. If MEM_REDUC is 1, the training algorithm runs the fastest, but it can require a lot of memory. Increasing MEM_REDUC to 2 cuts the memory required by a factor of two, but slows the algorithm somewhat. Higher values continue to decrease the amount of memory needed and to increase training times.

Unlike other training functions, the Levenberg-Marquardt training algorithm assumes that the network has the Mean Square Error (MSE) performance function. In back propagation, the gradient vector of the error surface is calculated.

3.1 The Target Definition

In this paper the spectrogram is considered as a grayscale image, which is converted to binary by setting some threshold value. The start of the blur, i.e. the transition from null to unity, is found in each row, along with the total width of the blur. The indices, i.e. the locations of the first variation and the last one, are also kept in memory. The total width of the blur in each row is made uniform by removing the overshoots of the blur. For example, if a vector of length 10 is found to be the shortest among all, then the rest are made equivalent to this minimum.

The total width of the blur (the vectors containing 1's) in each row is mapped onto the concentrated target TFD image, restoring the index values. This result is achieved through a simpler NN. The results thus obtained, in the form of vectors, are placed at the right positions to form the cleaner time-frequency image, which is highly concentrated and free of cross components. Figure 1 describes the above-mentioned steps graphically.

3.2 Neural Network Training

To train the NN, the spectrograms of two signals are used as input. The first is a sinusoidal Frequency Modulated (FM) signal, while the second is a signal with two parallel chirps. The first training signal is given by:

x(n) = e^{jπ(1/2 − ω(n))n}, where ω(n) = 0.1 sin(2πn/N)

While the second signal is given by:

y(n) = x₁(n) + x₂(n)

where x₁(n) = e^{jω₁(n)n} with ω₁(n) = πn/(4N), and x₂(n) = e^{jω₂(n)n} with ω₂(n) = π/3 + πn/(4N).

Here N represents the total number of points in the signal. The binary spectrograms of these signals are shown in Figures 2 and 3. The respective target time-frequency plane images are shown in Figures 4 and 5. Using the technique described in Section 2, the training of the above-mentioned neural network is performed.

4. Experimental Results

A bat echolocation chirp provides an excellent motivation for time-frequency based signal processing. Figure 6 shows the TFD obtained by the optimum kernel method [10]. The spectrogram (depicted in Figure 7) of the same echolocation chirp signal is used as an input to the trained neural network and, after processing it through the technique described above, the result is shown in Figure 8. As can be seen, it is highly concentrated. Furthermore, it has the lowest entropy of all, as shown in Table 1.

5. Conclusions

The article presented a method of computing informative and highly concentrated TFDs of signals whose frequency components vary with time. Experimental results demonstrate the effectiveness of the approach.

6. References

[1] L. Cohen, Time Frequency Analysis, Prentice-Hall, NJ, 1995.

[2] J. Pitton, P. Loughlin and L. Atlas, "Positive time-frequency distributions via maximum entropy deconvolution of the evolutionary spectrum," Proc. IEEE Intl. Conf. Acoust., Speech and Sig. Processing '93, Vol. IV, pp. 436-439, 1993.
[3] S. I. Shah, L. F. Chaparro and A. El-Jaroudi, "Generalized Transfer Function Estimation using Evolutionary Spectral Deblurring," IEEE Trans. on Signal Processing, Vol. 47, No. 8, pp. 2335-2339, August 1999.

[4] M. K. Emresoy and P. J. Loughlin, "Weighted Least Square Cohen-Posch Time-Frequency Distribution Functions," IEEE Trans. on Signal Processing, Vol. 46, No. 3, pp. 753-757, March 1998.

[5] L. Stankovic, "Highly Concentrated Time-Frequency Distributions: Pseudo Quantum Signal Representation," IEEE Trans. on Signal Processing, Vol. 45, No. 3, pp. 543-551, March 1997.

[6] R. Suleesathira, L. F. Chaparro and A. Akan, "Discrete Evolutionary Transform for Positive Time Frequency Signal Analysis," Journal of the Franklin Institute, Vol. 337, No. 4, pp. 347-364, July 2000.

[7] A. K. Jain, J. Mao and K. M. Mohiuddin, "Artificial Neural Networks: A Tutorial," IEEE Computer, Vol. 29, No. 3, pp. 31-44, March 1996.

[8] M. Bertoluzzo, G. S. Buja, S. Castellan and P. Fiorentin, "Neural Network Technique for the Joint Time-Frequency Analysis of Distorted Signals," IEEE Trans. on Industrial Electronics, Vol. 50, No. 6, Dec 2003.

[9] R. M. Gray, Entropy and Information Theory, Springer-Verlag, New York, 1990.

[10] R. G. Baraniuk and D. L. Jones, "Signal-Dependent Time-Frequency Analysis using a Radially Gaussian Kernel," Signal Processing, Vol. 32, pp. 263-284, June 1993.

Figure 1: Input image training explanation (block diagram showing the area of concern, pre-processing, the normalized images X1, X2, ---, Xj, and the target)

TABLE I
COMPARISON OF ENTROPY (E_Q)

Proposed technique using NN:  E_Q = 7.301 bits
Approach used by [10]:        E_Q = 11.826 bits
Spectrogram:                  E_Q = 12.798 bits
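The entropy comparison in Table I uses the concentration measure of equation (1.1); a discretized form can be sketched directly. The 64×64 grids below are illustrative stand-ins (not the paper's data) showing that a TFD concentrated along a single instantaneous frequency per time slice has lower entropy than a smeared one, which is the ordering the table reports.

```python
import numpy as np

def tfd_entropy(Q):
    """Discretized E_Q = -sum over (n, k) of Q(n,k) * log2 Q(n,k)."""
    Q = Q / Q.sum()                  # normalize to a distribution
    p = Q[Q > 0]                     # skip zero cells (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

N = 64
concentrated = np.zeros((N, N))
concentrated[np.arange(N), np.arange(N)] = 1.0    # one sharp IF bin per time n

blurred = np.zeros((N, N))
for k in range(-3, 4):                            # same IF, smeared over 7 bins
    blurred[np.arange(N), (np.arange(N) + k) % N] = 1.0

print(tfd_entropy(concentrated) < tfd_entropy(blurred))  # True
```

Here the concentrated grid has entropy log₂(64) = 6 bits, while smearing the same energy over seven bins per row raises it to about 8.8 bits.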

Figure 2: Input training image of sinusoidal FM signal


Figure 3: Input training image of parallel chirps
Figure 6: TFD obtained by the method of [10]

Figure 4: Target image for the sinusoidal FM signal

Figure 7: Spectrogram of the bat echolocation chirp signal.

Figure 5: Target TFD of parallel chirp signal

Figure 8: The deblurred TFD obtained after passing the spectrogram of the test signal through the trained neural network.
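The two training signals of Section 3.2, whose spectrograms and targets appear in Figures 2-5, can be regenerated from their definitions. This is a minimal sketch; N = 256 is an assumed signal length, since the paper does not state the value used.

```python
import numpy as np

N = 256
n = np.arange(N)

# Sinusoidal FM signal: x(n) = exp(j*pi*(1/2 - w(n))*n),
# with w(n) = 0.1*sin(2*pi*n/N).
w = 0.1 * np.sin(2 * np.pi * n / N)
x = np.exp(1j * np.pi * (0.5 - w) * n)

# Two parallel chirps: y(n) = x1(n) + x2(n), with
# w1(n) = pi*n/(4N) and w2(n) = pi/3 + pi*n/(4N).
w1 = np.pi * n / (4 * N)
w2 = np.pi / 3 + np.pi * n / (4 * N)
y = np.exp(1j * w1 * n) + np.exp(1j * w2 * n)

print(x.shape, y.shape)  # (256,) (256,)
```

Taking the magnitude of a short-time Fourier transform of x and y, followed by thresholding, would reproduce binary spectrograms of the kind used as the network's training input.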
