The nonlinear nature of neural networks, their ability to learn from their environment in supervised as well as unsupervised ways, and their universal approximation property make them highly suited to solving difficult signal processing problems.
y = Wx
The objective is to adjust the weight matrix W to minimize the error E. The derivative of the scalar quantity E with respect to the individual weights can be computed as follows:
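As a concrete illustration, assume the linear model y = Wx with a squared-error cost E = ||d − Wx||², where d is a desired output (the cost form, the dimensions, and the learning rate below are illustrative assumptions, not taken from the text). The gradient is then ∂E/∂W = −2(d − Wx)xᵀ, and one descent step can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 inputs, 2 outputs (assumed for this sketch).
x = rng.standard_normal(3)        # input vector
d = rng.standard_normal(2)        # desired output vector (assumed target)
W = rng.standard_normal((2, 3))   # weight matrix to be adjusted

def error(W, x, d):
    """Squared error E = ||d - W x||^2 (an assumed cost for illustration)."""
    e = d - W @ x
    return e @ e

# Analytic gradient of E with respect to W: dE/dW = -2 (d - W x) x^T
grad = -2.0 * np.outer(d - W @ x, x)

# One gradient-descent step on the weights.
lr = 0.01
W_new = W - lr * grad
```

For a small enough learning rate, the step is guaranteed to reduce E, since the cost is quadratic in W.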
... theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals by digital or analog devices or techniques. The term signal includes audio, video, speech, image, communications, geophysical, sonar, radar, medical, musical, and other signals.
(from the IEEE (Institute of Electrical and Electronics Engineers) Signal Processing Society)
The goal is to find the signal component model and the noise probability density function (pdf). The following assumptions are made:
A1. The unknown signal model in vector form is x_p = s_p + n_p, where x_p is the N-dimensional random input vector.
A3. The noise vector n_p has independent elements with a jointly Gaussian pdf.
A4. The mapping s() is one-to-one.
The equation above can be rewritten in terms of approximations of s_p and n_p:
x_p = ŝ_p + n̂_p
The nth element of ŝ_p is approximated with an inverse neural net, for 1 ≤ n ≤ N.
f_p(k, w_i) can represent a multinomial function of the parameter vector in a functional-link network, or a hidden-unit output in an MLP. The error function for the nth output node is:
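A conventional mean-squared-error form for the nth output node, consistent with the notation above (the symbol N_v for the number of training patterns and the weighted-sum estimate x̂_p(n) are assumptions made for this sketch), would be:

```latex
E(n) = \frac{1}{N_v}\sum_{p=1}^{N_v}\big[\,x_p(n) - \hat{x}_p(n)\,\big]^2,
\qquad
\hat{x}_p(n) = \sum_{k} w(k)\, f_p(k, \mathbf{w}_i)
```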
The signal model is determined from the noisy data using a gradient approach, such as output weight optimization (OWO). E(n) is minimized with respect to w, whose elements are denoted w(m). The gradient of E(n) is:
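A minimal sketch of this minimization, assuming the mean-squared-error form E(n) = (1/N_v) Σ_p [x_p(n) − x̂_p(n)]² with x̂_p(n) = Σ_m w(m) f_p(m). All sizes, data, and the learning rate are illustrative, and plain gradient descent is shown only to make the gradient concrete (OWO in the literature typically solves linear equations for the output weights instead):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: Nv training patterns, M basis / hidden-unit outputs.
Nv, M = 200, 5
F = rng.standard_normal((Nv, M))     # f_p(m): one row of basis outputs per pattern p
w_true = rng.standard_normal(M)      # hypothetical generating weights
x_n = F @ w_true + 0.1 * rng.standard_normal(Nv)   # noisy targets x_p(n)

w = np.zeros(M)                      # output weights w(m) of the nth node

def E(w):
    """Assumed MSE form: E(n) = (1/Nv) * sum_p (x_p(n) - sum_m w(m) f_p(m))^2."""
    r = x_n - F @ w
    return r @ r / Nv

lr = 0.1
for _ in range(500):
    # Gradient: dE(n)/dw(m) = -(2/Nv) * sum_p (x_p(n) - xhat_p(n)) * f_p(m)
    grad = -2.0 / Nv * F.T @ (x_n - F @ w)
    w -= lr * grad
```

After convergence, E(n) settles near the noise floor of the synthetic data.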
The most general blind signal processing (BSP) problem can be formulated as follows: a set of signals is observed at the output of a MIMO nonlinear dynamic system whose inputs are generated by a number of independent sources. The objective is to find an inverse neural system that recovers the original sources.
3) convergence of the learning algorithm and its speed, together with the related problem of how to avoid being trapped in local minima.
4) accuracy of the reconstructed source signals.
x(t) = H s(t) + n(t)
y(t) = W x(t)
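The instantaneous mixing model above can be simulated directly. In this sketch the sources, the mixing matrix H, and the noise level are all illustrative values, and the separating matrix W is set to the ideal H⁻¹, which a blind algorithm would have to learn without access to H:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative case: n = 2 independent sources, T samples (all values assumed).
T = 1000
s = rng.uniform(-1.0, 1.0, size=(2, T))   # independent source signals s(t)
H = np.array([[1.0, 0.6],
              [0.4, 1.0]])                # unknown mixing matrix (assumed here)
n = 0.01 * rng.standard_normal((2, T))    # additive sensor noise n(t)

x = H @ s + n                             # observed mixtures: x(t) = H s(t) + n(t)

# A separating matrix W yields y(t) = W x(t). With the ideal W = H^{-1},
# y recovers s up to permutation and scaling ambiguities.
W = np.linalg.inv(H)
y = W @ x
```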
There are two types of ambiguities: 1) even if the independent signals can be extracted correctly from their mixtures, their order of arrangement is unknown (permutation ambiguity); 2) the scales of the extracted signals are unknown (scaling ambiguity).
The performance of the source separation is evaluated by the composite matrix T = WH = PD, where D is a diagonal scaling matrix and P is an n×n permutation matrix.
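A small check of this criterion, using a hypothetical separating matrix that recovers the sources in swapped order and at different scales (the values of H, P, and D are illustrative):

```python
import numpy as np

H = np.array([[1.0, 0.6],
              [0.4, 1.0]])            # assumed mixing matrix

# Hypothetical separating matrix: W = D P H^{-1}, i.e. perfect separation
# up to a permutation P and a diagonal scaling D.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # permutation matrix
D = np.diag([2.0, -0.5])              # diagonal scaling matrix
W = D @ P @ np.linalg.inv(H)

T_comp = W @ H                        # composite matrix T = W H

# Separation is perfect up to the two ambiguities when T equals a permutation
# times a diagonal scaling: exactly one nonzero entry per row and per column.
rows = (np.abs(T_comp) > 1e-8).sum(axis=1)
cols = (np.abs(T_comp) > 1e-8).sum(axis=0)
```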
Multichannel Blind Deconvolution / Equalization Problem
We now formulate a more general, physically realistic model in which the observed sensor signals are linear combinations of multiply time-delayed versions of the original source signals and/or of mixed signals.
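Such a convolutive mixture can be written as x(t) = Σ_k H_k s(t − k), with a matrix tap H_k per delay. A sketch under assumed dimensions (2 sources, 2 sensors, 3 mixing taps; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed dimensions: 2 sources, 2 sensors, L = 3 mixing taps, T samples.
T, L = 500, 3
s = rng.standard_normal((2, T))                        # source signals s(t)
Hk = [rng.standard_normal((2, 2)) for _ in range(L)]   # matrix taps H_0 .. H_{L-1}

# Convolutive mixture: x(t) = sum_k H_k s(t - k)
x = np.zeros((2, T))
for k, H in enumerate(Hk):
    x[:, k:] += H @ s[:, :T - k]    # k-sample-delayed, mixed contribution
```

The instantaneous model of the previous section is the special case L = 1.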
o Yu Hen Hu and Jenq-Neng Hwang (Eds.), Neural Network Signal Processing.
o S. Amari, "Natural gradient works efficiently in learning," Neural Computation.
o S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," in Advances in Neural Information Processing Systems, Vol. 8.