
Prepared by, Romil Patel (172)

The nonlinear nature of neural networks, their ability to learn from their environment in both supervised and unsupervised ways, and their universal approximation property make them highly suited to solving difficult signal processing problems.

Basic Artificial Neural Network (ANN) Models


1). McCulloch and Pitts Neuron Model
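A minimal sketch of the McCulloch-Pitts unit: binary inputs, a weighted sum, and a hard threshold. The weights and threshold below are illustrative, chosen so the unit realizes logical AND.

```python
import numpy as np

def mcp_neuron(x, w, theta):
    """McCulloch-Pitts neuron: fires (outputs 1) iff the weighted
    sum of its binary inputs reaches the threshold theta."""
    return 1 if np.dot(w, x) >= theta else 0

# With unit weights and threshold 2, the neuron realizes logical AND:
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcp_neuron(np.array(x), np.array([1, 1]), 2))
```

Lowering the threshold to 1 turns the same unit into logical OR, which is why the threshold is treated as a free parameter of the model.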

2). Multilayer Perceptron (MLP) Model

Finding the Weights of a Single Neuron MLP

y = Wx

The objective is to adjust the weight matrix W so as to minimize the error E between the desired and actual outputs. For E = Σp ||dp − W xp||², the derivative of the scalar quantity E with respect to the weights is

∂E/∂W = −2 Σp (dp − W xp) xpᵀ
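The weight update can be sketched with gradient descent on synthetic data; the target weights, learning rate, and iteration count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))           # 50 input vectors x_p
W_true = np.array([[1.0, -2.0, 0.5]])  # illustrative target weights
D = X @ W_true.T                       # desired outputs d_p = W_true x_p

W = np.zeros((1, 3))                   # weight matrix to be learned
lr = 0.05
for _ in range(500):
    Y = X @ W.T                        # neuron outputs y_p = W x_p
    grad = -2 * (D - Y).T @ X          # dE/dW for E = sum_p ||d_p - W x_p||^2
    W -= lr * grad / len(X)            # gradient-descent update on the mean gradient

print(np.round(W, 3))                  # approaches W_true
```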

3). Radial Basis Networks
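In a radial basis network the hidden units respond to the distance between the input and fixed centers, and the output is a linear combination of those responses. A minimal Gaussian-RBF forward pass; the centers, width, and output weights are assumed for illustration.

```python
import numpy as np

def rbf_forward(x, centers, sigma, w_out):
    """Forward pass of a radial basis network: each hidden unit is a
    Gaussian bump around a fixed center; the output is linear in them."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    return w_out @ phi

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # illustrative centers
print(rbf_forward(np.array([0.0, 0.0]), centers, 1.0, np.array([1.0, 1.0])))
# 1 + e**-1: the first unit fires fully, the second is two squared-units away
```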

... theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals by digital or analog devices or techniques. The term signal includes audio, video, speech, image, communications, geophysical, sonar, radar, medical, musical, and other signals.
(from the IEEE (Institute of Electrical and Electronics Engineers) Signal Processing Society)

To find the signal component model and the noise probability density function (pdf), the following assumptions are made:

A1. The unknown signal model in vector form is xp = sp + np, where
xp - N-dimensional random input vector,
θp - M-dimensional random parameter vector,
sp - signal vector (a function of θp),
np - noise component of xp.

A2. The elements θ(k) of θ are statistically independent of n.

A3. The noise vector n has independent elements with a jointly Gaussian pdf.

A4. The mapping s(θ) is one-to-one.

Rewrite the above equation in terms of approximates of sp and np,

xp = ŝp + n̂p

Approximate the nth element of sp with an inverse neural net, for 1 ≤ n ≤ N:

ŝp(n) = Σk wo(n,k) fp(k,wi),  k = 1, ..., Nu

where,
wo(n,k) - coefficient of fp(k,wi) in the approximation to sp(n),
fp(k,wi) - kth input or hidden unit in the network,
wi - a vector of weights connecting the input layer to a single hidden layer,
Nu - the number of units feeding the output layer.

fp(k,wi) can represent a multinomial function of the parameter vector in a functional link network, or a hidden unit output in an MLP. The error function for the nth output node is

E(n) = Σp [xp(n) − ŝp(n)]², where ŝp(n) is the approximation to sp(n).

The signal model is determined from the noisy data by using a gradient approach, such as output weight optimization (OWO). To minimize E(n) with respect to w, whose elements are denoted by w(m), the gradient of E(n) is

∂E(n)/∂w(m) = −2 Σp [xp(n) − ŝp(n)] ∂ŝp(n)/∂w(m)
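With the hidden-unit outputs fp(k,wi) held fixed, minimizing E(n) over the output weights wo(n,·) is a linear least-squares problem, which the OWO step solves directly, e.g. via the normal equations. A sketch on synthetic data; the hidden-output matrix, true weights, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(200, 8))                 # F[p, k] = f_p(k, w_i), fixed hidden outputs
w_true = rng.normal(size=8)                   # illustrative "true" output weights
x = F @ w_true + 0.01 * rng.normal(size=200)  # noisy target x_p(n)

# OWO step: solve the normal equations (F^T F) w = F^T x for the output weights
R = F.T @ F                                   # autocorrelation of hidden-unit outputs
c = F.T @ x                                   # cross-correlation with the target
w_o = np.linalg.solve(R, c)

print(np.round(w_o - w_true, 2))              # residual is small
```

Because the problem is linear in wo, this one-shot solve replaces many small gradient steps on the output layer.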

An MLP network for obtaining the signal model

The most general blind signal processing (BSP) problem can be formulated as follows: a set of signals is observed at the output of an MIMO nonlinear dynamic system whose input signals are generated from a number of independent sources. The objective is to find an inverse neural system that recovers the original sources.

This requires a priori knowledge of:


1) solvability of the problem (the existence of the inverse system),
2) stability of the inverse model,
3) convergence of the learning algorithm and its speed, with the related problem of how to avoid being trapped in local minima,
4) accuracy of the reconstructed source signals.

General block diagram,

x(t) = H s(t) + n(t)

y(t) = W(t) x(t)

Two types of ambiguities: 1) Even if we can extract the independent signals correctly from their mixtures, we do not know the order of their arrangement. 2) The scales of the extracted signals are unknown.

The performance of the source separation is evaluated by the composite matrix T = WH = PD, where D is a scaling (diagonal) matrix and P is an n×n permutation matrix.

So, y(t) = T s(t)


The separation is perfect when T tends to a generalized permutation matrix, which has exactly one nonzero element in each row and each column, e.g.

[1 0 0]
[0 0 1]
[0 1 0]

This corresponds to the indeterminacies of the scaling and order of the estimated signals.
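The "exactly one nonzero per row and column" criterion is easy to check numerically. A sketch with an assumed 2×2 mixing matrix and its exact inverse as the separator, so T = WH is the identity and the test succeeds.

```python
import numpy as np

def is_generalized_permutation(T, tol=1e-8):
    """True if T has exactly one nonzero element in each row and column
    (i.e., T = P D for a permutation P and a diagonal scaling D)."""
    nz = np.abs(T) > tol
    return bool(np.all(nz.sum(axis=0) == 1) and np.all(nz.sum(axis=1) == 1))

H = np.array([[1.0, 2.0], [0.5, 1.5]])    # illustrative mixing matrix
W = np.linalg.inv(H)                      # a perfect separator for this H
print(is_generalized_permutation(W @ H))  # True: T is (numerically) the identity
```

In practice W comes from a learning rule rather than matrix inversion, and the tolerance absorbs the residual cross-talk.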

We can also use a feedback model,


[Block diagram: feedback separation network with input x(t), output y(t), and a negative feedback path]

Relationship between feed-forward & feedback weights,
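Assuming the feedback network computes y(t) = x(t) − Ŵ y(t) (a common convention; the exact sign depends on the diagram), then y = (I + Ŵ)⁻¹ x, so the equivalent feed-forward matrix is W = (I + Ŵ)⁻¹, i.e. Ŵ = W⁻¹ − I. A numerical check of this correspondence, with an assumed feed-forward matrix:

```python
import numpy as np

W = np.array([[2.0, 0.3], [0.1, 1.5]])       # illustrative feed-forward weights
W_fb = np.linalg.inv(W) - np.eye(2)          # equivalent feedback weights: W^-1 - I

x = np.array([1.0, -2.0])
y_ff = W @ x                                 # feed-forward output
y_fb = np.linalg.solve(np.eye(2) + W_fb, x)  # feedback: solve (I + W_fb) y = x
print(np.allclose(y_ff, y_fb))               # True
```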

Multichannel Blind Deconvolution / Equalization Problem

Now formulate a more general and physically realistic model, in which the observed sensor signals are linear combinations of multiply time-delayed versions of the original source signals and/or mixed signals.
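Such a convolutive mixture can be written as x(t) = Σp Hp s(t − p), where the Hp are matrix-valued FIR filter taps. A sketch that generates one; the number of sources, sensors, and the filter length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_src, n_sens, L = 100, 2, 2, 3
s = rng.normal(size=(T, n_src))          # independent source signals s(t)
H = rng.normal(size=(L, n_sens, n_src))  # FIR mixing taps H_0 .. H_{L-1}

# Convolutive mixture: x(t) = sum_p H_p s(t - p)
x = np.zeros((T, n_sens))
for p in range(L):
    x[p:] += s[:T - p] @ H[p].T          # delayed sources through tap H_p

print(x.shape)
```

The instantaneous model x(t) = H s(t) is the special case L = 1; blind deconvolution must recover s(t) without knowing any of the taps Hp.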

Detailed architecture for a multichannel blind deconvolution feed-forward neural network

o Yu Hen Hu and Jenq-Neng Hwang, Neural Network Signal Processing
o S. Amari, "Natural gradient works efficiently in learning," Neural Computation

o S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," in Advances in Neural Information Processing Systems, Vol. 8
