
Proc. of Int. Conf. on Control, Communication and Power Engineering (CCPE), Elsevier, 2014
Neural Network Approach for Synthesis of Uniform Linear Antenna Arrays
V. V. Vidyadhara Raju(1), M. Padmanabha Raju(2), P. Akhendra Kumar(3)
(1,2) Department of Electronics and Communication Engineering
Shri Vishnu Engineering College for Women, Bhimavaram, India
(1) vidyadhar19@gmail.com, (2) mudunuri63@gmail.com, (3) akhendra.p@gmail.com
Abstract: In this paper, back-propagation and weighted-LMS algorithms are applied to the synthesis of antenna arrays. Neural networks help in modelling antenna arrays by acting on their geometric and radio-electric parameters. The main synthesis problem is to find the weights of the uniform linear antenna array elements that are optimum to provide the radiation pattern with maximum reduction in the side-lobe level. The technique is shown to be effective in improving the performance of the antenna array. The neural network not only establishes the analytical equations needed for the synthesis of the antenna array, but also provides great flexibility between the input and output system parameters, which makes the synthesis possible through the explicit relation it provides. ANN is thus able to generate synthesis results that compare favourably with classical approaches such as Fourier series and Dolph-Chebyshev [1].
Index Terms: ANN, MLP, RBFN, SLL, HPBW

I. INTRODUCTION
In modern communication systems there is a need to design antenna arrays that meet desired radiation characteristics. Antenna arrays are complex radiating structures whose radiation pattern can be considered as the interference between the electromagnetic fields of the individual radiating elements. The radiation patterns of antenna arrays are designed either by selecting the appropriate inter-element spacing, or by modifying the amplitudes and phases of the excitations applied to the respective elements of the array. To achieve the desired side-lobe level (SLL) or half-power beamwidth (HPBW) or both, one must solve the synthesis problem, which consists of finding the excitation distribution of the antenna elements or the inter-element spacing between them. The analysis problem is the determination of the radiation pattern from a given excitation or geometry using numerical tools, and the synthesis problem can be solved as the inverse of the analysis.
Neural networks, which act as massively parallel distributed structures, can provide successful solutions to many complex problems. Applying neural networks in the field of antenna arrays helps to reduce the complexity of the model by offering an efficient way to incorporate the real radiating properties and the coupling effects between antenna elements into the synthesis process, whereas other EM models are computationally intensive and time-consuming.

The developed neural models give near-instantaneous responses, since they perform only basic mathematical operations and evaluate elementary functions. The most important characteristic of neural models is their generalization capability. To estimate the separation of antenna array elements, ANNs are trained to reproduce the desired radiation pattern, half-power beamwidth and side-lobe levels resulting from various spatial distributions of antenna elements.
A neural network model is developed by defining the input and output variables of the structure. The generated data are separated into two groups, training data and test data. The neural network is trained using the training data. Once the model has reached the required accuracy in predicting outputs, it can be used for simulation. In this paper, the model inputs are geometrical parameters, the relative positions of antenna elements in a linear array, and the model outputs are electrical parameters, such as the side-lobe level.
This paper considers the application of RBFN and MLP networks to regular antenna array synthesis. For given values of the radiation pattern, we estimate the phase difference of the excitations between neighbouring antenna elements.
II. LINEAR ARRAY PATTERN SYNTHESIS

Fig. 1 Far-field geometry of an N-element array of isotropic sources positioned along the Z-axis

An N-element array distributed along the z-axis with unequal inter-element spacing is considered. The array factor for the geometry of Fig. 1 can be written as

AF(\theta) = \sum_{n=0}^{N-1} a_n \exp(j k d_n \cos\theta)    (1)
Here, the angle θ is measured from the axis of the array; the wavenumber k = 2π/λ, with λ the wavelength; a_n and d_n are the weighting coefficient and the location of the nth element, respectively.
For a symmetrical current distribution (a_{-n} = a_n) with equal inter-element spacing d between the elements, the array factor takes a simpler form. With the help of the auxiliary variable ψ defined by
\psi = k d \cos\theta
the array factor for an even number of elements can be given by
|AF| = 2 \sum_{n=1}^{N/2} a_n \cos\left(\frac{2n-1}{2}\,\psi\right)    (2)
The array factor for an odd number of elements can be given by
|AF| = a_0 + 2 \sum_{n=1}^{(N-1)/2} a_n \cos(n\psi)    (3)
where a_0 denotes the current of the centre element.


The two design parameters a_n and d_n are tailored in order to obtain the desired radiation pattern within an acceptable tolerance.
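The array-factor expressions above translate directly into code. The following is a minimal sketch, not from the paper, that evaluates equation (1) for an arbitrary linear array; the element count, positions, and uniform weights are illustrative assumptions.

```python
import numpy as np

def array_factor(a, d, theta, wavelength=1.0):
    """AF(theta) = sum_n a_n * exp(j*k*d_n*cos(theta)), eq. (1)."""
    k = 2 * np.pi / wavelength                  # wavenumber k = 2*pi/lambda
    phase = k * np.outer(np.cos(theta), d)      # k*d_n*cos(theta), one row per angle
    return (np.exp(1j * phase) * a).sum(axis=1)

# Illustrative case: 15 isotropic elements, uniform weights, spacing d = lambda/2
N = 15
positions = 0.5 * np.arange(N)                  # element positions in wavelengths
weights = np.ones(N)
theta = np.radians(np.linspace(0.1, 179.9, 1800))
af_db = 20 * np.log10(np.abs(array_factor(weights, positions, theta)) / N)
```

A uniform array like this has its first side lobe near -13.3 dB; tapering the weights a_n is what trades main-lobe width against lower SLL, which is exactly the trade-off the synthesis problem addresses.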
III. ARTIFICIAL NEURAL NETWORKS
A group of processing elements (neurons) combine to form an artificial neural network (ANN). The neurons are grouped and ordered in various layers connected by interconnection weights W_j (j = 1, ..., L), called synaptic weights [1]. The input data x_j are processed with the help of an activation function f(x). The basic model of a single neuron is shown in Fig. 2 [2].
y = f\left(\sum_{j=1}^{L} w_j x_j + b\right)    (4)
where b is the bias parameter of the activation function f(x).

Fig. 2 Basic model of a neuron
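As a concrete illustration of equation (4), the sketch below computes the output of one neuron with a sigmoid activation; the input, weights, and bias values are arbitrary placeholders.

```python
import numpy as np

def neuron(x, w, b):
    """Single-neuron model of eq. (4): y = f(sum_j w_j*x_j + b), sigmoid f."""
    z = np.dot(w, x) + b              # weighted sum of inputs plus bias
    return 1.0 / (1.0 + np.exp(-z))   # f(z): sigmoid activation

# Placeholder input vector, weights and bias
y = neuron(np.array([0.5, -1.2, 3.0]), np.array([0.4, 0.1, -0.7]), b=0.2)
```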

IV. MULTILAYER PERCEPTRON (MLP)


MLPs are feed-forward networks of simple processing units with at least one hidden layer. Each processing unit is similar to a perceptron, except that the threshold function is replaced by a differentiable non-linearity; a differentiable non-linearity is needed so that the gradient can be computed. The non-linearity at the hidden layer is one of the critical features of the MLP: if all neurons in an MLP had linear activation functions, the MLP could be replaced by a single layer of perceptrons, which can only solve linearly separable problems. A multi-layer network comprises an input layer, whose neurons encode the information presented to the network, a variable number of internal layers called "hidden layers", and an output layer that produces the desired responses. Neurons of the same layer are not connected to each other. The learning process of this network is supervised, and the algorithm used is the Back-Propagation Learning algorithm (BPL) [6]. It includes two steps:
An input configuration is presented to the network and propagated forward, i.e. from the input layer to the hidden layer and then to the output layer.
After propagation, the error over the given examples, expressed as a function of the synaptic weights (w), is minimized. This error is the squared sum of the distances between the computed responses (S) and the desired ones (Y) over the whole learning set. This recalculation of the synaptic weights continues until the maximum number of epochs is reached or the error falls below the desired goal, as shown in Fig. 3.
Multilayer perceptrons with sigmoid activation functions can form smooth decision boundaries rather than piecewise-linear boundaries; the architecture of an MLP with three layers is shown in Fig. 4.
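For reference, here is a compact sketch of the back-propagation loop described above for a one-hidden-layer MLP trained on squared error. The layer sizes, learning rate, and initialization are placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 10, 1, 0.05              # illustrative sizes / rate
W1, b1 = rng.normal(0, 0.5, (n_hid, n_in)), np.zeros(n_hid)
W2, b2 = rng.normal(0, 0.5, (n_out, n_hid)), np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y):
    """One BPL step: forward pass, then back-propagate the squared error."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)                         # hidden-layer response
    s = sigmoid(W2 @ h + b2)                         # network response S
    d2 = (s - y) * s * (1 - s)                       # output-layer delta
    d1 = (W2.T @ d2) * h * (1 - h)                   # hidden-layer delta
    W2 -= lr * np.outer(d2, h); b2 -= lr * d2        # gradient-descent updates
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1
    return 0.5 * np.sum((s - y) ** 2)                # squared error E(w)
```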

V. RADIAL BASIS FUNCTION NETWORK (RBFN)


The RBFN is a three-layer network in which the input layer is a fan-out layer and does not involve any processing. The role of the hidden (second) layer is to perform a non-linear mapping from the input space

Fig. 3 Back-propagation learning rule

Fig. 4 Multilayer network

to a higher-dimensional space in which the patterns become linearly separable. The function of the final layer is to perform a simple weighted sum to give a linear output [5]; the architecture of the RBFN is shown in Fig. 5.
When used for function approximation, the RBFN produces a real-valued output that matches the desired output. Whenever pattern classification [6] is required, a hard-limiter or sigmoid function can be placed on the output neurons to give 0/1 output values. A non-linear function that is symmetric about a radial cluster centre is known as a radial-basis function; the most commonly used radial-basis function is the Gaussian.

Fig. 5 Radial basis function network: inputs x1, x2, x3 feed a fan-out input layer; the hidden-layer weights correspond to the cluster centres, with an output function that is usually Gaussian; the output layer (y1, y2) forms a linear weighted sum

The Euclidean distance is measured from the centre of the cluster; let r_j be the distance of the input from cluster centre j. For each neuron in the hidden layer, the weights represent the coordinates of the centre of its cluster:

r_j = \sqrt{\sum_{i=1}^{n} (x_i - w_{ij})^2}    (5)
(\text{hidden unit})_j = \exp\left(-\sum_{i=1}^{n} (x_i - w_{ij})^2 \,/\, 2\sigma^2\right)

where x is the input and w is the weight vector. The variable σ defines the width or radius of the bell shape and has to be determined empirically. When the distance from the centre of the Gaussian reaches σ, the output drops from 1 to about 0.6.
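Equation (5) and the Gaussian hidden-unit output can be checked with a few lines of code. The sketch below computes the hidden-layer activations of an RBFN for one input vector; the cluster centres and width σ are assumptions for illustration.

```python
import numpy as np

def rbf_hidden(x, centres, sigma):
    """Hidden-unit outputs exp(-r_j^2 / 2*sigma^2), with r_j from eq. (5)."""
    r = np.linalg.norm(centres - x, axis=1)          # Euclidean distance r_j
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))    # Gaussian activation

centres = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])  # assumed centres w_j
phi = rbf_hidden(np.array([1.0, 0.5]), centres, sigma=0.8)
# At r = sigma the activation equals exp(-1/2) ~ 0.61, the "0.6" quoted above.
```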

Training:
In an RBF network the hidden-layer weights of each unit correspond to the vector representing the centre of a cluster. A traditional clustering algorithm such as k-means is used to find these weights [4]. As this training is unsupervised, the number of clusters k is set in advance; the algorithm then finds the best fit to the clusters. The initial classification is done by choosing the closest centre for each item of data, so every item of data is assigned a class from 1 to k.
The data are then reclassified and the centres recomputed repeatedly; the sum of the distances to the centres is monitored, and training is halted when the total distance stops decreasing. The hidden layer is thus trained with unsupervised learning [6], and the output layer is trained by the standard gradient-descent technique [7] (the Least Mean Squares algorithm).
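A minimal sketch of the two-stage training just described: a few hand-rolled k-means iterations fix the centres, and the linear output weights are then fitted by the LMS rule. The value of k, the width σ, the learning rate, and the data are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (200, 2))          # placeholder training inputs
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]         # placeholder targets

# Stage 1: unsupervised k-means places the hidden-unit centres (k set in advance)
k, sigma, lr = 8, 0.7, 0.05
centres = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
    centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

# Stage 2: LMS (gradient-descent) fit of the linear output weights
phi = np.exp(-((X[:, None, :] - centres) ** 2).sum(-1) / (2.0 * sigma ** 2))
w = np.zeros(k)
for _ in range(100):
    for p, t in zip(phi, y):
        w += lr * (t - w @ p) * p           # LMS update toward target t
```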
VI. SIMULATION RESULTS
MLP output radiation pattern:

Fig. 6(a) Radiation pattern (array factor gain in dB versus theta in degrees) for the array used in the simulations, a 15-element linear array with inter-element spacing d = λ/2. Fig. 6(b) Mean square error performance of the MLP network.

The training-set examples included sector-width intervals of 20° and SLL intervals of -20 dB.
The mean-square-error performance of the MLP network is shown in Fig. 6(b), and the excitation coefficients at the last epoch for the N = 15 array elements are listed in Table I.
TABLE I. UPDATED EXCITATION COEFFICIENT VALUES FOR MLP

S.No   Amplitude excitation using MLP
1      0.0713
2      0.5285
3      0.5840
4      0.5076
5      0.1012
6      0.1708
7      0.0189
8      0.0645
9      0.5975
10     0.4494
11     0.4721
12     0.6480
13     0.6251
14     0.4118
15     0.4581

RBFN output pattern:

The array used in the simulations is a 15-element linear array with inter-element spacing d = λ/2. The training-set examples included sector-width intervals of 20° and SLL intervals of -20 dB; the resulting radiation pattern is given in Fig. 7.

Fig. 7 Radiation pattern (gain in dB versus theta in degrees) of the linear array using RBFN. Fig. 8 Mean square error performance of the RBF network.

The mean-square-error performance of the RBF network is shown in Fig. 8, and the excitation coefficients at the last epoch are listed in Table II.
TABLE II. UPDATED EXCITATION COEFFICIENT VALUES FOR RBFN

S.No   Amplitude excitation using RBFN
1      0.9361
2      0.4731
3      0.2239
4      0.8102
5      0.9958
6      0.6888
7      0.0412
8      0.6268
9      0.9848
10     0.8557
11     0.3033
12     0.3990
13     0.9040
14     0.9619
15     0.5440

Comparison of the classical approach (Fourier method) with neural networks:

Fig. 9 shows the comparison of the neural approach with the Fourier-method approach [3] for a 15-element linear array with inter-element spacing d = λ/2. The training-set examples included sector-width intervals of 20° and SLL intervals of -20 dB.
From the figure it can be clearly seen that the neural network approach matches the desired output more closely than the classical approach does.
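For context on the classical baseline, here is a sketch of the Fourier-series synthesis idea: for d = λ/2 the auxiliary variable ψ = π cosθ spans [-π, π], so the array factor is a Fourier series in ψ and the excitations are its Fourier coefficients. The desired sector pattern below is an assumption for illustration, not the paper's training data.

```python
import numpy as np

N = 15
M = (N - 1) // 2                             # symmetric element indices -M..M
psi = np.linspace(-np.pi, np.pi, 2001)
desired = (np.abs(psi) < np.pi / 4).astype(float)   # assumed sector beam

# a_n = (1/2pi) * integral of AF_d(psi) * exp(-j*n*psi) over [-pi, pi]
a = np.array([np.trapz(desired * np.exp(-1j * n * psi), psi) / (2 * np.pi)
              for n in range(-M, M + 1)])
```

Truncating the series at N terms is what produces the ripple and side-lobe behaviour against which the neural approaches are compared.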
VII. CONCLUSION
Two different neural-network architectures have been tested and their effectiveness for array-pattern synthesis has been compared. The results show that the two networks exhibit good performance in matching the desired output. These techniques can be used in a vast range of applications in which the networks can be trained offline with suitable data to carry out real-time processing and obtain the array-element excitations.

Fig. 9 Comparison of the neural network approach with the basic classical approach

REFERENCES
[1] R. Shavit and I. Taig, "Comparison study of pattern synthesis techniques using neural networks," Microwave and Optical Technology Letters, Vol. 42, No. 2, July 2004.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, New Jersey, 1999.
[3] C. Balanis, Basics of Antenna and Their Design, McGraw-Hill, 2006.
[4] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, Vol. 2, 1989, pp. 359-366.
[5] R. Haupt, "An introduction to genetic algorithms for electromagnetics," IEEE Antennas and Propagation Magazine, Vol. 37, No. 2, April 1995, pp. 7-15.
[6] L. Merad, F. T. Bendimerad, S. M. Meriah and S. A. Djennas, "Neural networks for synthesis and optimization of antenna arrays," Radioengineering, Vol. 16, No. 1, April 2007.
[7] R. G. Ayestarán, F. Las-Heras and J. A. Martínez, "Non-uniform antenna array synthesis using neural networks," Journal of Electromagnetic Waves and Applications, Vol. 21, No. 8, 2007, pp. 1001-1011.
