The signals from the antennas are combined or processed in order to achieve
improved performance over that of a single antenna. The antenna array can be
used to:
increase the overall gain
provide diversity reception
cancel out interference from a particular set of directions
"steer" the array so that it is most sensitive in a particular direction
determine the direction of arrival of the incoming signals
maximize the Signal to Interference Plus Noise Ratio (SINR)
An antenna array is a set of N spatially separated antennas. The number of
antennas in an array can be as small as 2, or as large as several thousand (as in
the AN/FPS-85 Phased Array Radar Facility operated by the U.S. Air Force). In
general, the performance of an antenna array (for whatever application it is
being used) increases with the number of antennas (elements) in the array; the
drawback of course is the increased cost, size, and complexity.
The antennas are spaced one-half wavelength apart (centered at z = 0). The E-field of the plane wave (assumed to have a constant amplitude everywhere) can be written as:

E(r) = E0 exp(-j k · r)
In the above, k is the wave vector, which specifies the variation of the phase as
a function of position.
The (x,y) coordinates of each antenna are (0,0); only the z-coordinate changes from antenna to antenna. Further, assuming that the antennas are isotropic sensors, the signal received by each antenna is proportional to the E-field at the antenna's location. Hence, for antenna i at position z_i, the received signal is:

X_i(θ) = E0 exp(j (2π/λ) z_i cos θ)

where θ is the angle of arrival of the plane wave, measured from the z-axis.
The received signals differ only by a complex phase factor, which depends on the antenna separation and the angle of arrival of the plane wave. If the signals are summed together, the result is:

Y(θ) = Σ_i X_i(θ) = E0 Σ_i exp(j (2π/λ) z_i cos θ)
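This summed response can be sketched numerically. The following is a minimal example (the function name and default element count are illustrative; the figure in the text may use a different array, so its null directions may differ):

```python
import numpy as np

def array_output(theta_deg, n_elements=2, spacing=0.5):
    """Sum of the signals received by an n-element linear array on the
    z-axis, for a unit-amplitude plane wave arriving at angle theta
    (measured from the z-axis). Isotropic elements are assumed; spacing
    is in wavelengths."""
    theta = np.deg2rad(theta_deg)
    # element positions in wavelengths, centered at z = 0
    z = (np.arange(n_elements) - (n_elements - 1) / 2) * spacing
    # received signal at each element: a pure phase factor
    signals = np.exp(1j * 2 * np.pi * z * np.cos(theta))
    return np.sum(signals)

# Broadside (90 degrees): every element sees the same phase, so the
# signals add coherently and the output magnitude equals the element count.
print(abs(array_output(90)))   # ~2.0
# End-fire (0 degrees): for two half-wavelength-spaced elements the two
# phase factors are opposite, so the summed output vanishes.
print(abs(array_output(0)))    # ~0.0
```

Where the nulls fall depends on the element count and spacing, but the broadside maximum for a uniformly summed array is general.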
Figure 2 shows that the array actually processes the signals better in some
directions than others. For instance, the array is most receptive when the angle
of arrival is 90 degrees. In contrast, when the angle of arrival is 45 or 135
degrees, the antenna array has zero output power, no matter how much power is
in the incident plane wave. In this manner, a directional radiation pattern is
obtained even though the antennas were assumed to be isotropic. Even though
this was shown for receiving antennas, due to reciprocity, the transmitting
properties would be the same.
Suppose (as in Figure 4 here) that the signals from the elements are each multiplied by a complex weight w_i and then summed together to form the array output, Y:

Y(θ) = Σ_i w_i X_i(θ)
The output of the array will vary based on the angle of arrival of an incident
plane wave (as described here). In this manner, the array itself is a spatial filter
- it filters incoming signals based on their angle of arrival. The output Y is a
function of θ, the arrival angle of a wave relative to the array. In addition,
if the array is transmitting, the radiation pattern will be identical in shape to the
receive pattern, due to reciprocity.
Here k is the wave vector of the incident wave. The output can be factored simply as:

Y(θ) = E0 · AF(θ), where AF(θ) = Σ_i w_i exp(j (2π/λ) z_i cos θ) is known as the Array Factor.
Side Note: If the elements are identical (the array is made up of all the same type of antenna) and have the same physical orientation (all point or face the same direction), then the radiation (or reception) pattern of the antenna array is simply the Array Factor multiplied by the radiation pattern of a single element. This concept is known as pattern multiplication.
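To sketch how the weights shape the array factor, the following example computes AF(θ) for a uniformly spaced linear array and steers the beam by phasing the weights. The function names, element count, and steered angle are illustrative assumptions, not from the original text:

```python
import numpy as np

def array_factor(weights, theta_deg, spacing=0.5):
    """Array Factor of a uniformly spaced linear array along the z-axis:
    AF(theta) = sum_n w_n * exp(j*2*pi*spacing*n*cos(theta)),
    with spacing in wavelengths and theta measured from the z-axis."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    n = np.arange(len(weights))
    phases = np.exp(1j * 2 * np.pi * spacing * np.outer(np.cos(theta), n))
    return phases @ np.asarray(weights)

def steering_weights(n_elements, theta0_deg, spacing=0.5):
    """Phase-only weights that steer the main beam toward theta0: each
    weight conjugates the phase an incoming wave from theta0 would
    produce at its element, so all terms add in phase at that angle."""
    theta0 = np.deg2rad(theta0_deg)
    n = np.arange(n_elements)
    return np.exp(-1j * 2 * np.pi * spacing * n * np.cos(theta0))

w = steering_weights(4, 60)          # steer a 4-element array toward 60 degrees
print(abs(array_factor(w, 60)[0]))   # ~4: full coherent gain at the steered angle
print(abs(array_factor(w, 90)[0]))   # much smaller response off the steered direction
```

Because each weight cancels the incoming phase at its element, all N terms add in phase at the steered angle, so |AF| equals the element count there.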
The weights are fundamental in controlling the behavior of the array. Some
methods are now presented, which also serve to explain the versatility of
antenna arrays.
1. Schelkunoff Polynomial Method
Instead of steering an antenna array (in which case we want to
receive or transmit primarily in a particular direction), suppose instead we want
to ensure that a minimum of energy goes in particular directions. The weights of
an antenna array can be selected such that the radiation pattern has nulls (0 energy
transmitted or received) in particular directions. In this manner, undesirable
directions of interference, jamming signals, or noise can be reduced or completely
eliminated.
It turns out that this isn't really hard to do, either. In general, an N element array can
place N-1 independent nulls in its radiation pattern. This just requires a little math
to work through, and will be illustrated via an example. Let's assume we have an
N element linear array with uniform spacing equal to a half-wavelength and lying
on the z-axis.
To start, the array factor of a uniformly spaced linear array with half-wavelength spacing along the z-axis can be rewritten using the variable substitution z = exp(jπ cos θ) as:

AF(z) = w0 + w1 z + w2 z^2 + ... + w_{N-1} z^{N-1}

The above equation is simply a polynomial in the (complex) variable z. Recall that a polynomial of order N has N zeros (which may be complex). The polynomial for the AF above is of order N-1, and therefore has N-1 zeros. If the zeros are numbered starting from zero, they can be written as z_0, z_1, ..., z_{N-2}. The AF can then be rewritten in factored form as:

AF(z) = w_{N-1} (z - z_0)(z - z_1) ··· (z - z_{N-2})
We've introduced new variables (the zeros) and eliminated the weights. Hence, we can choose the zeros to be wherever we want, and then work out what the weights must be to produce the same pattern.
To make the example concrete, let N = 3. Suppose we want the array's radiation pattern to have zeros at 45 and 120 degrees. We simply use the equation for z above and substitute these angles to obtain the corresponding zeros:

z_0 = exp(jπ cos 45°)
z_1 = exp(jπ cos 120°)

For simplicity, we'll let w_2 = 1. The AF then becomes:

AF(z) = (z - z_0)(z - z_1) = z^2 - (z_0 + z_1) z + z_0 z_1
We already know what the zeros are, so expanding the factored form automatically gives the weights: w_2 = 1, w_1 = -(z_0 + z_1), and w_0 = z_0 z_1. Using these weights to plot the magnitude of the array factor, we obtain the result in Figure 1.
Figure 1. Magnitude of array factor.
Observe that the radiation pattern has zeros at 45 and 120 degrees, exactly as we
specified. This method can be used for whatever directions you want; however if
N-1 nulls are selected for an N element array, the designer no longer has control
over where the maximum of the radiation pattern is.
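The N = 3 worked example above can be checked numerically. This is a minimal sketch assuming half-wavelength spacing, with the leading weight set to 1 as in the text:

```python
import numpy as np

# Schelkunoff synthesis for the N = 3 example: place nulls at 45 and 120
# degrees. Each desired null angle maps to a zero on the unit circle via
# z_i = exp(j*pi*cos(theta_i)) (half-wavelength spacing).
null_angles_deg = [45, 120]
zeros = np.exp(1j * np.pi * np.cos(np.deg2rad(null_angles_deg)))

# np.poly expands the factored form prod(z - z_i) into polynomial
# coefficients (highest power first); these are the array weights,
# with the leading weight equal to 1.
weights = np.poly(zeros)   # [w2, w1, w0] for AF = w2*z^2 + w1*z + w0

def af(theta_deg):
    """Evaluate the array factor at an angle (degrees from the z-axis)."""
    z = np.exp(1j * np.pi * np.cos(np.deg2rad(theta_deg)))
    return np.polyval(weights, z)

print(abs(af(45)))    # ~0: null placed as specified
print(abs(af(120)))   # ~0: second null as specified
print(abs(af(90)))    # nonzero response at broadside
```

The same recipe works for any set of up to N-1 null directions: map each angle to a zero, expand the polynomial, and read off the weights.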
This method can easily be performed on linear arrays with many more elements, and the Schelkunoff Polynomial Method extends readily to planar and multi-dimensional arrays. The simplicity of placing nulls in the radiation pattern is a powerful advantage of using arrays in practice.
2. The Least Mean Square (LMS) Algorithm
In this section, the LMS algorithm is introduced. This algorithm was developed by Bernard Widrow in the 1960s, and was the first widely used adaptive algorithm. It is still widely used in adaptive digital signal processing and adaptive antenna arrays, primarily because of its simplicity, ease of implementation, and good convergence properties.
The goal of the LMS algorithm is to produce the MMSE (minimum mean-squared error) weights for the given environment. The definitions of the terms used on this page follow those from the MMSE page, which should be understood before reading further. The LMS algorithm adaptively produces weights that minimize the mean-squared error between a desired signal and the array's output; loosely speaking, it tries to maximize reception in the direction of the desired signal (whoever or whatever the array is trying to communicate with) and minimize reception from interfering or undesirable signals.
Just as in the MMSE weighting case, some information is needed before the optimal weights can be determined: the desired signal's direction and power. The direction is specified via the desired signal's steering vector, and the power is the desired signal's average received power. Note
that these parameters can vary with time, as the environment is assumed to be
changing. The directions and power can be determined using various direction
finding algorithms, which analyze the received signals at each antenna in order to
estimate the directions and power.
Recall that the Mean-Squared Error between the desired signal d(k) and the array's output Y(k) = W^H X(k) can be written as:

MSE(W) = E[ |d(k) - W^H X(k)|^2 ]

The gradient (vector derivative with respect to the weight vector) can be written as:

∇MSE = 2 ( R W - r )

where R = E[ X X^H ] is the autocorrelation matrix of the received signals and r = E[ X d* ] is the cross-correlation between the received signals and the desired signal.
The LMS algorithm then approximates the gradient of the MSE by replacing these statistical averages with their single-sample estimates, R ≈ X(k) X(k)^H and r ≈ X(k) d*(k), giving:

∇MSE ≈ 2 X(k) [ X(k)^H W - d*(k) ]
The adaptive weights will be written as W(k), where k is an index that specifies
time. The LMS weighting algorithm simply updates the weights by a small
amount in the direction of the negative gradient of the MSE function. By moving
in the direction of the negative gradient, the overall MSE is decreased at each time
step. In this manner, the weights iteratively approach the optimal values that
minimize the MSE. Moreover, since the adaptive algorithm is continuously
updating, as the environment changes the weights adapt as well.
The weights are updated at regular intervals, and the weight vector at time k+1 is related to the one at time k by:

W(k+1) = W(k) - μ ∇MSE
The parameter μ (the step size) controls the size of the steps the weights take, and affects the speed of convergence of the algorithm. To guarantee convergence, μ should be less than 2 divided by the largest eigenvalue of the autocorrelation matrix R.
Substituting in the estimate of the gradient above, the LMS update can be written as a simple iterative equation:

W(k+1) = W(k) + 2μ X(k) [ d*(k) - X(k)^H W(k) ]
The algorithm's simplicity is the primary reason for its widespread use. The above update equation does not require any complex math; it just uses the current samples of the signal received at each antenna, X(k).
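As a sketch, the whole update loop is only a few lines. The scenario below (4 elements, a desired signal from broadside, one interferer from 45 degrees, the noise level, and the step size) is an illustrative assumption, not the configuration behind the figures in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4       # number of antenna elements, half-wavelength spacing
mu = 0.01   # LMS step size
K = 3000    # number of time steps

def steering(theta_deg):
    # phase of a unit plane wave at each element, angle from the array axis
    return np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(theta_deg)))

a_d = steering(90)   # desired signal arrives from broadside
a_i = steering(45)   # interferer arrives from 45 degrees

w = np.ones(N, dtype=complex)   # start with a weight vector of all ones
for k in range(K):
    d = rng.choice([-1.0, 1.0])        # known desired symbol d(k)
    s_i = rng.choice([-1.0, 1.0])      # interfering symbol
    noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    x = a_d * d + a_i * s_i + noise    # received snapshot X(k)
    y = np.vdot(w, x)                  # array output W^H X (vdot conjugates w)
    e = d - y                          # error against the desired signal
    w = w + mu * x * np.conj(e)        # LMS weight update

# After adaptation the array favors the desired direction over the interferer:
print(abs(np.vdot(w, a_d)), abs(np.vdot(w, a_i)))
```

After adaptation, the response toward the desired direction should be much larger than toward the interferer, without the algorithm ever forming the autocorrelation matrix explicitly.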
The algorithm starts from a weight vector of all ones (the starting weight vector ideally has no impact on the final result):

W(0) = [1, 1, ..., 1]^T
Using random noise at every step, the algorithm is stepped forward from the initial weight vector. The resulting MSE at each time step is shown in the following figure,
relative to the optimal MSE.
The LMS algorithm is fairly efficient at moving towards the optimal weights in this case. Since the algorithm uses an approximation of the autocorrelation matrix at each time step, some of the steps actually increase the MSE. However, on
average, the MSE decreases. This algorithm is also fairly robust to changing
environments.
Several adaptive algorithms have expanded upon ideas used in the original LMS
algorithm. Most of these algorithms seek to produce improved convergence
properties at the expense of increased computational complexity. For instance, the
recursive least-square (RLS) algorithm seeks to minimize the MSE just as in the
LMS algorithm. However, it uses a more sophisticated update to find the optimal
weights that is based on the matrix inversion lemma. Both of these algorithms (and all others based on the LMS algorithm) attempt to converge to the same optimal weights.