
An antenna array (often called a 'phased array') is a set of 2 or more antennas.

The signals from the antennas are combined or processed in order to achieve
improved performance over that of a single antenna. The antenna array can be
used to:
 increase the overall gain
 provide diversity reception
 cancel out interference from a particular set of directions
 "steer" the array so that it is most sensitive in a particular direction
 determine the direction of arrival of incoming signals
 maximize the Signal to Interference Plus Noise Ratio (SINR)
An antenna array is a set of N spatially separated antennas. The number of
antennas in an array can be as small as 2, or as large as several thousand (as in
the AN/FPS-85 Phased Array Radar Facility operated by the U.S. Air Force). In
general, the performance of an antenna array (for whatever application it is
being used) increases with the number of antennas (elements) in the array; the
drawback of course is the increased cost, size, and complexity.

The following figures show some examples of antenna arrays.

Figure 1. Four-element microstrip antenna array.


Figure 2. Cell-tower array. These are typically used in groups of 3 (2 receive
antennas and 1 transmit antenna).

The general form of an array can be illustrated as in Figure 3. An origin and
coordinate system are selected, and then the N elements are positioned, each at
a location given by:

$\mathbf{r}_i = (x_i, y_i, z_i), \quad i = 1, 2, \ldots, N$

The positions are illustrated in the following Figure.


Figure 3. Geometry of an arbitrary N element antenna array.

Let $X_1, X_2, \ldots, X_N$ represent the outputs from antennas 1 through N,
respectively. The outputs from these antennas are most often multiplied by a set
of N weights $w_1, w_2, \ldots, w_N$ and added together, as shown in Figure 4.

Figure 4. Weighting and summing of signals from the antennas to form the
output.

The output of an antenna array can be written succinctly as:

$Y = \sum_{i=1}^{N} w_i X_i$
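As a quick numerical sketch of this weighted sum (the received samples and the uniform weights below are made-up values, purely for illustration):

```python
import numpy as np

# Hypothetical complex baseband samples from N = 4 antennas (made-up values).
x = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j, 0.9 + 0.1j])

# Uniform weights, a simple illustrative choice.
w = np.ones(4) / 4

# Array output: the weighted sum of the antenna outputs.
y = np.sum(w * x)
```

With uniform weights this is just the average of the four received samples; non-uniform weights are what give the array its directional behavior, as discussed below.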
This is what is going on in an antenna array. However, we haven't yet said what
the benefits of doing this are. To understand what happens in an antenna array,
consider the following example.

To understand the benefits of antenna arrays, we will consider a set of 3
antennas located along the z-axis, receiving a single plane wave arriving at an
angle $\theta$ relative to the z-axis, as shown in Figure 1.


Figure 1. Example 3-element array receiving a plane wave.

The antennas are spaced one-half wavelength apart (centered at z = 0). The E-
field of the plane wave (assumed to have a constant amplitude everywhere) can
be written as:

$E(x, y, z) = E_0 e^{-j\mathbf{k}\cdot\mathbf{r}}$

In the above, k is the wave vector, which specifies the variation of the phase as
a function of position.

The (x, y) coordinates of each antenna are (0, 0); only the z-coordinate changes
from antenna to antenna. Further, assuming that the antennas are isotropic
sensors, the signal received by each antenna is proportional to the E-field at
the antenna's location. Hence, for antenna i at height $z_i$, the received
signal is:

$X_i = E_0 e^{-jk z_i \cos\theta}$

The received signals differ only by a complex phase factor, which depends on
the antenna separation and the angle of arrival of the plane wave. If the
signals are summed together (taking $E_0 = 1$ and the antennas at
$z = -\lambda/2, 0, +\lambda/2$), the result is:

$Y = \sum_{i=1}^{3} X_i = e^{j\pi\cos\theta} + 1 + e^{-j\pi\cos\theta} = 1 + 2\cos(\pi\cos\theta)$

The interesting thing is to plot the magnitude of Y versus $\theta$, the angle
of arrival of the plane wave. The result is given in Figure 2.
Figure 2. Magnitude of the output as a function of the arrival angle.

Figure 2 shows that the array actually processes signals better in some
directions than others. For instance, the array is most receptive when the angle
of arrival is 90 degrees. In contrast, near 45 and 135 degrees (the exact nulls
fall at $\theta = \arccos(\pm 2/3) \approx 48^\circ$ and $132^\circ$), the
antenna array has zero output power, no matter how much power is in the
incident plane wave. In this manner, a directional radiation pattern is
obtained even though the antennas were assumed to be isotropic. Even though
this was shown for receiving antennas, due to reciprocity, the transmitting
properties would be the same.
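This direction-dependent response is easy to reproduce numerically. The sketch below (assuming isotropic elements and unit weights, matching the example above) sums the phase-shifted signals and returns the output magnitude for a given arrival angle:

```python
import numpy as np

def array_output(theta_deg, n=3, spacing=0.5):
    """Magnitude of the summed output of an n-element line array on the
    z-axis (element spacing in wavelengths), for a plane wave arriving at
    theta_deg degrees from the z-axis. Isotropic elements, unit weights."""
    theta = np.radians(theta_deg)
    k = 2 * np.pi                                 # wavenumber, wavelength = 1
    z = (np.arange(n) - (n - 1) / 2) * spacing    # positions centered at z = 0
    # Each element sees the same wave, shifted by the phase k * z * cos(theta).
    signals = np.exp(-1j * k * z * np.cos(theta))
    return abs(np.sum(signals))
```

At 90 degrees the three signals add in phase, giving |Y| = 3, while at arccos(±2/3) the signals cancel completely.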

The value and utility of an antenna array lies in its ability to determine (or
alter) the received or transmitted power as a function of the arrival angle. By
choosing the weights and geometry of an array properly, the antenna array can
be designed to cancel out energy from undesirable directions and receive
energy most sensitively from other directions.

Before considering weight and geometry selection, we first turn to the
fundamental function of array theory: the Array Factor.
We'll now derive the most important function in array theory, the Array
Factor. Consider a set of N identical antennas oriented in the same direction,
each with radiation pattern given by $F(\theta, \phi)$. Assume that element i
is located at the position:

$\mathbf{r}_i = (x_i, y_i, z_i), \quad i = 1, 2, \ldots, N$

Suppose (as in Figure 4 above) that the signal from each element is multiplied
by a complex weight $w_i$ and then summed with the others to form the array
output, Y.

The output of the array will vary based on the angle of arrival of an incident
plane wave (as described above). In this manner, the array itself is a spatial
filter: it filters incoming signals based on their angle of arrival. The output
Y is a function of $(\theta, \phi)$, the arrival angle of a wave relative to
the array. In addition, if the array is transmitting, the radiation pattern
will be identical in shape to the receive pattern, due to reciprocity.

Y can be written as:

$Y = \sum_{i=1}^{N} w_i F(\theta, \phi) e^{j\mathbf{k}\cdot\mathbf{r}_i}$

where $\mathbf{k}$ is the wave vector of the incident wave. The above equation
can be factored simply as:

$Y = F(\theta, \phi) \sum_{i=1}^{N} w_i e^{j\mathbf{k}\cdot\mathbf{r}_i} = F(\theta, \phi) \cdot AF$

The quantity AF is the Array Factor. It is a function of the positions of the
antennas in the array and the weights used. By tailoring these parameters, the
array's performance may be optimized to achieve desirable properties. For
instance, the array can be steered (changing the direction of maximum radiation
or reception) by changing the weights.

Using the steering vector
$\mathbf{v}(\theta, \phi) = \left[e^{j\mathbf{k}\cdot\mathbf{r}_1}, e^{j\mathbf{k}\cdot\mathbf{r}_2}, \ldots, e^{j\mathbf{k}\cdot\mathbf{r}_N}\right]^T$,
the AF can be written compactly as:

$AF = \mathbf{w}^T \mathbf{v}(\theta, \phi)$

In the above, T is the transpose operator. We'll now move on to weighting
methods (selection of the weights) used in antenna arrays, where some of the
versatility and power of antenna arrays will be shown.
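As a sketch of how the weights steer the array (the helper names and the 5-element geometry below are my assumptions for illustration): choosing the weights as the conjugate of the steering vector at a desired angle makes $AF = \mathbf{w}^T \mathbf{v}$ peak in that direction.

```python
import numpy as np

def steering_vector(theta, z):
    """Steering vector of a line array with element positions z (in
    wavelengths), for a wave arriving at angle theta from the z-axis."""
    return np.exp(1j * 2 * np.pi * z * np.cos(theta))

z = 0.5 * np.arange(5)                 # 5 elements, half-wavelength spacing

# Steer toward 60 degrees: conjugate-phase (phase-compensating) weights,
# normalized so the peak of |AF| is 1.
theta0 = np.radians(60)
w = np.conj(steering_vector(theta0, z)) / 5

def array_factor(theta):
    # AF = w^T v(theta): a dot product of the weights and steering vector.
    return np.dot(w, steering_vector(theta, z))
```

At the steered angle every term of the sum adds in phase, so |AF| reaches its maximum of 1 there; at other angles the terms partially cancel.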

Side Note: If the elements are identical (the array is made up of all the same
type of antenna) and have the same physical orientation (all point or face the
same direction), then the radiation (or reception) pattern of the antenna
array is simply the Array Factor multiplied by the radiation pattern of a
single element, $F(\theta, \phi)$. This concept is known as pattern
multiplication.
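A minimal numeric sketch of pattern multiplication (assuming, purely for illustration, short-dipole-like elements with pattern sin(theta) and a uniformly weighted 4-element line array):

```python
import numpy as np

def element_pattern(theta):
    # Assumed element pattern: sin(theta), as for a short dipole along z.
    return np.sin(theta)

def array_factor(theta, n=4, spacing=0.5):
    # Uniformly weighted line array along z, spacing in wavelengths.
    z = spacing * np.arange(n)
    return np.sum(np.exp(1j * 2 * np.pi * z * np.cos(theta)))

def total_pattern(theta):
    # Pattern multiplication: array pattern = element pattern * array factor.
    return abs(element_pattern(theta) * array_factor(theta))
```

Broadside (theta = 90 degrees) gives the element's peak times |AF| = 4, while along the z-axis the element pattern's own null forces the total pattern to zero regardless of the array factor.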

A weighting method is a means of selecting the weights that multiply the
signals from the antennas:

$w_1, w_2, \ldots, w_N$

The weights are fundamental in controlling the behavior of the array. Some
methods are now presented, which also serve to illustrate the versatility of
antenna arrays.


1. Schelkunoff Polynomial Method
Instead of steering an antenna array (in which case we want to
receive or transmit primarily in a particular direction), suppose instead we want
to ensure that a minimum of energy goes to particular directions. The weights of
an antenna array can be selected such that the radiation pattern has nulls (zero
energy transmitted or received) in particular directions. In this manner,
undesirable directions of interference, jamming signals, or noise can be reduced
or completely eliminated.

It turns out that this isn't very hard to do, either. In general, an N element array can
place N-1 independent nulls in its radiation pattern. This just requires a little math
to work through, and will be illustrated via an example. Let's assume we have an
N element linear array with uniform spacing equal to a half-wavelength and lying
on the z-axis.

To start, the array factor of a uniformly spaced linear array with half-wavelength
spacing can be rewritten using the variable substitution $z = e^{j\pi\cos\theta}$ as:

$AF = \sum_{n=0}^{N-1} w_n z^n$

The above equation is simply a polynomial in the (complex) variable z. Recall that
a polynomial of order N has N zeros (which may be complex). The polynomial for
the AF above is of order N-1, and therefore has N-1 zeros. If the zeros are
numbered starting from zero, they are $z_0, z_1, \ldots, z_{N-2}$. The AF is then
rewritten in factored form as:

$AF = w_{N-1} \prod_{n=0}^{N-2} (z - z_n)$

We've introduced new variables and eliminated the weights. Hence, we can
choose the zeros to be whatever we want, and then work out what the weights
must be to give the same pattern.

To make the example concrete, let N = 3. Suppose we want the array's radiation
pattern to have zeros at 45 and 120 degrees. We simply use the equation for z
above, and substitute these values for the angle. We then obtain the
corresponding zeros:

$z_0 = e^{j\pi\cos 45^\circ}, \qquad z_1 = e^{j\pi\cos 120^\circ}$

For simplicity, we'll let $w_2 = 1$. The AF then becomes:

$AF = (z - z_0)(z - z_1) = z^2 - (z_0 + z_1)z + z_0 z_1$

This AF must equal the original AF, so:

$w_0 + w_1 z + w_2 z^2 = z_0 z_1 - (z_0 + z_1)z + z^2$

The weights can then be easily read off:

$w_0 = z_0 z_1, \qquad w_1 = -(z_0 + z_1), \qquad w_2 = 1$

We already know what the zeros are, so we automatically have the weights. Using
these weights to plot the magnitude of the array factor, we obtain the result in
Figure 1.

Figure 1. Magnitude of array factor.
Figure 1. Magnitude of array factor.

Observe that the radiation pattern has zeros at 45 and 120 degrees, exactly as we
specified. This method can be used for whatever directions you want; however, if
N-1 nulls are selected for an N element array, the designer no longer has control
over where the maximum of the radiation pattern occurs.

This method can easily be applied to linear arrays with many more elements. The
Schelkunoff Polynomial Method also extends readily to planar and
multi-dimensional arrays. The simplicity of placing nulls in the radiation
pattern is a powerful advantage of using arrays in practice.
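The procedure above is easy to automate. The sketch below (the function names are mine) builds the polynomial from the chosen zeros with `np.poly` and reads the weights off its coefficients, for the same half-wavelength line array:

```python
import numpy as np

def schelkunoff_weights(null_angles_deg):
    """Weights for a half-wavelength-spaced line array with nulls at the
    given angles. N-1 nulls give an N element array; the leading weight
    is set to 1, as in the example above."""
    zeros = np.exp(1j * np.pi * np.cos(np.radians(null_angles_deg)))
    # np.poly expands prod(z - z_n) into coefficients, highest power first.
    return np.poly(zeros)

def array_factor(w, theta_deg):
    # Evaluate AF at the substitution variable z = exp(j * pi * cos(theta)).
    z = np.exp(1j * np.pi * np.cos(np.radians(theta_deg)))
    return np.polyval(w, z)

w = schelkunoff_weights([45, 120])
```

Evaluating the resulting array factor confirms nulls at 45 and 120 degrees and a nonzero response elsewhere.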

2. Least Mean-Square Error (LMS) Adaptive Weights
Antennas (and antenna arrays) often operate in dynamic
environments, where the signals (both desired and interfering) arrive from
changing directions and with varying powers. As a result, adaptive antenna arrays
have been developed. These antenna arrays employ an adaptive weighting
algorithm that adapts the weights based on the received signals to improve the
performance of the array.

In this section, the LMS algorithm is introduced. This algorithm was developed by
Bernard Widrow in the 1960s, and was the first widely used adaptive algorithm. It
is still widely used in adaptive digital signal processing and adaptive antenna
arrays, primarily because of its simplicity, ease of implementation, and good
convergence properties.

The goal of the LMS algorithm is to produce the MMSE weights for the given
environment. The definitions of all the terms used on this page follow those from
the MMSE page, which should be understood before reading this page. The goal
of the LMS algorithm is to adaptively produce weights that minimize the mean-
squared error between a desired signal and the array's output; loosely speaking, it
tries to maximize reception in the direction of the desired signal (who or what the
array is trying to communicate with) and minimize reception from the interfering
or undesirable signals.

Just as in the MMSE case, some information is needed before optimal weights can
be determined. And just as in the MMSE weighting case, the required information
is the desired signal's direction and power. The direction is specified via the
desired signal's steering vector, $\mathbf{v}_d$, and the signal power is written
as $\sigma_d^2$. Note that these parameters can vary with time, as the
environment is assumed to be changing. The directions and powers can be
determined using various direction finding algorithms, which analyze the
received signals at each antenna in order to estimate the directions and powers.

Recall that the Mean-Squared Error (MSE) between the desired signal and the
array's output can be written as:

$MSE = \sigma_d^2 - \mathbf{W}^H \sigma_d^2 \mathbf{v}_d - \sigma_d^2 \mathbf{v}_d^H \mathbf{W} + \mathbf{W}^H \mathbf{R} \mathbf{W}$

where $\mathbf{R}$ is the autocorrelation matrix of the received signals and
$\mathbf{W}$ is the weight vector. The gradient (vector derivative with respect
to the weight vector) can be written as:

$\nabla_{\mathbf{W}} MSE = 2\left(\mathbf{R}\mathbf{W} - \sigma_d^2 \mathbf{v}_d\right)$

The LMS algorithm requires an estimate of the autocorrelation matrix in order to
obtain weights that minimize the MSE. The LMS algorithm estimates the
autocorrelation matrix $\mathbf{R}$ using only the current received signal at each
antenna (specified by the vector $\mathbf{X}$). The weights are updated
iteratively, at discrete instants of time denoted by an index k. The estimate of
the autocorrelation matrix at time k, written with a bar overhead, is simply:

$\bar{\mathbf{R}}(k) = \mathbf{X}(k)\mathbf{X}^H(k)$

The LMS algorithm then approximates the gradient of the MSE by substituting in
the above simple approximation for the autocorrelation matrix:

$\nabla_{\mathbf{W}} MSE \approx 2\left(\mathbf{X}(k)\mathbf{X}^H(k)\,\mathbf{W}(k) - \sigma_d^2 \mathbf{v}_d\right)$

The adaptive weights will be written as W(k), where k is an index that specifies
time. The LMS weighting algorithm simply updates the weights by a small
amount in the direction of the negative gradient of the MSE function. By moving
in the direction of the negative gradient, the overall MSE is decreased at each time
step. In this manner, the weights iteratively approach the optimal values that
minimize the MSE. Moreover, since the adaptive algorithm is continuously
updating, as the environment changes the weights adapt as well.

The weights are updated at regular intervals, and the weight at time k+1 is related
to that at time k by:

$\mathbf{W}(k+1) = \mathbf{W}(k) - \mu \nabla_{\mathbf{W}} MSE$

The parameter $\mu$ controls the size of the steps the weights take, and affects
the speed of convergence of the algorithm. To guarantee convergence, it should be
less than 2 divided by the largest eigenvalue of the autocorrelation matrix.
Substituting in the estimate of the gradient above, the LMS update algorithm can
be written as a simple iterative equation:

$\mathbf{W}(k+1) = \mathbf{W}(k) - 2\mu\left(\mathbf{X}(k)\mathbf{X}^H(k)\,\mathbf{W}(k) - \sigma_d^2 \mathbf{v}_d\right)$
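The eigenvalue bound on the step size can be checked numerically. In this sketch the received snapshots are random made-up data standing in for real antenna measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 hypothetical snapshots of a 5-antenna array (stand-in data).
X = rng.standard_normal((5, 100)) + 1j * rng.standard_normal((5, 100))

# Sample autocorrelation matrix of the received signals.
R = (X @ X.conj().T) / 100

# Convergence requires the step size to be below 2 / lambda_max.
lam_max = np.linalg.eigvalsh(R)[-1].real   # eigvalsh sorts ascending
mu_bound = 2 / lam_max
mu = 0.1 * mu_bound                        # a conservative choice inside the bound
```

In practice a step size well inside the bound trades some convergence speed for lower steady-state misadjustment.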

The algorithm's simplicity is the primary reason for its widespread use. The above
update equation does not require any complex math; it just uses the current
samples of the received signal at each antenna ($\mathbf{X}$).

Example of LMS Algorithm
Assume a linear array of antennas, with half-wavelength spacing and N=5
elements in the array. We'll assume the Signal-to-Noise Ratio (SNR) is 20 dB and
that the noise is Gaussian and independent from one antenna to the next. Assume
there are two interferers arriving from 40 and 110 degrees, with an interfering
power of 10 dB (relative to the desired signal). The desired signal is assumed to
come from 90 degrees.

The algorithm starts by assuming a weight vector of all ones (ideally, the starting
weight vector has no impact on the end result):

$\mathbf{W}(0) = [1, 1, 1, 1, 1]^T$

The convergence parameter $\mu$ is chosen to be a small value satisfying the
eigenvalue bound above.

Using random noise at every step, the algorithm is stepped forward from the initial
weight. The resulting MSE at each time step is shown in the following figure,
relative to the optimal MSE.
The LMS algorithm is fairly efficient in moving towards the optimal weights for
this case. Since the algorithm uses an approximation of the autocorrelation matrix
at each time step, some of the steps actually increase the MSE. On average,
however, the MSE decreases. The algorithm is also fairly robust to changing
environments.
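A simulation along these lines can be sketched as follows. The scenario parameters match the example above; the step size, iteration count, and helper names are my assumptions. It runs the LMS update with the one-snapshot estimate $\mathbf{X}\mathbf{X}^H$ and compares the adapted array's response toward the desired and interfering directions:

```python
import numpy as np

rng = np.random.default_rng(1)

def steering(theta_deg, n=5, spacing=0.5):
    z = spacing * np.arange(n)
    return np.exp(1j * 2 * np.pi * z * np.cos(np.radians(theta_deg)))

n_elem = 5
v_d = steering(90)                      # desired signal from 90 degrees
v_int = [steering(40), steering(110)]   # interferers from 40 and 110 degrees
sig_pow = 1.0                           # desired signal power
int_pow = 10.0                          # interferers 10 dB above the signal
noise_pow = 0.01                        # 20 dB SNR per antenna

w = np.ones(n_elem, dtype=complex)      # all-ones starting weight vector
mu = 5e-4                               # small step size (assumed value)

def snapshot():
    """One received vector: desired + interferers + independent noise."""
    def symbol():
        return (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    x = np.sqrt(sig_pow) * symbol() * v_d
    for v in v_int:
        x = x + np.sqrt(int_pow) * symbol() * v
    noise = rng.standard_normal(n_elem) + 1j * rng.standard_normal(n_elem)
    return x + np.sqrt(noise_pow / 2) * noise

for _ in range(3000):
    x = snapshot()
    # Gradient estimate with R replaced by the one-snapshot X X^H.
    grad = x * np.vdot(x, w) - sig_pow * v_d
    w = w - mu * grad

# Array response (w^H v) toward each direction after adaptation.
resp_desired = abs(np.vdot(w, v_d))
resp_int = max(abs(np.vdot(w, v)) for v in v_int)
```

After adaptation the response toward the interferers is far below the response toward 90 degrees, which is the spatial-filtering behavior described above.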

Several adaptive algorithms have expanded upon ideas used in the original LMS
algorithm. Most of these algorithms seek to produce improved convergence
properties at the expense of increased computational complexity. For instance, the
recursive least-square (RLS) algorithm seeks to minimize the MSE just as in the
LMS algorithm. However, it uses a more sophisticated update to find the optimal
weights, based on the matrix inversion lemma. Both of these algorithms (and all
others derived from the LMS algorithm) attempt to converge to the same optimal
weights.
