Developments in Radar Imaging

DALE A. AUSHERMAN, Member, IEEE
ADAM KOZMA, Member, IEEE
JACK L. WALKER, Member, IEEE
Environmental Research Institute of Michigan

HARRISON M. JONES
ENRICO C. POGGIO
M.I.T. Lincoln Laboratory

Using range and Doppler information to produce radar images is a technique used in such diverse fields as air-to-ground imaging of objects, terrain, and oceans and ground-to-air imaging of aircraft, space objects, and planets. A review of the range-Doppler technique is presented along with a description of radar imaging forms, including details of data acquisition and processing techniques.

I. INTRODUCTION

The purpose of this paper is to discuss various types of imaging radars. These radars take a number of forms according to the intended application. The forms range from synthetic aperture radars (SARs) carried on moving platforms, which are intended to be used to image strips or patches of terrain, to stationary radars for imaging objects placed on rotating platforms, objects moving by the radar such as aircraft or orbiting objects, or celestial objects like the Moon and planets.

Although these radars take different forms and have various applications, all are coherent radars which utilize the range-Doppler principle to obtain the desired image. That is, the image is made using conventional techniques to obtain fine-range resolution and using the Doppler frequency gradient generated by the rotation of the object field relative to the radar to obtain a cross-range resolution that is much finer than that obtainable by the radar's beamwidth.

In this tutorial paper, we give an introduction to range-Doppler radar imaging and briefly describe various forms this technique takes. A historical perspective of the development of the imaging technique, along with a number of examples, is given in Section II. In Section III, we develop the fundamentals of range-Doppler imaging in detail and discuss various processing approaches which deal with motion through resolution cells. We treat the general three-dimensional case, including the concept of three-dimensional processing. In Section IV, we include a detailed discussion of radar imaging techniques, including the data acquisition and details of the data-processing techniques.

A. Introduction to Radar Imaging Concepts


The Doppler frequency gradient required to obtain fine cross-range resolution is generated by the motion of the object relative to the radar; this motion is generated in a variety of ways which can be related to the simplified case of a stationary monostatic radar illuminating a rotating object. Fig. 1 portrays a three-dimensional object as projected into the x-y plane, with the object rotating with a uniform angular motion about the z axis. However, as discussed in the following paragraphs, such restrictive assumptions can be removed and three-dimensional bodies rotating about an arbitrary axis with nonuniform rotation rates can be imaged. In addition, bistatic radar operation also can be accommodated.

Manuscript received March 13, 1984; revised June 26, 1984; released for publication July 9, 1984.

This work was supported by the U.S. Air Force, the U.S. Army, and the U.S. Navy. The U.S. Government assumes no responsibility for the information presented.

Authors' addresses: D.A. Ausherman, A. Kozma, J.L. Walker, Environmental Research Institute of Michigan, P.O. Box 8618, Ann Arbor, MI 48107; H.M. Jones and E.C. Poggio, Lincoln Laboratory, Massachusetts Institute of Technology, P.O. Box 73, Lexington, MA 02173.

0018-9251/84/0700-0363 $1.00 © 1984 IEEE

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, VOL. AES-20, NO. 4, JULY 1984

If the object, contained within the beam of the radar, is rotating about the point A at ω radians per second and the coherent radar is located a distance r_a from the object, then the range to an object point with initial (t = 0) coordinates (r_0, θ_0, z_0) is given by

r = [r_a² + r_0² + 2 r_a r_0 sin(θ_0 + ωt) + z_0²]^(1/2).  (1)

If the distance to the object is much larger than the size of the object (r_a >> r_0, z_0), a good approximation is
r ≈ r_a + x_0 sin ωt + y_0 cos ωt.  (2)

Similarly, the Doppler frequency f_d of the returned radar signal is

f_d = (2/λ) dr/dt = (2x_0ω/λ) cos ωt − (2y_0ω/λ) sin ωt  (3)

where λ is the radar wavelength.

If the radar data are processed over a very small time interval centered at t = 0, (2) and (3) can be approximated as

r ≈ r_a + y_0  (4)

f_d = 2x_0ω/λ.  (5)

Therefore, by analyzing the returned radar signal in terms of range delay and Doppler frequency, the (x_0, y_0) components of the position of the point scatterer can be estimated. The surfaces of constant range are parallel planes perpendicular to the radar line of sight (RLOS), and the surfaces of constant Doppler are parallel planes parallel to the plane formed by the rotation axis and the RLOS. This constitutes the usual range-Doppler imaging procedure. The presence of the object rotation rate ω in (5) implies that in order to obtain a properly scaled image of the object, the magnitude of ω must be known. Most techniques for estimating the rotation rate depend on a priori knowledge and/or analysis of periodicities in the radar signal level. Another implicit assumption is that the distance r_a from the radar antenna to the center of the object is constant and known. In applications where r_a is a function of time, the effects of time-varying range must be removed from the received signal in the radar receiver and/or processor.

Fig. 1. Range-Doppler imaging. (Figure labels: coherent radar, rotation center A, distance r_a, isorange planes, iso-Doppler planes.)

The resolution in range is achieved by conventional means using a train of short or long coded pulses which provide a range resolution ρ_r determined by the bandwidth B_w of the pulse. Hence

ρ_r ≈ c/2B_w  (6)

where c is the velocity of propagation of the radar energy.

We see from (5) that we can achieve a cross-range resolution Δx = ρ_a if we can measure Doppler frequencies with a resolution of

Δf_d = 2ωρ_a/λ.  (7)

Since a frequency resolution Δf_d requires a coherent processing time interval of approximately ΔT = 1/Δf_d, cross-range resolution is given by

ρ_a = λ/2ωΔT = λ/2Δθ  (8)

where Δθ = ωΔT is the angle through which the object rotates during the coherent processing time.

Fine cross-range resolution implies coherent processing over a large Δθ; however, (2) and (3) indicate that both the range and Doppler frequency of a particular point scatterer can vary greatly over a large processing interval. This means that during a processing time interval sufficiently long to give the desired cross-range resolution, points on the rotating object may move through several resolution cells. Therefore, the usual range-delay measurement and Doppler-frequency analysis implied by (4) and (5) will result in degraded imagery for the large processing interval case.

To avoid image degradation caused by motion through resolution cells while using the simple range-Doppler analysis described above, we must limit the size of the coherent processing time ΔT. In the special case described above (constant rotation rate and RLOS perpendicular to the axis of rotation), no motion through a range resolution cell and a Doppler resolution cell will occur if

ΔT < 2ρ_r/ωD_a  (9)

and

ΔT < ω⁻¹(λ/D_r)^(1/2)  (10)

respectively, where D_r and D_a are the maximum range and cross-range dimensions, respectively, of the object. Consequently, one must limit the resolution of the imaging system such that

ρ_a² > λD_r/4  (11)

ρ_a ρ_r > λD_a/4.  (12)

In general, the image scene dimensions are not the only parameters regulating the extent of the coherent processing interval and hence the cross-range resolution of conventional range-Doppler images. When the angular rate is variable and/or the radar range directions are not coplanar in a coordinate system that rotates with the object, the constraint of no motion through a Doppler resolution cell (10) may have to be modified to a more stringent one, leading to even smaller values of ΔT.
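The relations (6)-(12) can be checked numerically. The sketch below evaluates them for an illustrative geometry; the radar and object parameters are assumptions chosen for the example, not values from the text.

```python
# Numerical check of (6)-(12): range and cross-range resolution and the
# coherent-processing-time limits that avoid motion through resolution cells.
# All parameter values below are illustrative assumptions.
import math

c = 3.0e8            # propagation velocity, m/s
lam = 0.03           # radar wavelength (X band), m -- assumed
B_w = 150e6          # pulse bandwidth, Hz -- assumed
w = 0.1              # object rotation rate, rad/s -- assumed
D_r = D_a = 10.0     # object extent in range / cross range, m -- assumed

rho_r = c / (2 * B_w)                  # (6): range resolution
dT = 1.0                               # chosen coherent processing time, s
d_theta = w * dT                       # rotation during processing
rho_a = lam / (2 * d_theta)            # (8): cross-range resolution

dT_range_limit = 2 * rho_r / (w * D_a)             # (9): no motion through a range cell
dT_doppler_limit = (1 / w) * math.sqrt(lam / D_r)  # (10): no motion through a Doppler cell

ok_11 = rho_a ** 2 > lam * D_r / 4     # (11)
ok_12 = rho_a * rho_r > lam * D_a / 4  # (12)

print(f"rho_r = {rho_r:.2f} m, rho_a = {rho_a:.2f} m")
print(f"dT limits: {dT_range_limit:.2f} s (range), {dT_doppler_limit:.2f} s (Doppler)")
```

With these numbers the chosen ΔT of 1 s respects the range-cell limit (9) but violates the Doppler-cell limit (10), and correspondingly (11) fails: the 0.15-m cross-range resolution implied by (8) could not be reached by simple Doppler analysis, and compensation for motion through cells would be required.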
Often, a finer cross-range resolution is desired, and hence points in the object move through range and/or Doppler resolution cells during the coherent integration time. In this case, simple frequency analysis will yield degraded imagery; the effect of motion through resolution cells must be compensated. Several techniques of compensating for this motion through cells have been developed over the years. These range from linear piecewise approximations to account for the motion, to sophisticated "extended" methods, to elegant methods of formatting the data prior to the image formation processing. Some of these methods are discussed in Sections III and IV.

B. Applications of Radar Imaging

Application of these principles has yielded various forms of range-Doppler radars as stated above. The well-known stripmap SAR technique [1] is a special case where the Doppler gradient is achieved by the relative rotation produced by scanning an antenna fixed to a moving platform over a strip of terrain. This is illustrated in Fig. 2. Here a coherent radar carried on a moving platform at a velocity v illuminates a stationary point O on the terrain at broadside range r_a from the flight line. The point O is first illuminated by the forward edge of the antenna beam and is last illuminated when the aft edge of the beam passes by the point. The apparent total rotation of the object in the neighborhood of point O is equal to the angle subtended by the radar's antenna beam, which is approximately Δθ = λ/L, where L is the length of the radar antenna in cross range. Using this relation in (8) yields the well-known formula for the cross-range resolution of an SAR, namely, ρ_a = L/2.

Fig. 2. Stripmap synthetic aperture radar.

With the stripmap SAR, the apparent rotation rate of the object (i.e., the relative rotation between the object and the RLOS) is not constant and hence the Doppler frequency produced by a scatterer is a function of time. To achieve a fine cross-range resolution, it is necessary to make a correction for the change in frequency. Such a correction is often called "focusing the synthetic aperture array" since it is equivalent to supplying an essentially quadratic phase shift to the synthetic array generated by the radar as it moves past the terrain. Motion through resolution cells does not occur if Δr, as shown in Fig. 2, is smaller than the desired range resolution. Under this condition, the quadratic phase correction described above is by itself sufficient to give the desired resolution. This condition holds for most SARs; however, for SARs [2] which operate at extreme ranges, such as the NASA satellite-borne SEASAT [3] radar, range cell migration correction is also required. Correction for both range cell migration and Doppler frequency change of scatterers can be accomplished by two-dimensional correlation of the received signals with a replica of the expected return from a fixed point in each resolution cell in the scene. This cross-correlation function's magnitude plotted as a function of position in the scene is the displayed image.

Rotating platform radars also fit the model shown in Fig. 1 (except that the RLOS is not always perpendicular to the rotation axis). These radars are often used to obtain radar cross-section measurements and to produce images to obtain radar signatures [4, 5]. Processing to eliminate the effects of migration through resolution cells is almost always required. A form of airborne terrain mapping radar, called the spotlight radar, also fits this simple model [6-10] (except that the relative rotation rate can vary). In this form, the radar is carried in a moving vehicle and the antenna illuminates a fixed spot on the terrain from a continuously changing look angle, as shown in Fig. 3. If the gross Doppler due to the change of distance from the aircraft to the center of the spot is compensated, it can be shown that the spot of terrain can be treated as a rotating object field illuminated by a distant stationary radar.

Fig. 3. Spotlight synthetic aperture radar.
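The substitution Δθ = λ/L into (8) can be verified in a couple of lines. The antenna lengths and wavelengths below are illustrative assumptions; the point is that the result L/2 is independent of wavelength and range.

```python
# Stripmap SAR cross-range resolution: the apparent rotation seen by a point
# equals the real beamwidth d_theta = lambda / L, so (8) gives
# rho_a = lambda / (2 * d_theta) = L / 2.
def stripmap_cross_range_resolution(wavelength_m, antenna_length_m):
    d_theta = wavelength_m / antenna_length_m   # beamwidth = total apparent rotation
    return wavelength_m / (2 * d_theta)         # eq. (8)

# Independent of wavelength: a 2-m antenna gives 1-m resolution at X or L band.
for lam in (0.03, 0.23):            # X band and L band -- illustrative values
    for L in (1.0, 2.0, 4.0):
        assert stripmap_cross_range_resolution(lam, L) == L / 2

print(stripmap_cross_range_resolution(0.03, 2.0))
```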



Ground-based radars [11] which image moving vehicles such as aircraft or objects moving in orbit also fit the model, provided the radar tracks the object in its trajectory. The gross Doppler due to the trajectory is removed and the Doppler gradient appears as if the motion were due only to the rotation of the object relative to the RLOS. This relative rotation is caused by the translational motion of the target along its trajectory and by the rotational motion of the target itself (both rotations must be described in the same coordinate system). The trajectory and rotational motions which provide the Doppler gradient are not always known a priori, and one of the main problems in the image formation is to correctly estimate these motions from the radar data.

An important ground-based application of range-Doppler radar is to image the Moon or planets in radar astronomy [12]. The technique is called delay-Doppler imaging in astronomy. The radar is located at a fixed site on the Earth and illuminates the Moon or a planet. Contours of constant delay appear as annuli on the planet, as shown in Fig. 4. Contours of constant Doppler appear as straight-line strips parallel to the rotation axis of the planet. The intersection of an annulus and a strip defines the delay-Doppler resolution patches, as shown in black in the figure. The size of the resolution patches is determined by the bandwidth of the pulse modulation and the Doppler-frequency resolution, or by the coherent integration time. The image obtained using this technique is ambiguous since the returns from A and B at the intersection of an annulus and a strip cannot be distinguished from each other. However, using an interferometer, one patch can be nulled with respect to the other and the reflectivity of a patch can be isolated.

Fig. 4. Delay-Doppler imaging in radar astronomy, after Green [32]. (Figure labels: axis of rotation, annulus defined by time delay, strip defined by Doppler, receding edge, approaching edge.)

It is also possible to form images of objects which have no appreciable trajectory motion relative to the radar. Instead, the natural undulation of the object, such as a rocking motion of a ship due to wave action, is used to create the Doppler gradient. In this type of imaging, which has been called inverse synthetic aperture radar (ISAR) [13], it is also necessary to make an estimate of the magnitude and direction of the undulation from the data since these parameters are unknown.

II. HISTORICAL PERSPECTIVE

A. Synthetic Aperture Radar

The earliest statement that Doppler-frequency analysis could be used to obtain fine cross-range resolution is attributed to Carl Wiley of the Goodyear Aircraft Corporation in June 1951. At the same time, a group at the University of Illinois [14] was conducting experimental studies which revealed that radar returns from certain terrain samples produced frequency spectra containing sharp lines. In a report dated March 1952, they noted these lines were due to strong fixed targets within the beam of the observing radar and concluded this effect could provide a radar system with greatly improved angular resolution. This group constructed an X-band radar and in early 1953 used it to produce a radar map using frequency analysis techniques to obtain high resolution in cross range. The radar that produced this map was an unfocused system; that is, there was no phase correction provided to compensate for the changing Doppler frequency [9].

In 1953, an Army summer study, called Project Wolverine, was convened at the University of Michigan for the purpose of recommending research and development programs leading to better battlefield surveillance techniques. As a part of this study, the Doppler-frequency technique was examined in more detail. Participants in this study included representatives from universities and industry, including the Universities of Illinois and Michigan, Goodyear, Philco, General Electric, and Varian. The result of this study was a development program which proceeded, under Army sponsorship, to further develop the range-Doppler radar principle.

A part of this program was to develop a practical data processor which could accept wideband signals and carry out the necessary Doppler-frequency analysis at each resolvable range interval so that a useful image could be produced. A group at the Willow Run Laboratories of the University of Michigan, under L.J. Cutrona, was assigned the problem of developing an optical computer for this purpose. Processing techniques considered by other groups included electronic processors, recirculating delay lines, and storage tubes.

In the ensuing years, the Willow Run group constructed an X-band radar and built an optical computer. The equipment was completed in the summer of 1957 and the first fully focused SAR map was produced in August 1957. Very soon after this, the Army requested that a demonstration system be constructed. This system, the AN/UPD-1, was produced by the

Willow Run group in conjunction with Texas Instruments. Five radar systems were built and various demonstration flights were conducted in the spring of 1960 [1].

In subsequent years, the state of the art of SAR for the military was further developed by a number of organizations. Currently, a SAR is used as a standard reconnaissance tool by the Air Force. This radar system, called the UPD-4, was built by Goodyear Aerospace [15]. In late 1972, a three-wavelength SAR was included in the Apollo 17 lunar mission. The objectives of the Apollo 17 Lunar Sounder Experiment (ALSE) were to detect subsurface geologic structures, to generate a continuous lunar profile, and to image the Moon at radar wavelengths. A great deal of important data on the surface and subsurface features were gathered during this experiment [16]. During the last decade, SAR has also been applied to such diverse civilian applications as terrain mapping [17, 18], oceanography [19-21], and ice studies [22, 23]. In 1978, NASA launched the SEASAT satellite which carried an L-band SAR. During its relatively short life, it imaged many parts of the world and provided a great deal of important data to oceanographers and other scientists [24-26]. An example of the type of image produced by this instrument is shown in Fig. 5. NASA is continuing to develop SAR for space applications with its shuttle imaging radar (SIR) series. SIR-A was carried aboard the shuttle flight in November 1981 [27] and plans for subsequent flights for SIR-B and SIR-C are being carried out. In addition, the European Space Agency [28], Canada [29], and Japan [30] have announced intentions to place SARs in orbit during the next decade.

Fig. 5. A 100-km by 100-km frame from the L-band SEASAT SAR collected on August 19, 1978. It shows the English Channel (Strait of Dover) between Rams Gate Head on the left and the French coast in the vicinity of Dunkerque and Calais on the right. The linear features in midchannel and the distinctive surface patterns around Rams Gate Head both are the result of tidal currents flowing over sand ridges at the bottom of the channel. The ground resolution of the image is 25 x 25 m (courtesy of NASA/JPL).

Recently, ERIM¹ has built a SAR designed to support engineering operations in the Arctic. This system, called the sea ice and terrain assessment radar (STAR), is currently being operated by Intera, Ltd., in support of two Canadian oil companies drilling in the Beaufort Sea. A block diagram of this radar and a picture of the equipment are shown in Figs. 6 and 7, respectively. The system is installed in a light twin-engine aircraft and flies mapping missions of the ice fields surrounding the drilling rigs. The data are processed in real time in the aircraft by an analog/digital processor, and the ice map is telemetered to a ground station where a mosaic of the area surrounding the drill rig is assembled. This map is used by ice experts aboard the rig to assess the ice conditions. A sample of the type of imagery produced by this system is shown in Fig. 8.

¹In 1973, the Willow Run Laboratories separated from the University of Michigan and became the Environmental Research Institute of Michigan (ERIM), a not-for-profit research organization.

B. Radar Astronomy

Independent of the work that was being done in SAR, Green formulated the concept of delay-Doppler imaging in the 1950s with the aim of improving the resolution of the radars being used for making measurements of the Moon and planets [31, 32]. In the late 1950s, Pettengill used the technique to produce radar images of the Moon [33]. He used the Millstone Hill radar, operating coherently at 440 MHz, to produce 26 range cells of 75-km resolution each across the Moon. In each range cell, Pettengill was able to resolve Doppler frequencies to ±1/10 Hz by processing a series of pulses existing over a 10-s duration. In 1961, several organizations obtained radar echoes from Venus [34-38]. In addition, radar contacts have been made with Mercury and Mars [39-42].

The planet Venus can be imaged with good sensitivity only near inferior conjunction, i.e., when Venus is approximately between the Earth and the Sun. At this distance, even the narrow beam of the National Astronomy and Ionosphere Center's Arecibo radar, produced by the 300-m dish, has about twice the diameter of the planet; so Doppler is needed to obtain good cross-range resolution. An image of Venus using Arecibo data taken in 1975, 1977, and 1980 is shown in Fig. 9.

C. Imaging of Orbiting Objects

In the early 1960s, it was recognized that the range-Doppler technique could be applied to imaging of orbiting objects. A radar for this purpose, called the synthetic spectrum radar, was built by Westinghouse under Defense Advanced Research Projects Agency (DARPA) sponsorship. This radar was an instantaneously narrowband radar which used frequency stepping

Fig. 6. Block diagram of the STAR system. The radar operates at X band and uses a swept YIG oscillator to generate a linear frequency-modulated pulse for transmission. The bandwidth of the pulse over 30 μs is 30 MHz and 15 MHz for 6- and 12-m resolution, respectively. The returned pulse is compressed by a separate SAW device for each resolution. The range swath covered is 22.4 km and 44.7 km at 6- and 12-m range resolution, respectively. The azimuth compression is performed by the digital real-time signal processor which produces seven 6-m resolution images which are incoherently superimposed. These data are sent via a downlink to a ground station where the stripmap is recorded on film and on a tape recorder. The image data are also recorded aboard the aircraft.
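The Millstone Hill numbers quoted in Section II-B are mutually consistent with (6) and the Δf_d ≈ 1/ΔT relation from Section I. The short check below uses only values stated in the text; it inverts the resolution relations and says nothing about the actual waveform used.

```python
# Consistency check of the Moon-imaging figures quoted for the Millstone Hill
# radar: 75-km range cells and 0.1-Hz Doppler resolution from a 10-s
# coherent integration.
c = 3.0e8                                # propagation velocity, m/s

range_cell = 75e3                        # m, from the text
pulse_bandwidth = c / (2 * range_cell)   # invert (6): B_w = c / (2 * rho_r)

integration_time = 10.0                  # s, from the text
doppler_resolution = 1.0 / integration_time   # Delta_f_d ~ 1 / Delta_T

print(f"required pulse bandwidth ~ {pulse_bandwidth:.0f} Hz")
print(f"Doppler resolution ~ {doppler_resolution} Hz")
```

The 75-km cells thus imply a modulation bandwidth of only about 2 kHz, and the quoted ±1/10-Hz Doppler resolution follows directly from the 10-s integration.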

techniques to achieve a wide bandwidth. In the late 1960s, Rome Air Development Center (RADC) developed the Floyd Site radar for imaging orbiting objects. This radar was built by General Electric and the processing techniques were developed by Syracuse Research Corporation.

A 94-GHz radar for space object identification (SOI) was constructed by Aerospace Corporation in the 1960s. This radar has a 1-GHz bandwidth and produces a time-bandwidth product of 10^6 using a 1-ms pulse length [43].

The first high-quality images of near-Earth space objects were obtained in the early 1970s using data collected by the ARPA, Lincoln Laboratory, C-band, observables radar (ALCOR). These data were processed by Lincoln Laboratory and the Syracuse Research Corporation. Even though ALCOR was not designed for radar imaging, successful results were made possible by the 50-cm range resolution, by coherent data recording, and by sufficient sensitivity to image low altitude satellites.

In the middle 1970s, the success of the early ALCOR results persuaded DARPA to sponsor an SOI program at Lincoln Laboratory. Included in this program were upgrades to the ALCOR radar, such as an increase in PRF to 200 Hz and the ability to record pulse compressed data in up to three adjacent 30-m range windows. Data acquisition procedures and range-Doppler image processing efforts for many classes of near-Earth space objects were fully developed.

In the late 1970s, the results of the ALCOR SOI program led to the development of the long-range imaging radar (LRIR) [11] at Lincoln Laboratory. Once the LRIR became operational, significant image processing developments were achieved. The LRIR is an X-band radar with a bandwidth which is 10 percent of the center frequency. It was specifically designed to be able to image satellites at synchronous range. The wide bandwidth allows for 25-cm range resolution, and the maximum PRF of about 1000 Hz allows for imaging of rapidly rotating space objects and provides added imaging sensitivity.

Significant progress was made in the late 1970s and early 1980s in processing data from the LRIR. A technique called extended coherent processing (ECP) was developed. ECP is an efficient general imaging technique which speeds up processing of image data and allows
carrying out new applications such as wide-angle imaging, stroboscopic imaging, and three-dimensional imaging.

Fig. 7. A view of the STAR system as installed in a Cessna 441 Conquest aircraft. The rack on the left contains the radar control computer, the VISOR hard copy recorder, and the antenna control. The lower part of the rack in the middle is the RF and mounted atop is the controller for the real-time signal processor. The remainder of the processor along with the buffer/presummer is located in a rack further aft which is not shown in the picture. The rack to the right contains the downlink formatter, the downlink, and the tape recorder. The small rack forward contains the Litton LTN-76 inertial measurement unit. The slotted array antenna is located in a radome under the aircraft. Total weight of the system is 340 kg.

Fig. 8. STAR imagery of an area in western Pennsylvania, south of Altoona, shows the radar's 6- by 12-m resolution wide swath mode (44.7 km). The sensor was flown at a 26 000-ft altitude. Evitts Mountain and Dunning Mountain are the ridges running south to north on the left; to the right (east of these) is the Juniata River.

D. Rotating Platform Imaging

In the early 1960s, work began in the development of techniques for imaging rotating objects at the Willow Run Laboratories under Brown. This interest was stimulated by the results of a summer study on space object identification sponsored by the Electronics Systems Division of the Air Force. Brown recognized that such radar imaging is substantially equivalent to SAR since SAR can be described in terms of a general pulse-Doppler radar for which the range-Doppler image corresponds to a geometric image of the scene [44].

A rotating platform and a coherent ground-based radar were built and work was carried out in data gathering and the development of data-processing techniques under Army and Air Force sponsorship. The principal data-processing problem addressed was processing in the presence of motion through resolution cells. The processing technique devised consisted of taking the Fourier transform of the range data, followed by a gentle distortion of the range transform plane. After these steps, a two-dimensional Fourier transform was used to produce the image [45].

Walker began work with this rotating platform in 1970. His work resulted in a more general formulation of the range-Doppler imaging theory and the introduction of the polar-format storage technique which solved the general problem of processing with motion through resolution cells. In addition, extensive experimental results were produced [5].

The rotating platform radar facility used for this and subsequent work is shown in Fig. 10. The facility currently has the capability of forming images using radar illumination at center frequencies of 10 GHz, 35 GHz, and 94 GHz. A radar image of a vehicle produced by the facility, along with an optical image, is shown in Fig. 11 [46].

In addition to the work just discussed, Mensa et al. [47, 48] at the Pacific Missile Test Center and a group under Wehner at Naval Ocean Systems Center have worked on imaging of rotating objects [49], as have Chen and Andrews [50, 51]. Recently, a number of authors have studied the relation between techniques used in tomography and range-Doppler imaging [52-54]. Their conclusion is that range-Doppler imaging can be analyzed using the projection-slice theorem from computer-aided tomography (CAT). Conversely, it has been suggested that processing techniques borrowed from tomography may advance the state of radar processing techniques [55].

III. RANGE-DOPPLER IMAGING FUNDAMENTALS

In Section I, we introduced the basic concept of using range and Doppler (range-rate) time signals to provide two-dimensional images of a rigid object field. In this section, we develop in more detail the principles of range-Doppler imaging of rotating objects to serve as a background for subsequent discussions of general imaging radar configurations. The fundamentals presented here involve a three-dimensional imaging geometry with separate (bistatic) transmitting and receiving antennas moving along arbitrary trajectories. Important special



Fig. 9. Radar imagery of the surface of Venus reveals the varied and complex nature of its surface terrain. This mosaic was obtained with the 12.6-cm radar interferometer of the National Astronomy and Ionosphere Center and covers the area from 30°N to 70°N latitude and from 100°W to 40°E longitude. The large radar-dark pear-shaped feature at top center is Planum Lakshmi, a broad flat plateau surrounded by steep scarps. The very bright feature to its right is Maxwell Montes, which measures 750 km north to south and includes the planet's highest elevation, 11 km above the planetary mean (courtesy D.B. Campbell, NAIC).

cases such as stripmap SAR, spotlight SAR, and space-object imaging with a fixed radar are treated in Section IV.

A. General Three-Dimensional Radar Imaging

In this section, we consider a more general range-Doppler imaging situation involving a bistatic transmitter/receiver configuration and a three-dimensional rigid object as shown in Fig. 12. Both the object and the antennas can have arbitrary motion, although only the relative motion of the scatterers with respect to the antennas is important for the radar imaging methods considered here. For vehicle-borne terrain imaging radars, this motion is often measured by means of inertial navigation-based systems and supplemented by data-derived motion estimates as required. For ground-based space-object imaging radars, the motion is usually derived by fitting radar data to obtain precise models which describe the object's orbital and rotational motion.

Fig. 10. Rotating platform radar facility uses separate transmitting and receiving horns shown located on the tower. The tower is located about 40 m from the platform which is about 6 m in diameter and has a rotation period of 168 s. The radar transmitter and receiver are located inside the building.

The fundamental task of a radar imaging system is to estimate the reflectivity σ of each element of the object as a function of the spatial coordinate r_0. That is, the
reflectivity function is to be approximated by an image function G(r0) which is calculated from the returned radar signals. Because of the limitations of the radar data, the function G(r0) will be a blurred representation of σ(r0). This blurring is characterized by the "point target response" function h(r0), which is the image function G(r0) calculated from the signal returned from an isolated point scatterer.

To achieve good image quality, it is important that |h| have its maximum value over r0 corresponding to the location of the point scatterer and have as sharp a peak as possible with low sidelobes. In general, objects of interest contain many elemental scatterers and, under the assumption of linearity, the image G can be represented as a superposition of point target response functions.
For a transmitted signal s(t), the signal received from a point scatterer is

sr(t) = σ s(t − (R1 + R2)/c) (13)

where R1 + R2 is the time-varying two-way range to the object point, and σ is the reflectivity associated with the point. An image of the object can be achieved if R1 + R2 is a different function of time for each point on the object. In principle, the total signal received from all elements of the object can be cross correlated with a set of reference functions of the form given by (13) to produce such an image, G(r0).

Fig. 11. Rotating platform radar image and optical image of Volkswagen. The radar image is a superposition of data obtained over a 360° rotation of the table.
In practice, various approximations and limiting assumptions are often made which have led to a number of different methods for processing the received radar data to form an image of the scene. For example, in Section III, we discussed the conventional range-Doppler approximation for a two-dimensional rotating scene (where the RLOS was perpendicular to the rotation axis) and showed that y0 and x0 were directly related to range and range-rate measurements made over sufficiently small time intervals, which leads to a relatively simple range and Doppler-frequency analysis type of signal processor.

Fig. 12. Three-dimensional radar imaging geometry showing bisector vector. (Symbols with overbars correspond to boldface symbols in text.)

In this section, we consider larger coherent processing time intervals in order to achieve fine resolution over large scenes, and therefore more general image formation methods are required. All of the image formation methods described are based on the same fundamental process of measuring range and changes in range to produce image resolution. Some are distinguished from one another by virtue of the different approximations which are made to minimize hardware complexity and/or maximize processing speed. Others are merely different mathematical formulations of the same fundamental technique, such as time-domain (spatial-domain) versus frequency-domain analysis. We have not attempted herein to provide a complete taxonomy of radar image formation techniques but will review four representative methods to serve as a background for the

AUSHERMAN ET AL: DEVELOPMENTS IN RADAR IMAGING 371


more detailed description of radar imaging techniques in Section IV.

(1) Pulse-by-Pulse Correlation Imaging. The cross-correlation image function G(r0), calculated over a set of discrete pulses of radar data, using (13) as the reference signal, can be written as a sum of single-pulse cross-correlation functions. The cross-correlation function for the pth pulse can be expressed very simply as a phase-corrected pulse-compressed radar return sampled at the bistatic range R(r0, p) = (R1 + R2)/2 to the point r0 in the object (see Fig. 12). To permit this simple calculation, the pulse-compression system should be configured to give a response from a point scatterer located at r0 which has the constant phase [4πR(r0, p)/λ + constant] across the main peak of the response. The additive constant can be ignored. From such a pulse-compression system, the response from a point scatterer at range Rs would have the form S[R] = A(R − Rs) exp[j4πRs/λ], where A(R − Rs) is a real function with its peak at R = Rs. The usual practical implementation of pulse compression for a chirp waveform results in a weak quadratic dependence of phase on the sampling range R. We assume such effects to be negligible. For convenience, the pulse-compressed signal is calibrated so that a point scatterer's radar cross section (RCS) is given by the peak value of A^2.

Under the above circumstances, the cross-correlation function over the set of pulses {p} is given by

G(r0) = Σ_p W(p) S[R(r0, p)] exp[−j4πR(r0, p)/λ]. (14)

For each point r0, the range is calculated from the known motion of the object, the transmitter, and the receiver at the time on target t(p) of the pth pulse. The symbol S[R(r0, p)] denotes the pulse-compressed return sampled at the calculated range R(r0, p). This calculated range also determines the phase correction in (14). The real weights W(p) can be used in various ways to optimize the image quality. They can be used to suppress cross-range sidelobes. If data from multiple target rotations are used, they can, in some cases, be selected to suppress cross-range ambiguous images. The weights are normalized W(p) ≈ 1 so that in the image of a point scatterer, the peak value of |G|^2 is the scatterer's RCS.

Except for the effects of sidelobe suppression weights used in pulse compression and the possibly nonuniform weights W(p) in (14), this function G(r0) optimizes the signal-to-noise ratio for detecting a scatterer at r0. It also does well in separating scatterers from each other if the point target response function h(r0) has low sidelobes and a single sharp peak within the extent of the object. Since the function G(r0) is linear in the received signal, the effects of the scatterers in the target are linearly superposed in the complex function G. If signal saturation is avoided and the signal quantization step is small compared with the noise, the only nonlinear effects to confuse the image are those that occur physically in the scattering of the signal from the object, such as shadowing and multiple scattering.

For image processing, it is convenient to rewrite (14) in terms of the relative range

D(p) = D(r0, p) = R(r0, t) − R(0, t) (15)

where 0 is the origin of the displacement vector r0. In Fig. 12, 0 is the origin of the x, y, z coordinate system, and R(0, t) = (|r1| + |r2|)/2. This coordinate system origin can be any convenient point in the object. In terms of D(p), (14) becomes

G(r0) = Σ_p W(p) S̃[D(p)] exp[−j4πD(p)/λ] (16)

where

S̃[D(p)] = S[R(r0, p)] exp[−j4πR(0, p)/λ]. (17)

The formulation of G(r0) in terms of the relative range separates out the phase corrections depending on ranges to the origin of the coordinate system (17) from those depending on aspect (16). The term aspect denotes an orientation of the RLOS relative to the target. If the radar is bistatic, the aspect depends on the orientations of both lines of sight relative to the target, i.e., the orientation of the bisector vector rb shown in Fig. 12. The remainder of this paragraph discusses only the monostatic case, but similar conclusions can be reached in the bistatic case. That the phase corrections in (16) depend only on aspect can be understood by noting that, since imaging object sizes are usually much smaller than radar ranges, the far-field approximation of electromagnetic scattering theory is valid. In this case, the relative range D(r0) is given, to an excellent approximation, by the scalar product between the vector r0 and the unit vector along the RLOS direction. Consequently, for a given point r0, D(r0) depends only on aspect.

Furthermore, it is well known that radar returns at one far-field range and at a given target aspect can be predicted from returns measured at other far-field ranges, at the same aspect, by making a phase correction for the range difference. The calibration of |S(p)|^2 to give the RCS includes the usual range-squared amplitude correction. Thus the returns S̃(p) obtained from calculating (17) would be the same regardless of the ranges at which the returns S(p) were obtained. Consequently, one can conclude that the properties of G(r0) depend mainly on the target aspects sampled by the data and (to a lesser extent) on the weights W(p) used in calculating the image.

It can be shown that the formulation of the image function in (14) is equivalent to the backprojection processing method [54], a common tool in the field of CAT. The backprojection algorithm applied to a coherent imaging system forms an image via a coherent summation (for each resolvable image element) of samples of multiple functions representing the total reflectivity of the scene as projected onto the line of sight to the scene.

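The pulse-by-pulse correlation of (14) and (16) can be illustrated with a short numerical sketch. The following Python fragment is illustrative only: the wavelength, aspect interval, the sinc-shaped compressed pulse standing in for A(R − Rs), and the scatterer positions are all invented, and the far-field relative range of (15) is computed directly as the projection of each scene point onto the RLOS for a monostatic, two-dimensional rotating scene.

```python
import numpy as np

# Invented parameters for illustration (not from the paper)
wavelength = 0.03                        # m, X-band
angles = np.linspace(-0.05, 0.05, 128)   # RLOS aspect angles (rad) over the aperture
range_res = 0.5                          # m, width of the compressed pulse A(R - Rs)
scatterers = [(-1.0, 0.5), (1.5, -0.5)]  # point scatterers (x cross-range, y range), m

# Image grid; the scene is small, so the far-field projection applies
xs = np.linspace(-2.0, 2.0, 81)
ys = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(xs, ys)

G = np.zeros_like(X, dtype=complex)
for theta in angles:
    # Relative range D of each grid point: projection onto the RLOS, as in eq. (15)
    D = -X * np.sin(theta) + Y * np.cos(theta)
    # Simulated pulse-compressed return sampled at D: sum of A(D - Ds) exp(j 4 pi Ds / lambda)
    S = np.zeros_like(D, dtype=complex)
    for (x0, y0) in scatterers:
        Ds = -x0 * np.sin(theta) + y0 * np.cos(theta)
        S += np.sinc((D - Ds) / range_res) * np.exp(1j * 4 * np.pi * Ds / wavelength)
    # Phase-corrected coherent accumulation over pulses, eq. (16) with W(p) = 1
    G += S * np.exp(-1j * 4 * np.pi * D / wavelength)

iy, ix = np.unravel_index(np.argmax(np.abs(G)), G.shape)
peak = (xs[ix], ys[iy])  # the image magnitude peaks at a scatterer location
```

Each pass of the loop plays the role of one pulse: the return is sampled at the predicted relative range of every grid point and the predicted phase is removed before summation, which is exactly the coherent backprojection interpretation given above.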
Thus the backprojection algorithm is equivalent to the operation implied by (16), where S̃(p) exp[−j4πD(p)/λ] is the projected reflectivity of the scene. The phase adjustment is required to account for the propagation effects associated with measuring projected reflectivity from a remote location.

(2) Multiple-Subaperture Processing. Equation (14) can be used in principle to calculate well-focused images of scenes or objects of arbitrary dimensions, using arbitrarily long coherent data intervals. In many practical applications, however, the pulse-by-pulse correlation imaging method, which is computationally inefficient, can be reliably replaced by a more efficient method known either as subaperture image processing in spotlight SAR applications, or as extended coherent processing in rotating space-object applications.

In this method, the sum over pulses is replaced by a coherent sum of conventional range-Doppler images calculated over smaller subintervals of the total coherent processing data. The size of these subintervals (subapertures) is chosen to be sufficiently small that no motion through resolution cells occurs for their duration. With subintervals of such size, the range-Doppler images can be calculated by FFT processing, which is at least one order of magnitude faster than pulse-by-pulse processing.

The subimages are subsequently aligned in range and range-rate to account for the relative motion of scatterers occurring between separate subintervals. The extended image is obtained by coherently summing all aligned subimages.

A more detailed description of the structure of such an algorithm is presented in Section IVB dealing with imaging of rotating space objects.

(3) Multiple-Subpatch Processing. As was described previously, the migration of points through resolution cells can be avoided if one chooses sufficiently small coherent processing time intervals and/or if the object size is sufficiently small. The previously described multiple-subaperture method relies on a sequence of conventional range-Doppler processing operations over short time intervals followed by a coherent summation to form the final image. Similarly, one can achieve fine resolution over scenes larger than those permitted by the inequalities (11) and (12) if the large scene is divided into an array of smaller subpatches. We then compensate for the motion between the radar and the center of each subpatch, and the situation reduces to the case of an array of smaller rotating scenes.

The division of the large scene into smaller scenes involves dividing the range extent of the target field into a number of subswaths and partitioning the total Doppler spectrum into a number of frequency sub-bands followed by the usual Doppler-frequency analysis of each sub-band to form the final set of subimages. One particular implementation of this method has been called a two-stage FFT [56] or, more generally, the multiple-subpatch approach. In any case, by choosing the diameter D of the subpatches to be

D ≤ 4ρ^2/λ (18)

an image of the entire scene with resolution ρ can be achieved by a final mosaicking operation.

In practice, the multiple-subpatch method is most applicable to vehicle-borne radar imaging of large scenes. An example of how this method can be implemented for processing spotlight mode radar data is described in Section IV.

(4) Polar Format Processing. Another method [5] for dealing with the problem of motion through resolution cells involves interpreting the radar data in an appropriate three-dimensional spatial frequency space. The radar pulses are first converted to a range-frequency form (Fourier transform of compressed range data) which corresponds to polar line segments in the three-dimensional frequency space of the target. Each segment is oriented according to the angular coordinates of the radar at the time of transmission. Depending on the relative motion of the radar and target during the time that a sequence of pulses is transmitted, a portion of the three-dimensional frequency space is collected (usually a curved surface). An image of the target can then be formed by taking a three-dimensional Fourier transform of the collected data.

The fundamental features of this method can be derived by observing that for each compressed range pulse u(t), the complex signal received from a target field is given by

sr(t) = ∫ σ(r0) u(t − (R1 + R2)/c) dr0 (19)

where R1 + R2 is the two-way range to the differential scattering volume element dr0, located at r0, as shown in Fig. 12, and where σ(r0) is the reflectivity density and, for convenience, includes two-way propagation effects and various system gains. The integration is carried out over the volume of the target.

If we take the Fourier transform of this range data,

Sr(f) = ∫ sr(t) exp[−j2πft] dt (20)

we obtain

Sr(f) = U(f) ∫ σ(r0) exp[−j2πf(R1 + R2)/c] dr0 (21)

where U(f) represents the non-negative frequency response in range. Furthermore, we have assumed that R1 + R2 does not change significantly during a range pulse.

The time-varying effects of the two-way range (r1 + r2) to the origin can be removed by multiplying the received signal with a reference function proportional to

Mref = exp[+j2πf(r1 + r2)/c]. (22)
This represents the fundamental motion compensation step of the radar imaging system and, as is discussed later, must be performed with great precision to produce high-quality imagery. If the ranges to the transmitter and receiver (r1 and r2) are large compared with the size of the object, we can let (with r̂1 = r1/|r1| and r̂2 = r2/|r2|)

R1 = |r1 − r0| ≈ r1 − r0 · r̂1 (23)

R2 = |r2 − r0| ≈ r2 − r0 · r̂2 (24)

and the resulting range-frequency data can then be expressed as

Sr(f) exp[+j2πf(r1 + r2)/c] = U(f) ∫ σ(r0) exp[+j(4πf/c) rb · r0] dr0 (25)

where rb is the transmitter/receiver bisector vector as indicated in Fig. 12 and is given by

rb = (r̂1 + r̂2)/2. (26)

We have assumed that the antennas are moving relative to the object and that therefore rb varies slowly from pulse to pulse.

Fig. 13. Signal surface in frequency space corresponding to changes in aspect angle during radar data collection. (a) Object space. (b) Frequency space.

An examination of (25) indicates that each radar pulse produces a polar line segment of the three-dimensional Fourier transform of the target reflectivity function σ(r0) by proper interpretation of frequency space. That is, we can define a three-dimensional spatial frequency variable f as

f = (2f/c) rb. (27)

This implies that the radar data for a sequence of pulses can be represented in three-dimensional frequency space as

S(f) = H(f) ∫ σ(r0) exp[+j2πf · r0] dr0 (28)

where H(f) is the three-dimensional aperture function. The effective length of each polar line segment of the aperture is determined by the bandwidth of the transmitted signal U(f). As the radar observes the target from different aspects (ψb, κb), indicated in Fig. 12, f maps out a surface in three-dimensional space which constitutes the complete three-dimensional aperture function of the imaging system.

The bistatic path shown in Fig. 13(a) is determined by the pointing direction of the bisector vector rb as the transmitting and receiving antennas move along their trajectories. For the monostatic spotlight mode case, the bistatic path then reduces to the path of the vehicle carrying the spotlight radar and rb corresponds to the RLOS.

An image of the target, i.e., an estimate of σ(r0), is achieved by carrying out an inverse Fourier transform of S(f). The image G resulting from this operation was indicated previously as being characterized by the point target response function h which is the three-dimensional Fourier transform of H(f). Ideally, h should have a very narrow extent in all three dimensions, i.e., a three-dimensional delta function, which would imply that the aperture function H should be unity over the entire frequency space. This occurs only in the limit where an infinite bandwidth signal is transmitted and returns are collected over all aspect angles (0 ≤ ψb ≤ 2π, 0 ≤ κb ≤ π).

In practical cases, only a small portion of the frequency space is observed, as is depicted in Fig. 13(b), with the attendant limitation on the point target response in each dimension. For example, straight line flight paths produce planar data collection surfaces and the general three-dimensional processing problem reduces to a two-dimensional Fourier transformation with a resulting two-dimensional image of the object, i.e., no resolution in the direction normal to the collection plane. A wide variety of radar configurations [57] can be envisioned for observing other portions of the three-dimensional frequency space; e.g., a stationary two-dimensional array of mutually coherent wideband radars would generate samples in a volume and a single moving continuous wave (CW) radar would sample the frequency space only along a curve.
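The polar-format interpretation of (25)-(28) can be sketched numerically for a two-dimensional monostatic case. In this illustrative Python fragment the radar parameters and scatterer positions are invented, and the inverse transform is evaluated as a direct Fourier sum over the collected polar samples rather than by interpolation to a rectangular grid followed by an FFT:

```python
import numpy as np

c = 3.0e8
f0, bw = 10e9, 1e9                       # invented: 10 GHz center frequency, 1 GHz bandwidth
freqs = np.linspace(f0 - bw / 2, f0 + bw / 2, 64)
thetas = np.linspace(-0.05, 0.05, 64)    # monostatic RLOS aspect angles (rad)
scatterers = [(-0.8, 0.4), (0.6, -0.3)]  # (x cross-range, y range), m

# Each pulse/frequency sample lands on a polar raster in spatial frequency, eq. (27)
TH, FR = np.meshgrid(thetas, freqs)
fx = (2.0 * FR / c) * np.sin(TH)
fy = (2.0 * FR / c) * np.cos(TH)

# Synthesize the frequency-space data of eq. (28) with H(f) = 1 over the samples
S = np.zeros_like(fx, dtype=complex)
for (x0, y0) in scatterers:
    S += np.exp(2j * np.pi * (fx * x0 + fy * y0))

# Image formation: inverse Fourier sum of the collected samples over an image grid
xs = np.linspace(-1.5, 1.5, 61)
ys = np.linspace(-1.0, 1.0, 41)
G = np.array([[np.sum(S * np.exp(-2j * np.pi * (fx * x + fy * y))) for x in xs]
              for y in ys])

iy, ix = np.unravel_index(np.argmax(np.abs(G)), G.shape)
peak = (xs[ix], ys[iy])  # peak falls at a scatterer location
```

With a 1 GHz bandwidth and a 0.1 rad aspect interval, the range and cross-range resolutions of this sketch are both roughly c/(2 BW) = λ/(2Δθ) ≈ 0.15 m, so the two scatterers are well separated in the image.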
It can be shown mathematically that when the object is small compared with the radar ranges (as assumed above) and when the data aspects are closely spaced so that good imaging is possible, then the image function, defined as the inverse three-dimensional Fourier transform of S(f), is essentially equivalent to the cross-correlation function given by (14).

The independent derivation of this image function given in this subsection describes an alternative way to form the image. This derivation also shows the resolution properties of all these equivalent methods of image formation. It provides a useful context in which to deal with optimization of resolution by adjusting the weighting function H within the boundaries in the three-dimensional frequency space set by the available data. It also permits quick iterative calculations relating resolution to the available data aspects and to the radar bandwidth. For example, to obtain approximately equal image resolutions in three orthogonal directions in object space, one needs an approximately cubic boundary in frequency space outside of which the aperture function H vanishes. If a monostatic radar has a bandwidth that is 10 percent of the center frequency, the radial extent of this cube is a tenth of its distance from the origin. The other two dimensions of the cube must correspond approximately to a solid angle of aspects measuring 0.1 rad by 0.1 rad.

As an illustration of the three-dimensional processing concept for the rotating object case (mathematically equivalent to a fixed object and moving antenna), let us consider an object consisting of three point scatterers as indicated in Fig. 14(a). The resulting "three-dimensional image" for the three object points is also indicated in Fig. 14(c) as three cone-like distributions whose points of concentration correspond to the locations of the object points (2 on the x,y plane and 1 above). If we project the data stored on the conical surface onto the fx, fy plane in frequency space as shown in Fig. 14(d) and follow with a two-dimensional Fourier transform, we obtain the image shown in Fig. 14(e). By the projection-slice theorem [54], this is equivalent to an x,y plane slice in three-dimensional image space.

Fig. 14. Illustration of the three-dimensional processing concept. (a) Point targets in object space. (b) Data surface in frequency space. (c) Three-dimensional image. (d) Projected data in frequency space. (e) X,Y plane slice of image.

Although a three-dimensional data collection and processing approach can be used to obtain images of three-dimensional objects free from degradations caused by motion through resolution cells even in very general radar configurations, two-dimensional processing approaches are often desirable for practical implementations. This stems in part from processing speed considerations and the operational difficulty in obtaining video signal samples over a large volume of processor space.

Two-dimensional processing is optimum when the relative radar/object motion is such that the bistatic vector rb remains in a plane and/or if the object points to be imaged lie on a plane. In the latter case, the three-dimensional data are projected onto the plane containing the object points selected for optimum focus, as shown in Fig. 14(d). Scattering centers of the object which are located out of the selected compensation plane, sometimes called the focused target plane (FTP), will be degraded in the final image. This degradation is expected from projection-slice considerations or by observing that the



relative spacing of the three-dimensional fringe structure in frequency space associated with each object point is preserved after projection only for points located on the compensation plane.

Fig. 15. (a) Target aspects sampled by RLOS when both κ and ψ change a few degrees. Δκ = κ2 − κ1, Δψ = ψ2 − ψ1, Δθ = [(Δκ)^2 + (Δψ sin κ)^2]^{1/2}. (b) Detail enlargement from Fig. 15(a) at aspects sampled on surface of unit sphere. Dots represent pulse aspects.

B. Properties of Three-Dimensional Radar Images

In this subsection, we discuss in more detail some of the important properties of radar images calculated using the methods described above. Specifically, we emphasize the dependence of the point spread function on the target aspects which are sampled by the radar pulses. Important properties include cross-range resolution and the spacing of cross-range ambiguous images. The results are applicable to monostatic radars or to bistatic radars with small bistatic angles. For small bistatic angles, the equivalent monostatic RLOS bisects the bistatic angle. This subsection also assumes the object to be small compared with the radar range.

(1) Dependence on Observed Target Aspects. As emphasized previously, the properties of an image depend mainly on aspects sampled by the pulses used in calculating the images. When the radar samples a planar angle of target aspects, the image will necessarily be two dimensional. When the radar densely samples a solid angle of target aspects, the image will be three dimensional. The aspects sampled depend on the rotational motion (if any) of the object as well as on the orientation of the RLOS and its variation with time. They also depend on the subsets of radar pulses chosen for imaging.

For a space object, it is generally convenient to deal with object rotations and RLOS rotations relative to the distant stars, i.e., relative to "inertial space." Artificial satellites as well as natural objects in the solar system generally rotate with a constant angular velocity vector in inertial space. For objects in the Earth's atmosphere (including stationary scenes, objects on rotating
platforms, ground vehicles, boats, and aircraft), rotational motions are generally simplest to specify relative to the Earth. Thus RLOS orientations as well as object orientations are described in a coordinate system fixed in the Earth. The following discussion is valid whether the object's rotational motion is specified in inertial space or in a background coordinate system that rotates with the Earth, as long as the RLOS directions are specified in the same way.

The properties of the point spread function h are conveniently calculated in a coordinate system that rotates with the object, such as the (x, y, z) system of Fig. 15(a). The z axis is aligned with the object's angular velocity vector. (For a fixed scene, the angular velocity is zero and the z-axis direction is arbitrary.) Define the unit sphere to be fixed with respect to the object so it shares the object's rotational motion. If the RLOS is directed along the radius of the unit sphere, the azimuthal angle ψ and the polar angle κ, the angle between the RLOS and the angular velocity vector, called the aspect deviation angle, will define the aspects sampled by the pulses. Also, aspects can be represented graphically by drawing the points on the unit sphere where the RLOS punctures the spherical surface.

The simplest description of the resolution and ambiguity properties of h(r0) occurs in a rectangular coordinate system aligned with the aspect sampling geometry, such as the (x', y', z') coordinate system of Fig. 15(b) or Fig. 16(b). This coordinate system also rotates with the target. The y' axis is chosen to point in the RLOS direction at the center time of the imaging interval. The x' axis is oriented along the direction of increasing values of the target aspect angle θ, the angle swept out by the RLOS in the target coordinate system at the image center time. The rate of change of θ, θ̇, equals the magnitude of the RLOS angular velocity vector relative to the target

θ̇ = [(κ̇)^2 + (ψ̇ sin κ)^2]^{1/2} (29)

where κ̇ and ψ̇ are the rates of change of the angles κ and ψ, respectively.

The (x', y') plane is thus tangent to the surface swept out in the target coordinate system by the RLOS. The second cross-range direction is chosen perpendicular to this plane so as to complete a right-hand coordinate system.

The RLOS's are approximately coplanar with respect to the object if the changes in the angles ψ and κ, Δψ = ψ̇ΔT and Δκ = κ̇ΔT, respectively, are small. When the RLOS directions are coplanar with respect to the object, the images will necessarily be two dimensional in nature. The radar returns will not be affected by the z' coordinate of any scatterer so the function G(r0) cannot depend on z'. For the two-dimensional case then, the cross-range axis x' will be oriented along the direction of increasing values of the angle θ, such that during the imaging interval ΔT, from (29),

Δθ = [(Δκ)^2 + (Δψ sin κ)^2]^{1/2}. (30)

This is the angle that determines the cross-range (x') resolution of the two-dimensional image.

If the object is rotating rapidly while the RLOS rotates slowly, data can become available over many rotation periods with ψ̇ >> κ̇. This can occur with a rapidly rotating object either on the ground or in deep space. Under these circumstances, a solid angle of aspects can be densely sampled by the data, as illustrated in Fig. 16. This can permit three-dimensional imaging if the aspects are sampled densely enough. In such cases, θ̇ ≈ ψ̇ sin κ and Δθ ≈ Δψ sin κ. The x' cross-range axis as defined above lies along the direction of increasing ψ while the z' cross-range axis is in the direction of decreasing κ, as shown in Figs. 16(a) and (b). These figures are drawn with a small positive value for κ̇. Fig. 16(b) is an enlargement of a portion of Fig. 16(a).

Fig. 16. (a) Target aspects sampled for a three-dimensional image (ψ̇ >> κ̇). (b) Detail enlargement from Fig. 16(a) at aspects sampled on unit sphere. Dots represent pulse aspects.

In Fig. 16(b), target rotation causes the RLOS to rapidly sweep in the ψ direction. Successive pulses during such a sweep sample the aspects shown by a row of dots. Pulses that do not fall within the Δψ interval are not used in the image. The slow change in κ due to RLOS rotation causes the sampled aspects to be displaced downward to the next row of dots on the next target rotation. Over
many tens of rotations, this process densely samples a to as resolution) in the three dimensions follow from the
solid angle of aspects. The image will be three principles given previously. That is, the range resolution
dimensional with resolution in the x' direction determined is determined by the transmitted radio frequency (RF)
by the aspect change A0 = Ad' sin K. Resolution in the bandwidth (BW),
z' direction is determined by the aspect change AK.
If K is too small to give a significant change AK over p(y') = k c/2BW (35)
the available data, an image using data from the interval and the two cross-range dimensions have resolution given
Ad over many target rotations will be two dimensional by
since the RLOS are approximately coplanar. This class of
images is known as "stroboscopic" and is discussed in
p(x') = k X/2A0 (36)
Section 1VB. and
(2) Cross-Range Ambiguous Interval and Cross- p(z') = k X/2AK (37)
Range Resolution. When an approximately coplanar set respectively. Here k is a parameter which encompasses
of aspects is sampled, as illustrated in Fig. (16), the both the definition of resolution in terms of IPR width,
cross-range ambiguous images are separated in the x' i.e., IPR width at 3 dB down versus 6 dB down, and the
direction by amb(x') = X/(280), where 60 is the change effect of IPR mainlobe broadening due to the aperture
in aspect between pulses. To calculate 60, divide 0 given weighting function selected for IPR sidelobe control.
by (29) by the radar's PRF. If the radar's PRF is too low,
cross-range ambiguous images will overlap the true image. The resolution in the x' direction is proportional to λ/(2Δθ), where Δθ is given by (30). Since the image function does not depend on z', one can say that the resolution in the z' direction is "infinite."

When a solid angle of aspects is densely sampled as in Fig. 16(b), the resolutions in the two cross-range directions x' and z' depend on the extents of aspect change, Δθ and Δκ, respectively. In addition, the discrete sampling of aspects with steps δθ (per pulse) and δκ (per rotation) causes cross-range ambiguous images in the x' and z' directions, respectively. If either δθ or δκ is too large (because of the values of PRF, ω, and κ̇), then the cross-range ambiguous images may overlap the true image of the target, making the image difficult or impossible to interpret.

The cross-range ambiguous interval in the x' direction, amb(x'), and that in the z' direction, amb(z'), are given by

amb(x') = λ/(2δθ)    (31)

and

amb(z') = λ/(2δκ)    (32)

respectively. If the values of amb(x') and amb(z') are larger than the corresponding maximum cross-range extents of the targets, the images will be unambiguous. For calculating amb(x'), one can use

δθ = ω sin κ/PRF = 2π sin κ/(T·PRF)    (33)

where T is the target's rotation period and PRF is the radar pulse repetition frequency. Similarly, to get amb(z'), one can use

δκ = Tκ̇ = 2πκ̇/ω.    (34)

When producing three-dimensional images, the impulse response (IPR) widths (sometimes loosely referred

C. Motion Measurement Requirement

These coherent radar imaging techniques all require precise knowledge of the time-varying position of the radar relative to the target scene in order to form good quality images. Ideally, the relative range to each image grid point must be known to some fraction of a wavelength over the integration period being used to obtain fine cross-range resolution. Since we are correlating range-derived phase information over some coherent aperture, any error in knowledge of relative position R_E will give rise to a phase error given by

φ_E = 4πR_E/λ    (38)

which will cause perturbations in the cross-range IPR of the radar in a manner analogous to antenna pattern perturbations caused by mechanically or electrically induced phase errors across a real antenna aperture.

The effects upon image quality of such phase errors depend upon the form of the errors, as is determined by standard antenna theory. For example, motion-measurement errors which give rise to phase errors which vary linearly across the aperture cause shifting of the position of the image response. Errors which vary quadratically across the aperture cause mainlobe broadening. Higher order errors cause perturbations further out on the impulse response sidelobes. For example, errors which vary sinusoidally cause discrete paired-echo sidelobes some distance from the mainlobe. Wideband random phase errors cause noiselike sidelobes distributed across the entire scene. Energy scattered into the sidelobes by any of these errors comes at the expense of mainlobe energy, and hence these errors all cause apparent loss of target RCS. Further, these effects can be scene position invariant, or position variant, depending upon whether the motion errors are applicable to the entire scene or target or are dependent upon the individual resolution cell under consideration.
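The antenna-theory analogy lends itself to a quick numerical illustration. The sketch below is not from the paper; it simply forms the cross-range IPR as the Fourier transform of a uniform synthetic aperture (all parameter values assumed) and exhibits the mainlobe loss caused by a quadratic phase error and, for a sinusoidal error, the displacement of energy into paired-echo sidelobes.

```python
import numpy as np

# Illustrative sketch (not from the paper): how aperture phase errors of the
# kinds discussed above perturb the cross-range impulse response (IPR).
# All parameter values are assumed for illustration.
N = 256                              # samples across the coherent aperture
x = np.linspace(-0.5, 0.5, N)        # normalized aperture coordinate
aperture = np.ones(N)                # uniform, error-free illumination

def ipr(phase_error):
    """Magnitude of the far-field response: oversampled FFT of the aperture."""
    a = aperture * np.exp(1j * phase_error)
    return np.abs(np.fft.fftshift(np.fft.fft(a, 8 * N)))

clean = ipr(np.zeros(N))                        # ideal IPR, peak equals N
quad = ipr(np.pi * (2 * x) ** 2)                # quadratic error: broadened, lower mainlobe
sine = ipr(0.3 * np.sin(2 * np.pi * 10 * x))    # sinusoidal error: discrete paired echoes

print(clean.max(), quad.max(), sine.max())
```

Any energy displaced from the mainlobe into the sidelobes by such errors appears as an apparent loss of target RCS, as the text notes.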
378 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. AES-20, NO. 4 JULY 1984
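As a numeric sanity check on the sampling relations (31)-(34) above, the following sketch evaluates the per-pulse and per-rotation aspect steps and the resulting cross-range ambiguity intervals. All parameter values are purely illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative check of the cross-range ambiguity intervals (31)-(34).
# All parameter values below are assumed, not taken from the paper.
lam = 0.03                       # wavelength, m (X band)
PRF = 1000.0                     # pulse repetition frequency, Hz
T = 10.0                         # target rotation period, s
kappa = math.radians(20.0)       # aspect cone angle kappa
kappa_dot = math.radians(0.01)   # aspect-angle rate, rad/s

# (33): per-pulse aspect step, delta_theta = 2*pi*sin(kappa)/(T*PRF)
d_theta = 2 * math.pi * math.sin(kappa) / (T * PRF)
# (34): per-rotation aspect step, delta_kappa = T*kappa_dot
d_kappa = T * kappa_dot

amb_x = lam / (2 * d_theta)      # (31): ambiguity interval in x'
amb_z = lam / (2 * d_kappa)      # (32): ambiguity interval in z'

# The image is unambiguous if the target's cross-range extent fits
# inside these intervals.
print(f"amb(x') = {amb_x:.1f} m, amb(z') = {amb_z:.2f} m")
```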
It is not possible to set a universal threshold on motion determination accuracy. Such a limit depends upon the quality required of the image, as well as upon the form, or frequency content, of the phase error function. In some cases, the effects of low frequency errors, which manifest themselves in the relatively high signal-to-noise mainlobe, can be extracted from the image data and used to derive a correction to the collected data. Often several wavelengths of quadratic error can be corrected in this manner. On the other hand, higher frequency errors are not only detrimental for a given amplitude but are also more difficult to measure from the image data. Thus, high frequency errors, and hence the position measurement errors which cause them, are often restricted to be less than some small fraction of a wavelength.

In the case of airborne systems observing stationary objects on the ground, the relative motion must be measured onboard the aircraft using some type of motion-sensing equipment such as an inertial measurement unit (IMU), perhaps augmented by ground-based aids to navigation such as beacons. In the case of Earth-fixed systems observing space objects, the relative motion is determined by appropriate modeling and tracking of the object's orbit, along with using radar-derived data regarding rotational motion of the object. Much of the technical challenge in implementing coherent imaging radars is in accomplishing these accurate determinations of relative position, and substantial effort has been directed toward this problem. An adequate treatment of these techniques is beyond the scope of this paper.

IV. RADAR IMAGING TECHNIQUES

The previous section described the fundamental processes required to form images from radar signals using knowledge of target and sensor vehicle motion. Various specific implementations of these principles vary significantly in detail depending upon the application, even though the underlying fundamentals are the same. This section reviews various generic implementations in order to highlight similarities between applications. Where possible, specific examples are provided. Implementations involving imaging of fixed targets or scenes from moving sensor-bearing vehicles are considered first. Conventional wide-area stripmap mode SAR and the spotlight mode SAR are both described. The second part of the section provides a look at implementations which utilize the same principles in providing multidimensional images of moving or rotating objects from Earth-fixed coherent radar sensors.

A. Vehicle-Borne Imaging of Fixed Objects

Radar systems designed to provide images of the Earth's surface are generally airborne or spaceborne sensors. The motion of the sensor-bearing vehicle provides the relative motion between sensor and target required to perform imaging.

There are two generic types of fixed-target imaging systems. The conventional stripmap mode SAR provides for wide-area coverage by producing imagery of a strip of terrain illuminated by an antenna whose boresight angle is nominally fixed with respect to the vehicle velocity vector. For such a system, vehicle travel over time, in conjunction with antenna ground-range coverage, determines total image coverage. Cross-range resolution is determined by the effective scene rotation during illumination as determined by the antenna azimuth beamwidth. The alternative approach is to decouple the antenna boresight angle from the vehicle velocity vector in order to provide a longer illumination dwell on the area of interest. This approach provides for finer cross-range resolution at the expense of total image coverage. This latter approach is commonly referred to as spotlight mode SAR.

(1) Conventional Stripmap Mode SAR. The fundamentals of conventional stripmap mode synthetic aperture radar have been extensively documented in the available literature [44, 59, 60, 61]. We provide a quick review here in order to note its relationship to other forms of range-Doppler imaging.

Fig. 17. Schematic representation of stripmap mode imaging radar.

The data acquisition geometry associated with stripmap SAR is depicted in Fig. 17. In such a system, range resolution is achieved through accurate time-delay measurement obtained by transmitting dispersed pulses and applying pulse-compression techniques to the returned pulses. As indicated previously, azimuth or along-track resolution is obtained by recording the Doppler frequency (range-rate) as scattering elements migrate through the antenna beam. Knowledge of the Doppler frequency versus time relationship for a scatterer at a known range, which is computed based on measurements or a priori knowledge of vehicle motion,
AUSHERMAN ET AL: DEVELOPMENTS IN RADAR IMAGING 379


allows one to precisely locate the scatterer in a manner analogous to the pulse compression applied to the range direction.

Stripmap SAR Data Acquisition. The form of a typical stripmap mode SAR system is shown in Fig. 18. A coherent waveform generator (WFG) provides a wideband signal for periodic transmission, at the pulse repetition frequency (PRF), through a "fixed" antenna in order to illuminate the terrain strip of interest. The transmitted signal has the form

s(t) = a(t) exp{j2π[f₀t + φ(t)]}    (39)

where f₀ is the RF carrier frequency, a(t) is the amplitude weighting of the pulse, and φ(t) is the phase modulation used to obtain resolution in range.

During the imaging time, the antenna pointing direction must be adjusted slightly to compensate for angular excursions made by the sensor platform. Steering commands are usually derived from information obtained from the aircraft inertial navigation system (INS) and from real-time analysis of the Doppler spectrum of the received signals. Unlike a real aperture side-looking radar system, antenna pointing for an SAR does not affect output image geometry. Rather, pointing impacts the SNR of the image by virtue of achieving adequate signal power from the image area during coherent integration.

Returned signals from the terrain strip are received via a coherent receiver and are frequency converted to baseband for analog-to-digital (A/D) conversion in preparation for digital processing. After baseband conversion, the signal received from a single point scatterer at along-track position x₀ and cross-track range (broadside) r₀ is given by

S(x, y, x₀, r₀) = σ exp{j2π[(2f₀/c)R(x - x₀, r₀) + φ(2(y - R(x - x₀, r₀))/c)]}    (40)

where the complex-valued weight σ is determined by the transmitted pulse weighting, the antenna pattern, attenuation with distance, propagation phase effects, and the scatterer complex radar reflectivity, and R(x - x₀, r₀) is the one-way range to the scatterer at (x₀, r₀) for along-track position x. One-dimensional samples of this function are obtained at along-track positions x = nΔx = nv/PRF, where v is the platform velocity and n represents pulse number. The signal in the fast-time dimension is represented in cross-track spatial coordinates y, where y = ct/2. The arguments x₀ and r₀ in the form of S reflect the scatterer-position-variant nature of S.

As part of the process of baseband conversion, compensation for turbulence-induced antenna phase center motion away from the desired straight-path flight line must be applied. Such compensation takes the form of phase shifts, and in some cases time shifts, of the received signal. This is usually the case for airborne systems flying in a turbulent atmosphere, rather than for spaceborne systems such as SEASAT and SIR-A whose motion is generally accurately predictable using ephemeris information and spacecraft models. It is also possible to record the measured motion information and to take it into account as part of the correlation processing operation. However, the latter approach can prevent the use of more efficient means of implementing the image formation processing step.

In general, the motion compensation process adjusts

Fig. 18. Typical stripmap mode SAR system.

the phase and time delay of the signals to remove the effects of aircraft displacement on a pulse-by-pulse basis. In cases where the depression angle change over the image swath is sufficiently small, a single correction applied to the returns from all ranges will be adequate. For wide-swath-width systems, range-dependent correction schemes are required to apply corrections which are dependent upon the depression angle to the range of interest.

To correct for variations in along-track position, the system PRF can be slaved to vehicle velocity. Alternatively, the knowledge of along-track motion can accompany the signal data and be accounted for as part of the image-formation process.

Following conversion to baseband and assuming that digital processing techniques are to be used in forming the image, the received signals are converted to quantized discrete sampled data. For cases involving significantly less than unity duty cycle, a PRF buffer serves to spread the digital samples over the entire interpulse period in order to minimize the peak data rate.

An azimuth presummer is then usually employed to low-pass-filter and downsample the data in the azimuth dimension to the minimum Doppler bandwidth required to support the desired along-track resolution. This step is taken to minimize the amount of data to be digitally processed. The original azimuth sample rate (the PRF) must be high enough to unambiguously sample the Doppler spectrum associated with the antenna beamwidth. This beamwidth is often greater than the minimum required to achieve the desired azimuth resolution due to antenna size constraints associated with the sensor platform. Also, such excess beamwidth is often used to provide noncoherent averaging in order to reduce the effects of coherent microwave speckle in the final image. The usual presummer implementation consists of multiple overlapping, recursive digital filters. If the system PRF has not been slaved to along-track velocity, then along-track motion compensation can be accomplished in an equivalent manner by computing presummer outputs at equally spaced along-track positions.

Stripmap SAR Image Formation. The operation required of a digital stripmap SAR processor can first be expressed as

O(ndx, mdy) = Σᵢ Σⱼ s(iΔx, jΔy) w(iΔx - ndx, jΔy - mdy) S*(iΔx - ndx, jΔy - mdy, ndx, mdy)    (41)

representing the two-dimensional correlation of the two-dimensional sampled signal s with a weighted complex conjugate of the sampled response of the system to an isolated point scatterer, as given by (40). Here, O(ndx, mdy) represents the complex-valued output image sampled at along-track positions ndx and cross-track positions (range bins) mdy. The image can exhibit different sampling frequencies than the prefiltered sampled signals, as indicated by the difference between Δx and dx, and Δy and dy. Also, w(x, y) is the weighting function applied to control the sidelobes of the system point-target response. The extent of summation over the range direction for each output sample is determined by the time duration of the transmitted pulse. The extent of summation in along-track is determined by the illumination interval, which in general is a function of range as determined by the antenna cross-range beamwidth.

In simpler notation, this process can be denoted by

O(n, m) = Σᵢ Σⱼ s(i, j) w(i - n, j - m) S*(i - n, j - m, n, m).    (42)

The form of the reference function S reflects the range dependence of the system reference function. Theoretically, a different S, as indicated by the fourth argument m, must be used when correlating each range bin of interest. In practice, however, a single reference function will suffice over a considerable number of range bins.

In most systems, the signals are range-compressed prior to the azimuth correlation process. For example, in the STAR system described in Section II, the received signals are pulse-compressed prior to A/D conversion using a surface acoustic wave (SAW) device. In this case, the final azimuth correlation process is given by

O(n, m) = Σᵢ Σⱼ s′(i, j) w(i - n) S′*(i - n, j - m, n, m)    (43)

where s′(i, j) is the range-compressed signal, w(i - n) is the weighting applied in azimuth, and S′ is the range-compressed system reference function which in general has a sin x/x amplitude variation in the range dimension [62]. The extent of the range summation (in j) in (43) is equivalent to the amount of range migration of a point scatterer during the coherent integration time. For a system with the antenna boresight pointed broadside, this extent is generally significantly less than that implied by (42) for the uncompressed pulse. Thus range compression results in significant computational savings. For a system with significant squint of the antenna away from broadside, this is not the case and other procedures must be applied. Methods developed for the spotlight case may be adapted for this purpose, as is described in a later section.

The similarity of (43) to (14), describing the general extended correlation processing for rotating objects, is apparent. The range-dimension correlation of (43), the summation over j, may be viewed as a finite impulse response interpolation process required to sample the range-compressed pulse at the precise range R(r₀, p) for (14).

For broadside systems where the range migration during the integration time (so-called range walk) is less than on the order of 1/2 of a range resolution cell, the



image formation process simplifies further. This condition is achieved if

ρₐ²ρᵣ ≥ λ²R/16    (44)

where ρₐ and ρᵣ are the azimuth and range resolutions, respectively, and R is the operating range [63]. In such a case, the system reference function becomes separable in range and azimuth and the image formation process becomes a sequence of two one-dimensional processing steps [64].

(2) Spotlight Mode SAR. Spotlight SAR [6-10, 54, 56, 65] has as its objective the production of imagery exhibiting resolution finer than that associated with the limits imposed by a fixed antenna, or the production of imagery with a great deal of angular diversity with which to understand the directional characteristics of the scene reflectivity of interest. Benefits which may accrue through use of the spotlight mode come at the expense of area coverage since longer dwell times are required.

Spotlight Data Acquisition. The collection geometry for the spotlight mode SAR is shown in Fig. 19. As the vehicle carrying the SAR sensor moves past the area of interest, the antenna boresight is continually realigned so as to point at the center of the scene. The antenna beamwidth must be large enough to adequately illuminate the desired area to be imaged, and the duration of the illumination must be long enough to obtain sufficient effective rotation of the scene to obtain the desired cross-range resolution, as given by Δθ = λ/(2ρₐ). The scene may be three dimensional in nature and the sensor may not precisely follow a straight-line path. A motion-sensing system must be used to determine the required antenna pointing angles and to provide knowledge of the relative motion between the vehicle and the scene, knowledge which must be used during the image formation processing step.

Fig. 20 shows a possible configuration for a spotlight mode SAR system. The diagram assumes the use of a linear frequency modulation (FM) waveform for use in obtaining fine range resolution, although such an assumption is not necessary in order to perform spotlight imaging. As before, an inertial measurement system can provide pointing angles for the antenna, although for the spotlight case, the illumination follows a fixed point on the ground rather than following a strip of terrain parallel to the flight track as was the case for stripmap.

The first step in preparing for range-Doppler imaging is to remove the effects of the gross range changes to scene center on a pulse-by-pulse basis over the coherent illumination time. In the system of Fig. 20, this is accomplished by multiplying the returned signals with a replica of the transmitted signal, delayed by precisely the round-trip delay to scene center. This delay is determined by real-time computation of the range to the scene center rₐ using INS-supplied information.
illumination must be long enough to obtain sufficient the scene center ra using INS-supplied information.

Fig. 19. Spotlight mode SAR collection geometry. (Symbol with overbar corresponds to boldface symbol in text.)

Fig. 20. Simplistic spotlight SAR system with polar-format processing.

The frequency-versus-time characteristics of the signals for a single radar pulse transmission are shown in Fig. 21. The figure depicts the generation and transmission of the linear FM waveform beginning at time zero with chirp rate γ. The total set of signals returned from the desired area, beginning with the near-edge return and ending with the far-edge return, are shown occurring with the appropriate delay associated with round-trip propagation. Mixing with a replica of the original transmission delayed by the round-trip time to scene center produces a constant-frequency signal for each return from a point scatterer. The frequency of this signal is proportional to the range of the scatterer.

The entire set of constant-frequency signals associated with the scene to be imaged is shown centered about zero frequency in Fig. 21. These video signals would thus be encoded as complex-valued (I and Q) data. The figure depicts a direct conversion of RF to I and Q data. With practical considerations given to filtering of signals from terrain which is illuminated but not desired in the final image, there might be several intermediate-frequency processes required to produce the desired result.

The total set of video signals has bandwidth BW, related to the total range swath width (SW) by the scale factor 2γ/c. The duration of the signals is proportional to the original sweep time and hence to the bandwidth to be used to obtain range resolution. Note that the signals from all ranges do not completely overlap in time, although chirp rates and pulse lengths may be chosen to minimize this effect. In such cases, only the central overlapped region is A/D converted and recorded to avoid the inefficiencies associated with storing and processing digital data whose time-bandwidth product is not wholly occupied. The shaded area in Fig. 21 indicates the time-bandwidth product of the signal digitized and recorded over time period T. The effective RF bandwidth which determines range resolution, BWrf, is shown to be less than the full transmitted bandwidth in this case.
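This bandwidth bookkeeping can be made concrete with a small sketch. The numbers below are assumed, the video bandwidth follows from the 2γ/c scale factor stated above, and the final line uses the standard pulse-compression relation ρr = c/(2·BWrf), which is not stated explicitly in the text.

```python
# Illustrative numbers (assumed) for the deramped-video bandwidth relations.
c = 3e8             # propagation speed, m/s
gamma = 1e12        # chirp rate, Hz/s
SW = 3000.0         # total range swath width, m

BW_video = SW * 2 * gamma / c   # video bandwidth = swath width times 2*gamma/c

T_rec = 8e-6                    # time period over which video is A/D converted
BW_rf = gamma * T_rec           # effective RF bandwidth actually recorded
rho_r = c / (2 * BW_rf)         # standard pulse-compression range resolution

print(BW_video, BW_rf, rho_r)
```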

Fig. 21. Spotlight range tracking to produce video.



Use of the linear FM waveform in this deramping scheme on a pulse-by-pulse basis has resulted in signals with frequency and starting phase determined by the relative range of a scatterer to scene center, thus establishing the conditions required to perform frequency-domain range-Doppler imaging. In instances where other transmitted waveforms are desired, it is still possible to achieve this condition by first pulse-compressing the received signals to achieve fine range resolution (while retaining the phase information) and then Fourier-transforming the signal such that point scatterers give rise to signals with frequency and starting phase proportional to relative range to scene center.

Spotlight Image Formation. As described in Section III, there are at least four fundamental ways of accomplishing the image-formation process for such signals: (1) subaperture linear range-Doppler processing; (2) multiple-subpatch linear range-Doppler processing; (3) whole-scene polar-format processing; and (4) backprojection or general correlation processing. The relative advantages and disadvantages of these approaches are dependent upon the particular system parameters at hand and would have to be determined on an ad hoc basis.

The subaperture linear range-Doppler processing approach is analogous to the extended correlation processing (ECP) algorithm described in Section IVB for efficient processing of extended correlation data. The two-dimensional signals are arrayed in a simple linear fashion and multiple azimuth subapertures are two-dimensionally Fourier-transformed to form complex-valued, coarse azimuth-resolution images. The subapertures are chosen to be small enough such that there is insignificant range-cell migration during the subaperture interval. Each of the images formed within the subapertures is then compensated in phase and spatial rotation to compensate for the scene rotation which occurs between subapertures, upsampled in azimuth to accommodate the eventual finer azimuth resolution, and coherently summed and detected to form the final image. Because of the analogy to the ECP algorithm, discussion of the approach is deferred to Section IVB.

The backprojection processing method is common to the field of computer-automated tomography (CAT). Various analogies have been drawn between CAT processing and spotlight SAR processing [54]. As was mentioned in Section III, the backprojection processing approach is essentially the same as the pulse-by-pulse correlation approach as it is applied to the imaging of rotating objects and thus will not be discussed in further detail here.

The multiple-subpatch processing method and the polar-format processing method are described in more detail below.

Multiple-Subpatch Processing. If the spotlight SAR signals are simply rectilinearly formatted and two-dimensionally Fourier-transformed over the entire collection duration, then the point scatterer migration effects described in Section III will limit the useful portion of the final scene to a region about the central compensation point with approximate diameter D = 4ρ²/λ. The multiple-subpatch image-formation approach accepts this limitation and in an efficient manner applies the same process multiple times at different locations to obtain full quality over the entire scene. The full scene is essentially divided into several smaller scenes of diameter less than the limit imposed by scatterer migration, each scene being compensated for motion relative to its center.

An efficient process for accomplishing this method is depicted in Fig. 22. The input to the process is the video signal which has been compensated to the center of the scene by mixing with the reference function R₀(t). This function, which changes on a pulse-by-pulse basis according to the pulse-by-pulse changes in range, has caused any signal received from the center of the scene to exhibit zero frequency and phase over the entire coherent integration period. The full scene signal is passed through a bank of bandpass filters which filter the data in the fast-time dimension into some number N of frequency sub-bands corresponding to range subswaths across the scene. The frequency content of each subswath has been reduced nominally by a factor of 1/N such that the N output channels can each be downsampled (reduced sampling frequency) by an equivalent amount. In practice, however, some excess subswath bandwidth is required to prevent ambiguities due to nonideal bandpass filters.

Each of the signals corresponding to the range subswaths is then recompensated using a reference function Rₙ(t) such that the nominal center of each range subswath exhibits zero frequency and phase over the integration time. The reference function is generated on the basis of the differential range, on a pulse-by-pulse basis, between the center of the subswath and the center of the entire scene. This process "stabilizes" the azimuth Doppler frequency content within each subswath in preparation for filtering in the azimuth, or slow-time, dimension to form image subpatches.

The azimuth bandpass filters for each subswath partition the Doppler spectrum into some number M of sub-bands corresponding to multiple subpatches in the cross-range direction within each subswath. The outputs of this process are downsampled in slow time to the minimum allowed to unambiguously represent the signals. The outputs of each of the N × M filters are additionally compensated to set the center of each subpatch to zero frequency and phase over the integration time by multiplying by reference functions Rₙₘ(t), which are formed on the basis of differential ranges between subpatch centers and range subswath centers on a pulse-by-pulse basis.

Each of the N × M channels which have been created by this process now provides signals for multiple subpatches covering the entire scene, with each channel compensated in frequency and range to the center of the associated subpatch. The data within each channel is then

Fig. 22. Multipatch range-Doppler processing.

two-dimensionally Fourier-transformed to form the subpatch images, which may then be mosaicked to form the full scene image. The processing within each subpatch relies entirely on linear range-Doppler analysis since each subpatch scene dimension was limited in extent to prevent relative range walk greater than some fraction of a resolution cell during the required integration time.

Polar-Format Processing. The formulation of Section III provides a sound basis for application of the polar-format processing approach to spotlight SAR data. Section III determined that each individual radar transmission and reception which is appropriately compensated for range to scene center, and which is processed such that frequency and starting phase become proportional to relative range to a scatterer, can be thought of as viewing a linear one-dimensional segment of the three-dimensional Fourier transform of the (in general) three-dimensional scene. Taken as a whole, over the total set of such observations along the flight path in Fig. 19 corresponding to the coherent illumination period, one is essentially observing a two-dimensional curved surface within the three-dimensional transform. This surface is known as the collection signal surface.

Although one can theoretically perform a three-dimensional transform of a volume containing this surface to form a three-dimensional image of the scene, the obtainable sensor vehicle excursion in the third dimension is usually not sufficient to provide meaningful resolution in the third dimension. Thus, two-dimensional approaches normally suffice. (The resultant two-dimensional output is also compatible with current two-dimensional display technology.)

As implied in Section III, when applying two-dimensional processing to form a two-dimensional image, one must account for the noncoplanar excursions of the sensor vehicle in order to obtain a correctly focused image. Even then, correct focus can be obtained only for collections of scatterers which lie in a common plane. This plane is called the focus target plane and can be arbitrarily chosen prior to processing but would usually be made to correspond to the nominal ground plane within the scene of interest.

The method for accounting for noncoplanar motion of the collection vehicle is illustrated in Fig. 23. The signal values corresponding to the collection signal surface must be projected in a direction normal to the chosen focus plane until they intersect the desired processing plane. Projection of the signal values in this particular direction preserves the correct relative phase of the samples for signals which result from scatterers lying in the focus plane, as was described in Section III. The intersecting plane is known as the reference plane, or alternatively, the output image plane.

Selection of the reference plane determines the perspective, or point of view, associated with the final
Fig. 23. Polar-format signal projections in frequency domain.
image. If one wishes the image to appear as if the viewer were looking straight down upon the scene, then the ground plane itself is selected as the reference plane (possibly synonymous with the focus plane in this case). Conventional SAR images for the stripmap case have the perspective normal to the "SAR plane," which is defined to be the plane formed by a central point within the scene and the velocity vector of the sensor vehicle. To effect a similar appearance for the spotlight case, the reference plane might be selected to nominally coincide with the curved collection signal surface. It must be pointed out that the concept of point of view is correct only for scene elements lying within the focus plane. Point scatterers out of this plane will image at positions which do not correctly follow from the rules of perspective, an effect known as range layover.

Once the focus and reference planes have been selected, the polar-formatting operation is straightforward. Based upon the projection of collection surface signal sample positions in a direction normal to the focus plane, positions of the samples in the intersected reference plane are computed. Auxiliary data describing the pulse-to-pulse position of the antenna phase center relative to the scene center and knowledge of the frequency-versus-time relationship for the transmitted pulse are used in this geometrical computation. Once these positions have been determined, the collected signal data is appropriately arrayed as described in Fig. 24.

Fig. 24. Digital polar-format geometry.

Fig. 24 shows the relative positions of the data samples as they have been projected into the reference plane. The samples, represented as black dots, are shown arrayed along radial lines at angles θ corresponding to the line-of-sight angles to scene center for each radar transmission, as projected into the reference plane. The spacing between samples Tr is determined by the original video signal sampling period as projected into the reference plane. The original video sampling frequency must be high enough to unambiguously sample the video spectrum whose bandwidth is dependent upon the range extent of the illuminated scene.

The number of samples along each radial line, K, is determined by the effective duration of the video signal as given by T in Fig. 21. By virtue of the linear FM deramping operation, this duration is directly proportional to the effective RF bandwidth used to obtain range resolution.

To form an image, the geometric array of samples in Fig. 24 must be Fourier-transformed in two dimensions. In order to take advantage of the efficiency of a two-dimensional FFT and to produce an image which is sampled on a two-dimensional grid, one must resample the data of Fig. 24 to produce new samples occurring at the intersections of a two-dimensional rectilinear grid as depicted in Fig. 24. The range and azimuth sample spacings associated with this grid, Tx and Ty, determine the extent of the output image in the range and azimuth dimensions, respectively. The number of grid samples, and hence the extent of the grid, in the range and azimuth dimensions determines the output sample spacing in both dimensions of the image. This interpolated grid is often increased in size prior to the two-dimensional FFT by the padding of zeros in order to increase the image sampling rate on output, although the system resolution is dependent only on the portion of the grid filled with actual signal data. The grid values might also be weighted in range and azimuth prior to FFT in order to lower the sidelobes of the Fourier transform process.

The original system concept depicted in Fig. 20 indicated that the formation of the new sample grid may be performed as two separate one-dimensional interpolation steps. The first step, illustrated in Fig. 25, is known as range interpolation. For this stage, each individual radar pulse is simply resampled such that the new samples fall on positions along the horizontal lines making up the interpolation grid. This operation may be viewed as a digital filtering technique where the input function is a discrete set of uniformly spaced samples and

386 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. AES-20, NO. 4 JULY 1984
the output samples are computed at a lower sampling rate and with some specified delay with respect to the first sample within each radar pulse. Since, in general, the output samples from this operation occur with lower frequency than the input samples (due to the likelihood of overillumination of the desired scene), one must perform low-pass filtering of the original data to ensure that aliasing effects do not occur. For range interpolation, this low-pass digital filter may be thought of as a range prefilter which limits the video frequencies present to those associated with the final desired range extent of the imaged scene.

The second stage of polar-format interpolation is shown in Fig. 26. The azimuth interpolation operates on the samples produced from the range interpolation, only in an orthogonal direction. The required interpolation can also be implemented as a digital filtering operation with output samples computed at appropriate times for each row of data. However, the input samples to this process cannot be considered equally spaced, and digital filtering techniques which take this into account must be employed. As was the case for range interpolation, this latter interpolation process is also a low-pass filtering operation. The azimuth signal bandwidth must be reduced to that associated with the desired cross-range scene size prior to resampling the data. The low-pass filtering in both dimensions is easily accomplished as part of the resampling process.

After the two-dimensional interpolation to form the rectilinear signal grid, a complex-valued image is formed using a standard two-dimensional FFT algorithm. Detection of this array produces the desired image.

Fig. 25. Polar-format range interpolation. (Key: ○ indicates raw data samples (input); ● indicates range-interpolated samples (equally spaced in y).)

Fig. 26. Polar-format azimuth interpolation. (Key: ○ indicates range-interpolated samples (input); ● indicates azimuth-interpolated samples (equally spaced in x and y).)

Extension of Spotlight Processing to Stripmap SAR. Practical implementations of the stripmap processing algorithm noted above are restricted to situations where the radar illumination is primarily in the broadside direction and where the amount of range cell migration of scatterers is minimal (on the order of a few cells). However, the aforementioned spotlight SAR image formation processes may be beneficially applied to stripmap cases which do exhibit squinted antenna illumination, since the algorithms, in combination with the preprocessing compensation for gross pulse-to-pulse range changes, inherently compensate for the associated range walk effects.

The approach to accomplishing "spotlight" processing of stripmap data is shown in Fig. 27. The desired stripmap scene is envisioned as being partitioned

Fig. 27. Spotlight image formation applied to squinted stripmap SAR.
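The two-stage interpolation and two-dimensional FFT sequence described above can be illustrated numerically. The following sketch is only schematic: all geometry and sampling parameters are hypothetical, and plain linear interpolation stands in for the low-pass digital resampling filters discussed in the text.

```python
import numpy as np

def cinterp(x_new, x_old, f_old):
    """Linear interpolation of complex samples (a crude stand-in for a
    proper low-pass resampling filter)."""
    return np.interp(x_new, x_old, f_old.real) + 1j * np.interp(x_new, x_old, f_old.imag)

def polar_format_image(pulses, angles, n_grid=128, pad=2):
    """Polar-format processing sketch: range interpolation, then azimuth
    interpolation, then weighting, zero padding, and a 2-D FFT.

    pulses : (n_pulses, n_samp) complex phase history; pulse p lies along
             a radial line at angle angles[p] in the reference plane.
    """
    n_pulses, n_samp = pulses.shape
    r = np.linspace(0.8, 1.0, n_samp)        # normalized radial sample positions
    kx = np.outer(np.sin(angles), r)         # azimuth (cross-range) coordinate
    ky = np.outer(np.cos(angles), r)         # range coordinate
    # Stage 1: range interpolation -- resample each pulse onto common ky lines.
    ky_grid = np.linspace(ky.min(), ky.max(), n_grid)
    stage1 = np.empty((n_pulses, n_grid), dtype=complex)
    kx_of = np.empty((n_pulses, n_grid))
    for p in range(n_pulses):
        stage1[p] = cinterp(ky_grid, ky[p], pulses[p])
        kx_of[p] = np.interp(ky_grid, ky[p], kx[p])   # where samples now sit in kx
    # Stage 2: azimuth interpolation -- resample each ky line onto uniform kx.
    # The input positions are unequally spaced, as noted in the text.
    kx_grid = np.linspace(kx_of.min(), kx_of.max(), n_grid)
    rect = np.empty((n_grid, n_grid), dtype=complex)
    for i in range(n_grid):
        order = np.argsort(kx_of[:, i])
        rect[:, i] = cinterp(kx_grid, kx_of[order, i], stage1[order, i])
    # Weight to lower sidelobes, zero-pad to raise the image sampling rate,
    # then transform and detect.
    win = np.hanning(n_grid)
    rect *= np.outer(win, win)
    return np.abs(np.fft.fftshift(np.fft.fft2(rect, (pad * n_grid, pad * n_grid))))
```

For a simulated point scatterer this produces a single dominant response. Grid corners not covered by the polar annulus are simply clamped by np.interp here; a real implementation would zero-fill them.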

AUSHERMAN ET AL: DEVELOPMENTS IN RADAR IMAGING 387


into many subpatches in a manner analogous to the spotlight subpatch processing approach mentioned earlier. As the fixed antenna beam migrates over each of these subpatches, the signal data for each subpatch is isolated by motion-compensating to the center of the cell (removing range changes to subpatch centers) and low-pass-filtering to remove signal data which does not contribute to the desired subpatch scene. The data within each cell is then processed by any of the given spotlight algorithms, and the final full-strip image is formed by mosaicking of the individual subpatches after resampling in the along-track and cross-track dimensions.

Although this approach is robust in terms of compensating for severe range walk, there are various factors which could limit the practicality of a given implementation. Since the antenna beam is not slewed to illuminate the individual subpatches, the subpatch must be limited in size such that the entire patch can be illuminated simultaneously over the coherent integration period required to achieve the desired cross-range resolution. To achieve reasonably sized patches, this requires excess antenna beamwidth over the minimum required to achieve the cross-range resolution. Also, a fair amount of processing overhead is entailed in filtering the data into subpatches and in resampling and mosaicking the results to form the full image. In the event that FFT methods are involved in forming subpatch images, the smaller patch sizes resulting from the antenna beamwidth limits also begin to limit the efficiency of the FFT algorithm itself.

B. Ground-Based Imaging of Moving Objects

In this section, we discuss useful image types and imaging applications for the case of a ground-based, fixed radar with moving object targets, such as the solar system's bodies and artificial Earth satellites. Because of their possible varied motion characteristics, these objects can provide a wide spectrum of illustrative and important imaging cases and applications.

Many of the imaging methods discussed here apply, or can be appropriately modified, for other moving targets such as rotating platforms, aircraft, ships, and ground vehicles.

(1) FFT Range-Doppler Images. In Section I, it was pointed out that if the coherent processing interval ΔT is sufficiently small, one can perform conventional range-Doppler processing to calculate images. Here, we demonstrate that G(r₀) can be efficiently calculated using the FFT method if ΔT strictly satisfies the constraint of no motion through resolution cells. In this case, G(r₀) will be known as an FFT range-Doppler image.

In many space object imaging applications, useful results often can be obtained by FFT range-Doppler imaging. This can be understood from (9) and (10), since artificial satellites typically have limited dimensions and planetary imaging requires resolutions on the order of 10 km.

The constraints on ΔT can be more precisely stated by requiring that during this interval
(i) the relative range to every scatterer not change by more than a fraction of the range resolution,
(ii) the relative range rate to every scatterer remain within 1 range-rate resolution cell.

It will be shown that condition (i) is necessary to use the full efficiency available from the FFT method. When condition (ii) is satisfied, it can be shown that the relative range to every scatterer varies linearly with time to a precision of a small fraction of the wavelength. Furthermore, in this case, it can also be shown that for three-dimensional objects, it is necessary for all RLOS during the imaging interval to be very close to coplanar. Specific expressions for image intervals that meet the range-Doppler imaging conditions, analogous to those in (9) and (10), are stated for typical applications.

Range-Doppler Imaging Coordinate System. Range-Doppler images can be most conveniently calculated in the (x',y',z') "imaging" coordinate system of Section IIB. In this coordinate system, the RLOS directions are specified by the angles θ and κ shown in Fig. 28. Fig. 28 is a local view of the surface of the unit sphere including all RLOS positions during ΔT. The angle between the x',y' plane and the RLOS is denoted by κ. This is the

Fig. 28. Aspects sampled for linear range-Doppler imaging on surface of unit sphere. (Axes: z' and x'; the arc of RLOS aspects spans t = −ΔT/2 to t = +ΔT/2.)
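The aspect geometry of Fig. 28 underlies the relative-range expansion (45) and its derivatives (46a)-(46c). A small finite-difference check of those expressions, with hypothetical motion parameters and the axes chosen so that θ(0) = κ(0) = κ̇(0) = 0, is:

```python
import numpy as np

# Scatterer position in the imaging frame and assumed aspect motion
# (all values hypothetical).
xp, yp, zp = 3.0, 2.0, 1.0                       # x', y', z' (m)
th_dot, th_ddot, ka_ddot = 0.05, 0.004, 0.002    # rad/s and rad/s^2

def D(t):
    """Relative range per (45), with theta(0) = kappa(0) = kappa_dot(0) = 0."""
    th = th_dot * t + 0.5 * th_ddot * t * t
    ka = 0.5 * ka_ddot * t * t
    return xp * np.cos(ka) * np.sin(th) + yp * np.cos(ka) * np.cos(th) + zp * np.sin(ka)

h = 1e-4
D0 = D(0.0)                                # (46a): D0 = y'
D1 = (D(h) - D(-h)) / (2 * h)              # (46b): x' * theta_dot at t = 0
D2 = (D(h) - 2 * D(0.0) + D(-h)) / h**2    # (46c): x'*th_ddot - y'*th_dot^2 + z'*ka_ddot
```

The central differences reproduce y' = 2.0, x' times the aspect rate (0.15), and the second-derivative combination (0.009) to numerical precision.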

complement of a conventional polar angle measured from the z' axis. The azimuthal angle of the RLOS about the z' axis is θ, with θ = 0 at t = 0 and θ increasing with time. With this geometry, the relative range to point (x',y',z'), defined by (16), can be written as

D(t) = x' cos κ sin θ + y' cos κ cos θ + z' sin κ.   (45)

From this one gets

D(0) ≡ D₀ = y'   (46a)

Ḋ(0) ≡ Ḋ₀ = x'θ̇₀   (46b)

D̈(0) ≡ D̈₀ = x'θ̈₀ − y'θ̇₀² + z'κ̈₀.   (46c)

Range-Doppler Image Function. If the radar's PRF is constant and if condition (ii) for FFT range-Doppler imaging is satisfied, one can approximate D(t) over the interval ΔT by a linear function of the pulse number p

D(x',y',z',t) ≈ D(p) ≈ D₀ + Ḋ₀p/PRF   (47)

where p is defined to be zero at t = 0, the center of ΔT. The error in this linear approximation can be estimated by D̈₀t²/2.

With (46) and (47), equation (15) becomes

G(x',y',z') ≈ exp(−4πjy'/λ)P(x',y')   (48a)

where

P(x',y') = Σₚ W(p)S(p) exp(−4πjθ̇₀x'p/(λ·PRF))   (48b)

is the range-Doppler image function. The summation extends over the pulses in the interval ΔT. Because of (46a) and (46b), the calculated image function P(x',y') is also available as a function of D₀ and Ḋ₀, P(D₀,Ḋ₀).

The function P(x',y'), as expressed in (48b), has the form of a discrete Fourier transform to be calculated at each value of y'. This form has some advantages in computational speed. However, without an additional approximation, it cannot be evaluated with the full efficiency available from the FFT. The quantity S(p) [(17) and the definition of S(p)] comes from the radar return sampled at the range R(0,t) + D(t), where D(t) depends on both x' and y' by (46) and (47). To evaluate (48b) efficiently with the FFT, S(p) cannot depend on x'. However, if condition (i) is satisfied, i.e., if the change in relative range is small compared with the range resolution, then S(p) sampled at

R(0,t) + y' + x'θ̇₀p/PRF

is approximately equal to S(p) sampled at

R(0,t) + y'.

With this approximation, S(p) becomes independent of x' and (48b) can be evaluated as an FFT at each value of relative range, y'.

In this more general context, the relation between the relative Doppler frequency of the scatterer fD and the scatterer's cross-range displacement x' is given by

fD = 2Ḋ₀(x',y',z')/λ = 2x'θ̇₀/λ

as compared with (5).

Determination of Range-Doppler Imaging Intervals. The first condition limiting the interval ΔT for range-Doppler imaging (that the relative range to a scatterer change less than the range resolution) can be written

Δθ ≈ θ̇₀ΔT < ρr/|x'|max.   (49)

Here, |x'|max is the largest cross-range displacement of any scatterer in the target. This is a more general version of the expression in (9).

A similar limiting expression for ΔT from condition (ii) is given by

ΔT < C(λ/|D̈₀|max)^(1/2)   (50)

where |D̈₀|max is the maximum value of |D̈₀(x',y',z')| for any scatterer in the target and C is a dimensionless numerical constant.

In most cases where range-Doppler imaging is used, the (θ̇₀)² contribution to D̈₀ predominates. In such cases, the angular rate of the RLOS relative to the target is approximately constant, and the RLOS are approximately coplanar relative to the target. Then (50) can be written

Δθ ≈ θ̇₀ΔT < C(λ/|y'|max)^(1/2)   (51)

where |y'|max is the maximum range displacement from the origin to a scatterer. This expression is a more general version of (10).

If the range-Doppler image is to have equal range and cross-range resolution, ρr = ρcr = ρ, then to satisfy (49) and (51), respectively, |x'|max and |y'|max must each be less than 4ρ²/λ. Outside these limits one can see a slight smearing of the scatterer's image. The smearing gradually becomes more pronounced the farther the scatterer is from the origin.

Fig. 29 shows this smearing as a function of the target's location. This range-Doppler image was calculated from simulated radar data, assuming that λ = 3 cm. The true location of each scatterer is at the center of its image area. It should be noted that as a scatterer's image is smeared over a larger area, the peak RCS falls below the actual RCS of the scatterer. For a few of the scatterers, the loss in image RCS is shown in dB. The integral in square meters over the scatterer's image area remains constant. Total "power" is conserved between data input and range-Doppler image output.

In some important cases, the contribution of θ̈₀ or κ̈₀ to (46c) cannot be neglected. The first class of cases consists of stable or very slowly rotating targets in low Earth orbit near the beginning or end of a pass when the satellite's velocity vector is directed almost toward or
Fig. 29. Range-Doppler image ("single linear image") over 7.2° of aspect change with too large an interval (θ̇ = constant; simulated data, λ = 3 cm). (Annotated scatterer losses range from 0.0 to −10.3 dB; axes: cross range (m) vs. range (m).)

away from the radar. In these cases, θ̈₀ is important. In the second class of cases, κ̈₀ is predominant. These involve very rapidly rotating targets where the angle κ between the rotation axis and the RLOS is small or close to 180°. Other cases where these second derivatives cannot be neglected must be expected, but the above two classes are known to occur frequently.

(2) Extended Images. An extended correlation image is obtained by evaluating G(r₀) over a set of coherent data that spans a time significantly greater than the interval ΔT which can be used in linear range-Doppler imaging. This set of data need not be continuous. It can, for example, be made up of many widely separated intervals. Two particular classes of data sets are discussed.

The first class of data sets, associated with wide-angle imaging, uses one continuous interval of data ΔT, where ΔT is larger than can be used in linear range-Doppler imaging but is a fraction of the target's rotation period. The second class, associated with multiple rotation imaging, uses many equal intervals selected synchronously from successive target rotation periods. This second class includes stroboscopic and three-dimensional imaging.

Wide-Angle Imaging. In wide-angle imaging, the continuous data interval used, ΔT, is significantly larger than the interval that can be used in range-Doppler imaging, i.e., ΔT severely violates (49) and (50) so that simple Fourier transform processing cannot be used. These larger values of ΔT correspond to larger aspect changes Δθ. Point scatterer-like features that give persistent returns during this interval image with a sharper cross-range resolution according to (8). Also, the boundaries of specular surfaces are more sharply defined. More specular surfaces are included in the image because of the wider range of aspects covered by the data. The image SNR will improve for small persistent point scatterers, thus yielding in some cases otherwise unobtainable information about some of the low RCS features of the target.

When the aspect rate θ̇ is constant, Fig. 29 shows the smearing of a range-Doppler image that results when

Fig. 30. Comparison of range-Doppler image ("single linear image," left) with correlation image ("extended correlation image," right) over 7.2° of aspect change (θ̇ = constant; simulated data).
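The loss of focus that Figs. 29 and 30 illustrate can be reproduced in one dimension. The sketch below (illustrative parameters; profiles taken only along the scatterer's range bin) compares a linear-phase, FFT-style cross-range profile with a correlation profile matched to the exact range history over a 7.2° aspect change:

```python
import numpy as np

lam = 0.03                                   # wavelength, 3 cm as in Fig. 29
x0, y0 = 8.0, 5.0                            # scatterer position (m), hypothetical
theta = np.linspace(-0.0628, 0.0628, 512)    # 7.2 deg total aspect change (rad)
D = x0 * np.sin(theta) + y0 * np.cos(theta) - y0   # exact relative range history
s = np.exp(-4j * np.pi * D / lam)            # phase-corrected returns S(p)
xs = np.arange(-12.0, 12.01, 0.1)            # cross-range pixel positions (m)
# Linear (range-Doppler) profile: assumes D ~ x' * theta at every pixel.
lin = np.abs(np.exp(4j * np.pi * np.outer(xs, theta) / lam) @ s) / theta.size
# Correlation profile: matched to the exact nonlinear history at (x', y0).
Dm = np.outer(xs, np.sin(theta)) + y0 * (np.cos(theta) - 1.0)
cor = np.abs(np.exp(4j * np.pi * Dm / lam) @ s) / theta.size
```

The correlation profile peaks at x' = 8 m with essentially no loss, while the linear profile is smeared by the uncompensated quadratic range term and its peak is visibly reduced, mirroring the RCS losses annotated in Fig. 30.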

both constraints on range-Doppler imaging are violated by an extraordinarily large target. (The radar data is simulated.) Fig. 30 shows the same range-Doppler image beside a correctly calculated correlation image. The correlation image correctly focuses, with no loss of RCS, all scatterers in this large target.

Near the end of Subsection IVB-1, a not unusual case was described where a linear range-Doppler image was limited in cross-range resolution by θ̈₀. This occurs because early (and late) in a near overhead pass of an Earth-stable target, the aspect rate θ̇ is small and rapidly varying. Fig. 31(a) shows a range-Doppler image calculated using a Δθ that violates (49) by a factor of 7. (Again, the radar data is simulated.) The actual scatterer locations can be seen in Fig. 31(b), which was calculated by correlation imaging using the same data interval. The nature and location of the smearing on the range-Doppler image confirms that the θ̈₀x' contribution to D̈₀ in (46c) is the dominant source of smearing. Again, on the correlation image, each scatterer is correctly focused. Similar image improvement in these cases has been obtained with real data.

Multiple Rotation Imaging. The data used in multiple rotation images covers the same interval Δθ (Fig. 16) on each of many successive rotations. According to the discussion in Subsection IIC, useful multiple rotation images require that the rotation rate be much larger than the rate of change κ̇ of the aspect deviation angle. For satellites in orbit about the Earth, it can be shown that the largest possible value of κ̇ when data is taken at geosynchronous altitudes is about 1.3 × 10⁻⁴ rad/s. This limiting value is the fastest rate of the RLOS in inertial space, when the satellite's speed is less than Earth escape velocity. All lower values of κ̇ can occur. Many geostationary satellites have κ̇ = 0. At lower altitudes, both larger and smaller values of κ̇ are possible, but the smaller values of κ̇ occur much less frequently and for shorter periods of time.

There are two useful classes of multiple rotation imaging: three-dimensional images and stroboscopic images. Since the boundary between three-dimensional and stroboscopic imaging is not sharp, an arbitrary boundary will be defined. An image will be called stroboscopic if ρ(z') > z' extent of target, where ρ(z') is given by (37). It will be called three dimensional if amb(z') > z' extent of target > ρ(z'), where amb(z') is given by (32). This arbitrarily includes in the three-dimensional category images where the target's z' extent is only a little greater than the resolution.

The aspect sampling that permits three-dimensional imaging is illustrated in Fig. 16. The first requirement in the selection of data for three-dimensional imaging is to ensure that amb(x'), given by (31), is greater than the x' extent of the target and that amb(z'), given by (32), is greater than the z' extent. At X band with typical satellite dimensions, this generally requires that both δθ and δκ be a small fraction of a degree. The second requirement is that enough data be used to give the desired resolution in both the x' and the z' directions. At X band, this generally requires Δθ and Δκ of a few degrees.

For stroboscopic imaging, the aspect deviation angle κ effectively remains constant over the entire data set

Fig. 31. (a) Range-Doppler image of Earth-stable target at low elevation from overhead pass (θ̇ rapidly changing; Δθ ≈ 6°; simulated data). (b) Correlation image from same data used to calculate (a). (Plot annotations: AL = 320 km; EL = 7 to 17 deg; MAX EL = 90 deg; axes: cross range (m) vs. range (m).)
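As a numerical aside on the limits (49) and (51), the following fragment (illustrative X-band values; C taken as 1 and cross-range resolution λ/(2Δθ) assumed) evaluates the focus-region bound 4ρ²/λ and the corresponding coherent interval:

```python
# Illustrative evaluation of the linear range-Doppler imaging limits.
lam = 0.03          # wavelength (m), X band
rho = 0.5           # desired (equal) range and cross-range resolution (m)
theta_dot = 0.02    # RLOS aspect rate (rad/s), hypothetical

extent_limit = 4.0 * rho**2 / lam   # |x'| and |y'| focus bound (m), about 33.3
dtheta = lam / (2.0 * rho)          # aspect change giving resolution rho (rad)
dT = dtheta / theta_dot             # coherent imaging interval (s)
```

Scatterers displaced much beyond roughly 33 m from the origin would begin to smear, as in Fig. 29; the required coherent interval for these numbers is 1.5 s.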



used in the image. The same arc of a κ = constant small circle on the unit sphere in the target coordinate system is sampled again and again on successive target rotations. This redundant sampling of the same arc of aspects is useful mainly because it permits coherent integration to suppress the noise. With uniform weighting, correlation imaging is the optimum process for this coherent integration over rotation periods. With each rotation weighted the same, the noise power level in the correlation image is suppressed by the factor of 1/N, where N is the number of rotations used. Because the weights are normalized, the target RCS in the image remains the same.

If the RLOS used are approximately coplanar, the stroboscopic image, like a single rotation image using the same Δθ interval, is two dimensional. The image function G(r₀) need only be calculated over the z' = 0 plane.

An additional important property of stroboscopic imaging is that it can be used to suppress the cross-range ambiguous images that occur with PRF-limited data. With a large rapidly rotating target, δθ, the change in aspect between the pulses used in an image calculated from a single rotation, may be so large that the x' extent of the target exceeds the ambiguity interval amb(x') given by (31). This would cause the ambiguous images on each side to overlap the true image. If the apparent rotation period of the satellite is not an integral multiple of the radar's interpulse period, the aspects sampled by the pulses on successive rotation periods are not precisely the same and the sampled aspects are interleaved. This interleaving of aspects, which usually occurs, causes the "ghost" (i.e., ambiguous) images in successive single rotation correlation images to be misaligned in phase, while the true images are correctly aligned in phase with each other. With coherent summing over rotation periods, the ghost images are partially suppressed. Optimal suppression of the ghost images can be achieved with proper selection of nonuniform weights between rotation periods. The nonuniform weights usually cause a modest reduction in the SNR gain for the true image.

Any rapidly rotating geostationary satellite whose rotation axis has a constant orientation relative to the Earth will have a constant κ, so stroboscopic imaging can be done. At geostationary ranges, the improved image SNR is needed. Other deep space targets may possibly be found with the rapid rotation and extremely slow κ̇ needed for stroboscopic imaging.

(3) Data Acquisition: Ground-Based Radars. Ground-based radars for imaging of man-made moving objects require, among other attributes, sufficient sensitivity for the objects' range, good frequency stability for phase measurements, a PRF capability greater than the greatest Doppler frequency spread of the objects to be imaged, and a PRF control system to insure that transmitted pulses will not interfere with the received pulses.

The long range imaging radar (LRIR) is an example [11]. It was designed to meet these requirements for artificial satellites out to geosynchronous ranges. Some of its design parameters are listed in [11]. In deep space, coherent integration over a large number of pulses is generally required for adequate image signal-to-noise ratio. At a PRF of 1200, unambiguous images of objects with cross-range extents exceeding 4 m can be obtained with rotation periods as short as 2 s.

These ground-based radars, like (spotlight) SAR radars, can use time-bandwidth exchange techniques to pulse-compress a wideband FM waveform. Fig. 32, which is similar to Fig. 20, shows a possible simplified radar system configuration. The most essential difference from Fig. 20 is that the real-time tracking system, with its input measurements of range, azimuth, and elevation, replaces the motion measurement system of the airborne SAR. The other major, but not essential, difference is that the steps that represent polar-format processing are omitted. Instead, in Fig. 32, the pulse-compressed radar data, along with auxiliary data, are recorded for separate image processing. Fig. 21, which shows the signals from the transmitter, from the receiver, and from the correlation mixer of a (spotlight) SAR, also applies to the ground-based radar. However, the small range extents of typical man-made moving targets cause the two parallelograms on the figure to become extremely slender. Essentially, the full length of the chirp pulse can be used.

The recorded pulse-compressed signals and auxiliary data are used to calculate FFT images (see Section IVB-1). Because of the small size of the objects, single FFT images with cross-range resolution equal to the range resolution often are well focused over the full extent of the target. If extended images are required, they also can be calculated. Since the extended images require better trajectory and rotational models of the objects, human intervention would normally be required.

The recorded auxiliary data contains metric information which can be used along with the pulse-compressed signal data to improve the trajectory estimate. The "predicted range" which is recorded gives the precise range to one of the range recording bins (i.e., to one of the range FFT outputs). This allows the range to every range bin to be accurately calculated at every pulse.

Dynamical Modeling for Space Object Imaging. Dynamical models describing the orbital motion of the target center of mass and the rotational motion of the target relative to the distant stars are essential inputs in the calculation of the correlation image function of (16). The orbital model is needed to perform the overall center of mass Doppler component phase correction of (17). Both the orbital and the rotational motion model are needed to determine the extent of the integration time (number of pulses) required to sample the aspects necessary to provide a good resolution image and to determine the correct relative range rate (frequency to cross-range conversion scales).

Precision Requirements for Dynamical Models. As previously indicated (Section IIIC), in order to obtain
Fig. 32. Simplified ground-based radar systems.
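The time-bandwidth exchange (stretch, or deramp) pulse compression used in such systems can be sketched as follows: the correlation mixer turns a scatterer's offset from the reference range into a constant beat tone, and an FFT recovers the range. All waveform parameters below are hypothetical.

```python
import numpy as np

B, T = 1.0e9, 1.0e-3        # chirp bandwidth (Hz) and duration (s)
k = B / T                   # linear FM rate (Hz/s)
fs = 10.0e6                 # video sampling rate after the mixer (Hz)
c = 3.0e8                   # propagation speed (m/s)
dR = 30.0                   # scatterer offset from the reference range (m)
tau = 2.0 * dR / c          # extra two-way delay (s)

n = 10000                   # T * fs video samples
t = np.arange(n) / fs
rx = np.exp(1j * np.pi * k * (t - tau) ** 2)   # received chirp (baseband)
ref = np.exp(1j * np.pi * k * t ** 2)          # reference (transmitted) chirp
video = rx * np.conj(ref)                      # beat tone at frequency -k*tau

spec = np.abs(np.fft.fft(video))
f_peak = abs(np.fft.fftfreq(n, 1.0 / fs)[np.argmax(spec)])
R_est = f_peak * c / (2.0 * k)                 # invert f = 2*k*dR/c
```

Here the 1 GHz chirp is exchanged for a 200 kHz beat tone well within the 10 MHz video bandwidth, and R_est recovers the 30 m offset. The slender parallelograms noted for Fig. 21 correspond to the narrow spread of such beat frequencies for targets of small range extent.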

quality images from the calculation of (16), the combined dynamical model parameters should be determined to sufficient precision that they determine range variations to any point r₀ in the object to a precision of a small fraction of a wavelength over the data interval of the image calculation. Experience has shown that in many cases orbital and rotational motion parameters determined from radar data can give on the order of λ/30 range precision or better. In particular, one can demonstrate that when one is estimating orbital parameters, the calculated range variation error due to orbital parameter errors can often be estimated by

R_E ≈ σ(R_obs)·ΔT/DT   (52)

where σ(R_obs) is the rms range observation error. Typically, σ(R_obs) is on the order of several centimeters to a meter. DT is the data time interval over which the trajectory fit is calculated. It is typically 10 min or longer. ΔT is the coherent imaging interval. For an Earth-stable target, ΔT may be typically one or two orders of magnitude smaller than DT (which is limited by the duration of the pass) and consequently the λ/30 precision requirement may or may not be achieved. For rapidly rotating targets, ΔT is usually sufficiently smaller than DT that the precision requirement is easily reached. Also, for some of these rapidly rotating targets, techniques using phase-derived ranges have been developed for calculating orbit fits with rms range observation errors, σ(R_obs), on the order of 1 mm. With phase-derived ranges, the λ/30 precision is achievable for ΔT ≈ DT and thus all the recorded data can be used in one coherent interval.

Rotational model parameters, such as the target's angular velocity and the orientation of its rotation axis, often cannot be determined as reliably as the orbital parameters. The development of techniques to determine rotational motion parameters is a crucial and still very active endeavor. The successful techniques depend critically on the nature of the data, such as the extent of change of the orientation of the target. Refinement of rotational motion parameters using phase-derived range measurements has been successfully achieved for a few selected targets. In every case, a preliminary approximate rotational model must be obtained before refinements with phase-derived ranges can be performed. Often the preliminary model, however, may be difficult to determine.

Except at low elevations or over very long time intervals, a propagation model that is a standard troposphere model for the radar site is adequate. The troposphere model errors and the ionospheric effects do not change rapidly enough to defocus the resulting images. In the exceptional cases, orbit fit range residuals from phase-derived ranges have been fitted to smooth functions of time. These smooth functions have been used for additional propagation corrections.

Extended Coherent Processing (ECP). Here we discuss an algorithm called ECP which has been developed for the purpose of efficiently calculating, analysing, and displaying the pulse-by-pulse correlation



image function defined by (14). The method used by ECP further shortened by calculating u(t) and ii(t) only once
is basically of the multisubaperture processing type for each subinterval. The simple dot products, (53a) and
discussed in Section IIIB-2. (53b), are all that must be repeated for each (ro) grid
This program estimates G(ro) for arbitrarily long sets point in the image.
of data by means of a coherent summation of linear If the radar range is not much larger than the object,
images evaluated over shorter segments of the data the dot product approximation of (53) cannot be used.
intervals. Like all correlation imaging, ECP requires Instead, the instantaneous position of the radar in the
orbital and rotational models that give precise estimates target's coordinate system is given by
of range variations at all the input data times. These rr = -R(0,t)u
precise models are used to correctly account for the (54)
nonlinear motion of scatterers in a wide-angle image, and the exact relative range is given by
and/or the rotation-to-rotation relative motion in a
multiple rotation image. The ECP image is an excellent D(ro) = Iro - rrl - Ir,^l (55)
approximation to G(ro). Furthermore, its evaluation is
also very efficient, being at least an order or magnitude The relative range rate D is obtained by differentiating
faster than the pulse-by-pulse calculation of G(ro). the expression for D, holding ro constant. These exact
The ECP Algorithm. To calculate a correlation image using (14) or (15)-(17), it is necessary to calculate the quantity W(p)S(p) exp[-j4πR(p)/λ] (or the equivalent quantity from (16)) once for every ro grid point in the image and to repeat these calculations for every pulse. These repetitive calculations, although straightforward, require about an order of magnitude more computer time than calculating a linear range-Doppler image using (48b) over the same set of pulses. This fact suggests that it would be worthwhile to reformulate extended correlation imaging so as to use range-Doppler image functions, P(D,Ḋ), calculated over subintervals of the total data set.
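The cost of the pulse-by-pulse evaluation can be seen in a small sketch of the direct correlation sum. This is only an illustration under simplifying assumptions (a one-dimensional grid and a single complex sample per pulse stand in for the sampled range profiles); the names `correlation_image` and `rel_range` are ours, not part of the original formulation.

```python
import numpy as np

def correlation_image(grid, signals, rel_range, wavelength, weights=None):
    """Direct, pulse-by-pulse correlation image (illustrative sketch).

    grid      : (M,) candidate scatterer positions ro (1-D for brevity)
    signals   : (P,) complex phase-history samples, one per pulse
    rel_range : rel_range(ro, p) -> modeled relative range D(ro, p)
    """
    P = len(signals)
    w = np.ones(P) if weights is None else weights
    G = np.zeros(len(grid), dtype=complex)
    for p in range(P):                 # repeated for every pulse ...
        for m, ro in enumerate(grid):  # ... and for every ro grid point
            D = rel_range(ro, p)       # precise model-based range
            G[m] += w[p] * signals[p] * np.exp(-4j * np.pi * D / wavelength)
    return G
```

The nested loops over pulses and grid points are exactly the repetitive calculations the text describes; the reformulation below replaces the inner work with per-subinterval FFTs.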
In order to do this reformulation, it is necessary to replace (46a) and (46b) by a calculation of D₀ and Ḋ₀ as a function of (x,y,z) for a more general fixed orientation of the (x,y,z) coordinate system with respect to the target. The y' axis can be aligned with the range direction at the center time of only one range-Doppler subinterval. Thus (46a) and (46b) cannot be used for any other subintervals. Fortunately, the calculation of D and Ḋ from (x,y,z) is straightforward given any rectangular coordinate system that rotates with the target.

Let the time-varying unit vector u(t) be aligned with the RLOS at all times. This unit vector, and its time derivative u̇(t), can be calculated in the target's coordinate system ro = (x,y,z) using the target's orbital and rotational models. The equations needed are commonplace tools in applied satellite dynamics. They will not be given here. If the radar range is much larger than the target, the relative range to an ro grid point is given in terms of this unit vector by the dot product

D(ro) = u · ro    (53a)

and the relative range rate is given by

Ḋ(ro) = u̇ · ro.    (53b)

Equation (53b) is obtained by differentiating (53a), holding ro constant. These relative ranges and relative range rates need only be calculated at the center times of the range-Doppler subintervals. The calculations are further shortened by calculating u(t) and u̇(t) only once for each subinterval. The simple dot products, (53a) and (53b), are all that must be repeated for each ro grid point in the image.

If the radar range is not much larger than the object, the dot product approximation of (53) cannot be used. Instead, the instantaneous position of the radar in the target's coordinate system is given by

rr = -R(0,t)u    (54)

and the exact relative range is given by

D(ro) = |ro - rr| - |rr|.    (55)

The relative range rate Ḋ is obtained by differentiating the expression for D, holding ro constant. These exact calculations require a modest increase of computer time over the time required by the dot product approximation. For a bistatic system, the relative range would be the average between relative ranges calculated for the transmitter and the receiver.
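The two range models, (53a)/(53b) for the far-field case and (54)/(55) for the exact case, can be sketched as follows. The function names are illustrative; the orbital and rotational models that supply u(t) and u̇(t) are taken as given.

```python
import numpy as np

def relrange_farfield(u, udot, ro):
    """Far-field case (53a)/(53b): when the radar range greatly exceeds
    the target size, D and its rate are dot products with the RLOS unit
    vector u(t) and its time derivative udot(t)."""
    return float(u @ ro), float(udot @ ro)

def relrange_exact(R, u, ro):
    """Exact case (54)/(55): the radar position in target coordinates is
    rr = -R(0,t) u, and D(ro) = |ro - rr| - |rr|."""
    rr = -R * np.asarray(u)
    return float(np.linalg.norm(ro - rr) - np.linalg.norm(rr))
```

For a radar range a million times the target size, the two forms agree to within a few microns per meter of target extent, which is why the cheap dot products suffice in the far-field case.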

394 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. AES-20, NO. 5 JULY 1984

Let D(n) ≡ D(ro,n) and Ḋ(n) ≡ Ḋ(ro,n) denote the calculated values of D₀ and Ḋ₀ at the center time of the nth range-Doppler image subinterval to be used in calculating the correlation image. With this notation, the linear approximation equivalent to (47) gives the relative range at pulse p in the nth subinterval as

D(ro,p) ≈ D(ro,n) + Ḋ(ro,n) p/PRF    (56)

where, as in (47), p = 0 at the center time of the nth range-Doppler subinterval. When (56) is used in (16), the contribution to G(ro) from the nth subinterval becomes

G(ro,n) = exp[-4πj D(n)/λ] P[n, D(n), Ḋ(n)]    (57)

where

P[n, D, Ḋ] = Σ_p W(p) S(p) exp[-4πj Ḋ p/(λ PRF)].    (58)

The summation in (58) is over the set of pulses {p}_n contained in the nth subinterval. In (58), as in (48b), the phase-corrected signal S(p) is sampled at the approximate range [R(0,t) + D], instead of at the more precise range [R(0,t) + D + Ḋ p/PRF], in order to permit FFT evaluation of the Fourier transform.

The ECP approximation to the correlation image function G(ro) is obtained by summing (57) over the N subintervals:

G_ECP(ro) = Σ_{n=1}^{N} w(n) exp[-4πj D(n)/λ] P[n, D(n), Ḋ(n)].    (59)

For flexibility and convenience, a new set of weights w(n) has been introduced here, controlling the relative contribution from the various range-Doppler image functions. To closely approximate the original correlation
image, the original weights W(p) would be used in (58) and the new weights w(n) would be uniform.

Effective calculation of (58) using FFT methods, as in Subsection IVB-1, will give the function P[n,D,Ḋ] only at a discrete set of (D,Ḋ) grid points. In general, the calculated points [D(ro,n), Ḋ(ro,n)], where the function P is needed for (59), will fall between the (D,Ḋ) grid points at which the function P has been calculated. The calculated function P(n,D,Ḋ) is stored in a two-dimensional array or table containing the real and imaginary parts of P. Bivariate linear interpolation is used to extract P[n,D(ro,n),Ḋ(ro,n)] from this table. The interpolation is, in effect, performed separately on the real and imaginary parts of P. The table or array in which P(n,D,Ḋ) is stored is called the nth "periodogram."
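A sketch of the table lookup and the subinterval summation (59) might look like the following. The names and the table layout are our assumptions; the periodograms themselves are taken as already computed.

```python
import numpy as np

def bilinear(table, D, Ddot, D0, dD, Dd0, dDd):
    """Bivariate linear interpolation in a stored complex periodogram.
    table[i, j] holds P at range D0 + i*dD and range rate Dd0 + j*dDd;
    by linearity this interpolates the real and imaginary parts at once."""
    fi, fj = (D - D0) / dD, (Ddot - Dd0) / dDd
    i, j = int(np.floor(fi)), int(np.floor(fj))
    a, b = fi - i, fj - j
    return ((1 - a) * (1 - b) * table[i, j] + a * (1 - b) * table[i + 1, j]
            + (1 - a) * b * table[i, j + 1] + a * b * table[i + 1, j + 1])

def ecp_point(tables, grids, D, Ddot, w, lam):
    """ECP estimate at one ro grid point, in the manner of (59): a
    weighted, phase-corrected sum of interpolated periodogram values
    over the N subintervals."""
    total = 0j
    for n, table in enumerate(tables):
        Pn = bilinear(table, D[n], Ddot[n], *grids[n])
        total += w[n] * np.exp(-4j * np.pi * D[n] / lam) * Pn
    return total
```

Because bilinear interpolation is exact for functions linear in each grid coordinate, its error is controlled entirely by the grid spacing, which is the point of the numerical considerations below.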
When (59) is used to approximate the original correlation image, one is effectively modeling the smoothly varying actual relative range D(ro,p) by a piecewise linear function of time where it is used to calculate the phase corrections in (58) and (59). When D(ro,p) is used in extracting S(p) from the radar data for (58), it is approximated by a step function that takes a new constant value for each range-Doppler subinterval. This modeling is illustrated in Fig. 33 for one ro grid point in a rapidly rotating target. The sloping lines tangent to the smooth curve are the piecewise linear approximations used in phase corrections. The horizontal and vertical dashed lines are the step function approximation used for the relative range in sampling the radar data.

The ECP Algorithm: Numerical Considerations. To ensure that the preceding approximations do not cause significant errors, both constraints on range-Doppler imaging expressed by inequalities (49) and (50) must be satisfied by every range-Doppler subinterval used.

Low cross-range sidelobes in the periodogram images are desirable if they can be obtained without degrading the extended image. Using a sidelobe suppression set of tapered weights W(p) in calculating the periodograms accomplishes this.

The radar returns S(p) must be extracted from the recorded pulse-compressed radar signals at the desired range Rd = R(0,t) + D without introducing harmful errors. Interpolation is required. If the signals are recorded at a range spacing of c/(4BW), where c is the speed of light and BW is the bandwidth, then linear interpolation is adequate. This sample spacing is half the maximum spacing allowed by the sampling theorem. It is obtained by padding half the pulse compression FFT inputs with zeros.
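The zero-padding step can be sketched as follows, assuming the recorded signal is available as frequency-domain samples before the pulse-compression inverse FFT; the function names are ours.

```python
import numpy as np

def compress_oversampled(spectrum):
    """Pulse compression with twofold oversampling: half the FFT inputs
    are zero padding, so the compressed profile comes out sampled at
    c/(4 BW), half the maximum spacing allowed by the sampling theorem.
    Linear interpolation between these samples is then adequate."""
    n = len(spectrum)
    padded = np.concatenate([np.asarray(spectrum, dtype=complex),
                             np.zeros(n, dtype=complex)])
    return np.fft.ifft(padded) * 2.0   # rescale: even samples match the
                                       # unpadded compression exactly

def sample_at_range(profile, frac_index):
    """Linear interpolation of a complex range profile at a fractional
    sample index (the interpolation used when extracting S(p))."""
    i = int(np.floor(frac_index))
    a = frac_index - i
    return (1 - a) * profile[i] + a * profile[i + 1]
```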
If the PRF is much greater than the Doppler bandwidth of the object, then a number of successive pulses can be presummed into each FFT input at a given value of D. Prior to this presumming, the signals must be phase-corrected (17) and interpolated to the range Rd. If the number of pulses presummed is less than the number of pulses between FFT inputs, it is best to select for presumming a cluster of adjacent pulses centered on the time of each FFT input.
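A minimal sketch of the presumming step, under the simpler assumption that the pulses form contiguous clusters (the centered-cluster selection described above for widely spaced FFT inputs is omitted), and with our own function name:

```python
import numpy as np

def presum(pulses, k):
    """Coherently sum groups of k adjacent pulses into one FFT input.
    Valid when the PRF is much greater than the object's Doppler
    bandwidth; the pulses are assumed already phase-corrected and
    interpolated to the range Rd."""
    n = (len(pulses) // k) * k                      # drop leftover pulses
    return np.asarray(pulses)[:n].reshape(-1, k).sum(axis=1)
```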
We call the Fourier transform in (58) "symmetric" because the reference phases in the exponential function are all zero at the center time of the range-Doppler interval, since p is zero at this time. The FFT calculation of P(D₀,Ḋ₀) must be arranged to calculate such a symmetric Fourier transform in order for (57) and (59) to be valid.
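With numpy's FFT conventions, one way to arrange such a symmetric transform is to shift the subinterval's center sample to index 0 before transforming; this is a sketch under our naming, not the original implementation.

```python
import numpy as np

def symmetric_dft(s):
    """'Symmetric' Fourier transform of one subinterval: the reference
    phases are zero at the center sample (p = 0 at the subinterval's
    center time).  Circularly shifting the center sample to index 0
    before the FFT achieves this."""
    return np.fft.fft(np.fft.ifftshift(np.asarray(s)))
```

A tone defined symmetrically about the center time then transforms to a bin value with zero reference phase, as (57) and (59) require.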
Fig. 33. Relative range to a scatterer versus time. Piecewise linear approximation shown for phase (solid lines) and step function approximation for range sampling (dashed lines). The length of the FFT processing interval is exaggerated to make the errors visible.

After the periodograms P[n,D,Ḋ] are correctly calculated and stored, a possible source of error in calculating (59) is the bivariate linear interpolation used to get P[n,D(n),Ḋ(n)] from the periodogram tables. Errors here are controlled by calculating and storing the periodograms over a sufficiently fine grid in both the range and range rate directions.

A grid spacing in relative range D of c/(4BW) is sufficiently small to give reasonably accurate range interpolation. The grid spacing required in relative range rate Ḋ depends on the periodogram sidelobe level. If the sidelobes in the periodograms are low, a grid spacing of λ/(4ΔT) in Ḋ gives reasonably accurate range-rate interpolation. This too is half the maximum grid spacing allowed by the sampling theorem. This requires padding about half the FFT input array with zeros for Doppler
imaging. If the sidelobes in the periodograms are large, then a finer Ḋ grid is required. The relative weights w(n) to use between periodograms depend on the type of imaging. For three-dimensional imaging, a sidelobe suppression taper, as a function of K, suppresses sidelobes in the second cross-range direction z'. For stroboscopic imaging, uniform weights between periodograms give optimum SNR improvement, but the nonuniform weights discussed in Section IVB-2 may be needed to suppress ambiguous images in the original cross-range x' direction. For wide-angle imaging, it has been found best to use periodograms that overlap 50 percent, i.e., half the pulses used in the nth periodogram are reused in the (n + 1)th periodogram. A sidelobe suppression weighting W(p) is used in calculating these periodograms, and similar weighting w(n) is used between periodograms in calculating (59).
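The subinterval bookkeeping for the wide-angle case can be sketched as follows; the Hamming window stands in only as an example of a sidelobe-suppression taper W(p), and the names are ours.

```python
import numpy as np

def overlapping_subintervals(num_pulses, length):
    """Pulse index sets for 50-percent-overlapping subintervals: half
    the pulses of the nth set are reused in the (n + 1)th, as found
    best for wide-angle imaging."""
    step = length // 2
    return [np.arange(s, s + length)
            for s in range(0, num_pulses - length + 1, step)]

def taper(length):
    """Illustrative tapered weighting W(p) for one subinterval
    (Hamming chosen only as an example of sidelobe suppression)."""
    return np.hamming(length)
```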
V. SUMMARY

In this paper we have presented a general treatment of range-Doppler radar imaging techniques and have given detailed discussions of some of the most prominent and illustrative applications, such as airborne SAR imaging and space object (planets, artificial satellites) imaging from ground-based wideband radars. We have also stated the general properties of, and the necessary requirements for, useful radar images. The different image processing algorithms required to perform particular imaging tasks have been introduced and outlined.

We have stressed that all these imaging techniques are basically equivalent and can be developed from a common theoretical background, which is, in fact, also common with tomographic imaging applications.

These techniques have been conceived and developed to deal with the problem of the scatterer's motion through resolution cells, thus permitting a much wider spectrum of applications than allowed by obeying the stringent requirements of linear range-Doppler imaging. These techniques, in order to handle the data-intensive applications, have also been developed to be computationally efficient.

The differences among the various computational algorithms are affected by the approximations that are valid in specific applications and also by tradeoffs between image quality and computational efficiency.

ACKNOWLEDGMENT

The developments reviewed in this paper are the result of significant contributions over nearly three decades by various researchers too numerous to list. The references provide partial documentation of these contributions.
REFERENCES

[1] Cutrona, L.J., Vivian, W.E., Leith, E.N., and Hall, G.O. (1961) A high-resolution radar combat-surveillance system. IRE Transactions on Military Electronics, MIL-5 (Apr. 1961), 127.
[2] Leith, E.N. (1977) Complex spatial filters for image deconvolution. Proceedings of the IEEE, 65, 1 (Jan. 1977), 18.
[3] Jordan, R.L. (1980) The Seasat-A synthetic aperture radar system. IEEE Journal of Oceanic Engineering, OE-5 (Apr. 1980), 154.
[4] Marlow, H.C., Watson, D.C., Van Hoozer, C.H., and Freeny, C.C. (1965) The RAT SCAT cross-section facility. Proceedings of the IEEE, 53, 8 (Aug. 1965), 946.
[5] Walker, J.L. (1980) Range-Doppler imaging of rotating objects. IEEE Transactions on Aerospace and Electronic Systems, AES-16, 1 (Jan. 1980), 23-52.
[6] Kirk, J.C., Jr. (1975) A discussion of digital processing in synthetic aperture radar. IEEE Transactions on Aerospace and Electronic Systems, AES-11 (May 1975), 326-337.
[7] Kirk, J.C., Jr. (1975) Digital synthetic aperture radar technology. In IEEE 1975 International Radar Conference Record, p. 482.
[8] Kirk, J.C., Jr. (1975) Motion compensation for synthetic aperture radar. IEEE Transactions on Aerospace and Electronic Systems, AES-11, 3 (May 1975), 338-348.
[9] Kovaly, J.J. (1977) High resolution radar fundamentals. In E. Brookner (Ed.), Radar Technology. Dedham, Mass.: Artech House, 1977.
[10] Brookner, E. (1977) Synthetic aperture radar spotlight mapper. In E. Brookner (Ed.), Radar Technology. Dedham, Mass.: Artech House, 1977.
[11] Bromaghim, D.R., and Perry, J.P. (1980) A wideband linear FM ramp generator for the long-range imaging radar. IEEE Transactions on Microwave Theory and Techniques, MTT-26, 5 (May 1980), 322.
[12] Shapiro, J.J. (1968) Planetary radar astronomy. IEEE Spectrum, 5, 3 (Mar. 1968), 70.
[13] Prickett, M.J., and Chen, C.C. (1980) Principles of inverse synthetic aperture radar (ISAR) imaging. In IEEE 1980 EASCON Record, p. 340.
[14] Sherwin, C.W., Ruina, J.P., and Rawcliffe, R.D. (1962) Some early developments in synthetic aperture radar systems. IRE Transactions on Military Electronics, MIL-6 (Apr. 1962), 111.
[15] Jolley, J.H., and Dotson, C. (1981) Synthetic aperture radar improves reconnaissance. Defense Electronics, 13, 9 (Sept. 1981), 111.
[16] Porcello, L.J., et al. (1974) The Apollo lunar sounder radar system. Proceedings of the IEEE (June 1974), 768-783.
[17] Daily, M., Elachi, C., Farr, T., and Schaber, G. (1978) Discrimination of geologic units in Death Valley using dual frequency and polarization radar data. Geophysical Research Letters, 5 (1978), 889.
[18] Shuchman, R.A., Davis, C.F., and Jackson, P.L. (1975) Contour stripmine detection and identification with imaging radar. Bulletin of the Association of Engineering Geology, XII (1975), 99.
[19] Brown, W.E., Jr., Elachi, C., and Thompson, T.W. (1976) Radar imaging of ocean surface patterns. Journal of Geophysical Research, 81 (1976), 2657.
[20] Shemdin, O.H., Brown, W.E., Jr., Staudhammer, F.G., Shuchman, R., Larson, R., Zelenka, J., Rose, D.B., McLeish, W., and Berles, R.A. (1978) Comparison of in situ and remotely sensed ocean waves off Marineland, Florida. Boundary Layer Meteorology, 13 (1978), 173.
[21] Shuchman, R.A. (1981) Processing synthetic aperture radar data of ocean waves. In J.F.R. Gower (Ed.), Oceanography from Space. New York: Plenum, 1981, p. 477.
[22] Gray, A.L., Hawkins, R.K., Livingstone, C.E., Arsenault, Drapier, and Johnstone, W.M. (1982) Simultaneous scatterometer and radiometer measurements of sea-ice microwave signatures. IEEE Journal of Oceanic Engineering, OE-7 (1982), 20.
[23] Luther, C.A., Lyden, J.D., Shuchman, R.A., Larson, R.W., Holmes, Q.A., Nuesch, D.R., Lowry, R.T., and Livingstone, C.E. (1982) Synthetic aperture radar studies of sea ice. In IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 1982, pp. TA-8, 1.1-1.9.
[24] Beal, R.C., DeLeonibus, P., and Katz, I. (Eds.) (1981) Spaceborne Synthetic Aperture Radar for Oceanography. Baltimore, Md.: Johns Hopkins Press, 1981.
[25] Gonzalez, F.I., Beal, R.C., Brown, W.E., Jr., DeLeonibus, P.S., Gower, J.F.R., Lichy, D., Ross, D.B., Rufenach, C.L., Sherman, J.W., III, and Shuchman, R.A. (1979) SEASAT synthetic aperture radar: Ocean wave detection capabilities. Science, 204 (1979), 418.
[26] Gower, J.F.R. (Ed.) (1981) Oceanography from Space. New York: Plenum, 1981.
[27] Elachi, C., et al. (1982) Shuttle imaging radar experiment. Science, 218 (1982), 996.
[28] Duchossois, G., and Honvault, C. (1981) The first ESA remote sensing satellite system ERS-1. Presented at the 15th International Symposium on Remote Sensing of the Environment, Ann Arbor, Mich., May 1981.
[29] Raney, R.K. (1982) The Canadian RADARSAT program. In Proceedings of the 1982 International Geoscience and Remote Sensing Symposium (IGARSS '82), IEEE Catalog 82CH14723-6.
[30] Matsumoto, K., Kishida, H., Yamada, H., and Hisoda, Y. (1982) Development of active microwave sensors in Japan. In Proceedings of the 1982 International Geoscience and Remote Sensing Symposium (IGARSS '82), IEEE Catalog 82CH14723-6.
[31] Green, P.E., and Price, R. (1960) Signal processing in radar astronomy. Technical Report 234, Lincoln Laboratory, Massachusetts Institute of Technology, Cambridge, Oct. 1960.
[32] Green, P.E. (1968) Radar measurements of target scattering properties. In J.V. Evans and T. Hagfors (Eds.), Radar Astronomy. New York: McGraw-Hill, 1968, pp. 1-75.
[33] Pettengill, G.H. (1960) Measurements of lunar reflectivity using the Millstone radar. Proceedings of the IRE, 48 (1960), 933.
[34] Pettengill, G.H., et al. (1962) A radar investigation of Venus. Astronomical Journal, 67 (1962), 181.
[35] Smith, W.B. (1963) Radar observations of Venus, 1959 and 1961. Astronomical Journal, 68 (1963), 15.
[36] Muhleman, D.O., Black, N., and Holdridge, D.B. (1962) The astronomical unit determined by radar reflections from Venus. Astronomical Journal, 67 (1962), 191.
[37] Thompson, J.H., et al. (1961) A new determination of the solar parallax by means of radar echoes from Venus. Nature, 190 (1961), 519.
[38] Kotelnikov, V.A., et al. (1962) Radar system employment during radar contact with Venus. Radiotekhnika i Elektronika, 7 (1962), 1715.
[39] Carpenter, R.L., and Goldstein, R.M. (1963) Radar observations of Mercury. Science, 142 (1963), 381.
[40] Pettengill, G.H. (1965) Recent Arecibo observations of Mars and Jupiter. Journal of Research of the National Bureau of Standards D, 69 (1965), 1627.
[41] Dyce, R.B. (1965) Recent Arecibo observations of Mars and Jupiter. Journal of Research of the National Bureau of Standards D, 69 (1965), 1628.
[42] Evans, J.V., et al. (1965) Radio echo observations of Venus and Mercury at 23 cm wavelength. Astronomical Journal, 70 (1965), 486.
[43] Hoffman, R.A., Hurlbut, R.H., Kind, D.E., and Wintroub, H.J. (1969) A 94-GHz radar for space object identification. IEEE Transactions on Microwave Theory and Techniques, MTT-17, 12 (Dec. 1969), 1145.
[44] Brown, W.M. (1967) Synthetic aperture radar. IEEE Transactions on Aerospace and Electronic Systems, AES-3 (1967), 217.
[45] Brown, W.M., and Fredericks, R.J. (1969) Range-Doppler imaging with motion through resolution cells. IEEE Transactions on Aerospace and Electronic Systems, AES-5 (Jan. 1969), 98.
[46] Walker, J.L., Carrara, W.G., and Cindrich, I. (1973) Optical processing of rotating-object radar data using a polar recording format. Technical Report RADC-TR-73-136, AD 526 738, Rome Air Development Center, Rome, NY, May 1973.
[47] Mensa, D., Heidbreder, G., and Wade, G. (1980) Aperture synthesis by object rotation in coherent imaging. IEEE Transactions on Nuclear Science, NS-27 (Apr. 1980), 989.
[48] Mensa, D. (1982) High Resolution Radar Imaging. Dedham, Mass.: Artech House, 1982.
[49] Wehner, D.R., Prickett, M.J., Rock, R.G., and Chen, C.C. (1979) Stepped frequency radar target imagery: Theoretical concept and preliminary results. Technical Report 490, Naval Ocean Systems Center, San Diego, CA, Nov. 1979.
[50] Chen, C.C., and Andrews, H.C. (1980) Target-motion-induced radar imaging. IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 2-14.
[51] Chen, C.C., and Andrews, H.C. (1980) Multifrequency imaging of radar turntable data. IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 15-22.
[52] Mensa, D.L., Halevy, S., and Wade, G. (1983) Coherent Doppler tomography for microwave imaging. Proceedings of the IEEE, 71 (Feb. 1983), 254.
[53] Munson, D.C., and Jenkins, W.K. (1981) A common framework for spotlight mode synthetic aperture radar and computer-aided tomography. In Proceedings of the 15th Asilomar Conference on Circuits, Systems, and Computers (Pacific Grove, Calif., Nov. 9-11, 1981), p. 217.
[54] Munson, D.C., O'Brien, J.D., and Jenkins, W.K. (1983) A tomographic formulation of spotlight-mode synthetic aperture radar. Proceedings of the IEEE, 71 (Aug. 1983), 917-925.
[55] Aleksoff, C.C., LaHaie, I.J., and Tai, A.M. (1983) Optical-hybrid backprojection processing. In Proceedings of the 10th International Optical Computing Conference (Apr. 6-8, 1983), IEEE Catalog 83CH1880-4.
[56] Mims, J., and Farrell, J.L. (1972) Synthetic aperture imaging with maneuvers. IEEE Transactions on Aerospace and Electronic Systems, AES-8 (July 1972), 410-418.
[57] Brown, W.M. (1980) Walker model for radar sensing of rigid target fields. IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 104-107.
[58] Lewitt, R.M. (1983) Reconstruction algorithms: Transform methods. Proceedings of the IEEE, 71 (Mar. 1983), 390-408.
[59] Brown, W.M., and Porcello, L.J. (1969) An introduction to synthetic aperture radar. IEEE Spectrum, 6 (Sept. 1969), 52-62.
[60] Leith, E.N. (1971) Quasi-holographic techniques in the microwave region. Proceedings of the IEEE, 59 (Sept. 1971), 1305-1318.
[61] Kozma, A., et al. (1972) Tilted-plane optical processor. Applied Optics, 11 (Aug. 1972), 1766-1777.
[62] Wu, C. (1980) A digital fast correlation approach to produce SEASAT SAR imagery. In Proceedings of the IEEE 1980 International Radar Conference, pp. 153-160.
[63] Leith, E.N. (1973) Range-azimuth-coupling aberrations in pulse-scanned imaging systems. Journal of the Optical Society of America, 63 (Feb. 1973), 119-126.
[64] Ausherman, D.A. (1980) Digital versus optical techniques in synthetic aperture radar (SAR) data processing. Optical Engineering, 19 (Mar./Apr. 1980), 157-167.
[65] Skolnik, M.I. (1980) Introduction to Radar Systems, 2nd ed. New York: McGraw-Hill, 1980, pp. 34-44.
Dale A. Ausherman (S'66, M'72) was born in Maryville, Mo., on January 12, 1947.
He received the B.S., M.S., and Ph.D. degrees in electrical engineering from the
University of Missouri at Columbia in 1969, 1970, and 1973, respectively.
While attending graduate school, he was engaged in research on the application of
digital image processing to automated diagnosis from medical radiographs. He joined
the Environmental Research Institute of Michigan in 1973 and has done research on
digital image formation processing techniques for SAR systems, including the methods
required for fine resolution imaging of rotating objects. He has acted as a consultant to
government and industry in putting such techniques into practice. He is currently
Deputy Director of the Radar Division.
Dr. Ausherman is a member of Tau Beta Pi, Eta Kappa Nu, and Sigma Xi.

Adam Kozma (M'66) was born in Cleveland, Ohio, on February 2, 1928. He
received the B.S.E. degree in mechanical engineering and the M.S.E. degree in
instrumentation engineering from the University of Michigan in 1952 and 1964,
respectively. He received the M.S. degree in engineering mechanics from Wayne State
University in 1961 and the Ph.D. degree in electrical engineering from Imperial
College, University of London, in 1968.
After a number of years with the automobile industry, he joined the Willow Run
Laboratories of the University of Michigan in 1958 where he worked in the Radar and
Optics Laboratory on synthetic aperture radar, holography, and coherent optics. In
1969, he joined Harris, Inc., where he directed the Ann Arbor Electro-Optics Center
in research and development of coherent optical processing systems, holographic
memories, and various electro-optical devices and systems. Since 1973, he has been
with the Environmental Research Institute of Michigan and has been engaged in
research on synthetic aperture radar techniques. He is currently Vice-President and
Director of the Radar Division. His experience includes industrial and government
consulting assignments, as well as lecturing at Imperial College and the University
of Michigan.
Dr. Kozma is a member of Sigma Xi, the American Defense Preparedness
Association, the American Management Association and a fellow of the Optical
Society of America.

Jack L. Walker (S'61, M'64) was born in Mattawan, Mich., on May 6, 1940. He
received the S.B. degree in electrical engineering from Massachusetts Institute of
Technology in 1962 and the M.S. and Ph.D. degrees in electrical engineering from the
University of Michigan in 1967 and 1974, respectively.
He has worked for General Electric, Bendix, the Willow Run Laboratories of the
University of Michigan and, since 1973, at the Environmental Research Institute of
Michigan. His experience includes research on MTI radar systems, coherent optics,
and synthetic aperture radar. He received the IEEE Aerospace and Electronic Systems
Society M. Barry Carlton award in 1981 for his paper on Range-Doppler Imaging. He
is presently Vice-President and Director of the Infrared and Optics Division.
Dr. Walker is a member of Eta Kappa Nu, Sigma Xi, and the Optical Society of
America.



Harrison M. Jones was born in Ottumwa, Kans., on November 16, 1922. He
received the B.S. degree in naval architecture and marine engineering in 1944 from
Webb Institute of Naval Architecture, now located in Glen Cove, Long Island, N.Y.,
and the M.S. and Ph.D. degrees in physics from Yale University, New Haven, Conn.,
in 1948 and 1956, respectively.
He served in the U.S. Navy from 1943 to 1946. From 1948 to 1952, he was a
Research Assistant in the Department of Physics at Yale, working in theoretical
nuclear physics. During the academic year 1952-1953, he served as Assistant
Professor in the Department of Physics at Vanderbilt University, Nashville, Tenn. He
has been a staff member of the M.I.T. Lincoln Laboratory since 1953, doing research
in radar detection theory, ionospheric physics, orbital mechanics, ballistic missile
defense systems, and radar imaging.
Dr. Jones is a member of the American Physical Society, Sigma Xi, the American
Institute of Aeronautics and Astronautics, the American Defense Preparedness
Association, and the U.S. Naval Institute.

Enrico C. Poggio was born in Milan, Italy, on January 29, 1945. He received the
B.S. degree in physics, the B.S. degree in applied mathematics in 1966, and the
Ph.D. degree in theoretical physics in 1971, all from the Massachusetts Institute of
Technology.
From 1971 to 1978 he held research positions at Columbia University, Harvard
University, and Brandeis University, working in theoretical elementary particle physics
and quantum field theory and published over 20 papers. He joined the M.I.T. Lincoln
Laboratory in 1978 where he has been a staff member in the Radar Imaging
Techniques Group. He is presently on a leave of absence and is a candidate for the
M.S. degree in the management of technology at the M.I.T. Sloan School of
Management.

