
IMAGE FORMATION ON UNDULATING TERRAIN USING THE UPGRADED INGARA L-BAND RADAR SYSTEM

P.B. Pincus*, M. Preiss*, N.J.S. Stacy, D.A. Gray

*Defence Science & Technology Organisation, Edinburgh, SA, Australia
Radar Research Centre, School of EEE, University of Adelaide, SA, Australia

ABSTRACT

A beamforming approach to SAR image formation is shown to facilitate spatially variant aperture trimming which maximises the potential repeat-pass coherence when focusing onto undulating terrain. The method is demonstrated using the upgraded Australian airborne radar system, Ingara.

Index Terms: SAR, interferometry, terrain

1. INTRODUCTION

SAR image formation and interferometry over undulating terrain using an airborne radar system is a challenging task, because the ground-platform collection geometry may change considerably over the aperture and over the scene. One standard method of image formation for spotlight-mode data is the polar-format algorithm (PFA), which employs a tomographic framework to define a single coordinate space and a single object plane onto which the pixels are focused [1]. However, when the scatterers do not lie in this plane, and the flight-track is not straight and level, the PFA image at those scatterer positions will suffer a loss of focus [2]. For interferometry, the coherence between two images depends on the cross-correlation between their spatial frequencies. Furthermore, the support for these spatial frequencies depends on the acquisition geometry at the local scatterer surface. Therefore, a pair will exhibit a loss in coherence, termed geometric or baseline decorrelation, in proportion to the difference in their acquired support [3]. For scatterers in the object plane, this decorrelation can be mitigated by filtering, i.e. trimming the spatial-frequency supports to their common area. However, a global trim will not suffice for undulating terrain [4]. Both these problems can be ameliorated by dividing the image swath into patches and separately forming and trimming the image at each patch using a coordinate system better aligned with the local surface tangent.

A beamforming approach to SAR image formation is to directly compensate for the range from each pulse position to each pixel position. This offers a convenient, although computationally expensive, way to drape imagery over the terrain, i.e. focus the pixels to the ground surface height instead of a flat plane at a nominal height [5]. In SAR terminology, the PFA operates in the spatial-frequency domain, whereas beamforming operates in the time (really the spatial) domain.

In this paper we will formulate SAR beamforming in terms of the spotlight-mode tomographic framework. This enables us to quantify the spatial-frequency support provided by the set of beamformed pulses, i.e. the aperture for each pixel, accounting for the local terrain at that pixel position. A key conclusion is that beamforming is effectively the limiting case of PFA where each pixel position is a distinct motion-compensation point whose object plane is aligned with the local terrain. We then incorporate a spatial-frequency trim into the beamformer which allows for the generation of a pair of images whose pixels have spatial-frequency supports trimmed to a common overlap appropriate to their local surface geometry. The interferometric coherence of such a pair is shown to exhibit less geometric decorrelation, both in simulation and using real data.

Recent papers by colleagues Blacknell and Andre, with others, showed in simulation, and using a ground-based radar imaging a laboratory-controlled scene, that their similar process of spatially variant incoherence trimming improved the coherence over undulating terrain [6, 7]. They did not, though, present a mathematical formulation or algorithmic details.

We begin with a description of the Ingara L-band radar, whose recent upgrade was the motivation for this work.

2. INGARA

Ingara is an airborne imaging radar system built and operated by Australia's Defence Science and Technology Organisation [8]. In a standard configuration, it operates at X-band using a dual-linearly-polarised patch array pulsed in ping-pong mode (alternating transmit, simultaneous receive) to acquire full quad-polarisation imagery. It has been extended with a ground-based receiver to conduct bistatic imaging experiments [9]. In order to investigate the scattering phenomenology of forests, an L-band variant (140 MHz bandwidth, monostatic) was built using a pair of circularly-polarised helical antennas, shown in Fig. 1, to similarly obtain the full scattering matrix. First flights on a Beechcraft 1900C took place in July 2013.

A notable feature of the Ingara radar is that it steers in real-time to a planned spot (in spotlight mode) or line (in stripmap mode) on the ground. This pulse-to-pulse agility increases the tolerance for typical airborne platforms which only roughly approximate the planned track. The X-band system steers in two ways: the antenna is mounted on a two-axis gimbal, so that the beam is physically steered, and the range-gate delay (RGD) is dynamic, so that the timing of the receive window approximately suits the instantaneous range to the desired ground position.

The L-band system does only the latter type of steering, as the helical antennas are fixed; the broader beam-pattern lessens the need for physical steering.

Fig. 1. The Ingara L-band helical antenna pair mounted under the fuselage. The helices turn in opposite directions, giving orthogonal left and right circular polarisations.

The noise level of the L-band radar was estimated by measuring the coherence of the cross-pol channels, which would ideally be unity for reflection-symmetric scattering, and then using the connection between SNR and decorrelation due to noise, γSNR = 1/(1 + 1/SNR). This curve was fitted to 2D histograms of the power in the left-transmit-right-receive (LR) channel (after absolute calibration using trihedral corner reflectors) and the coherence between the LR and RL channels. The approximate noise-equivalent sigma-nought value giving a good fit was −31 dB.
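To make the relation concrete, the following Python sketch (our illustration, not the Ingara calibration software) evaluates the predicted coherence as a function of calibrated backscatter power for a candidate noise-equivalent sigma-nought; this is the curve that was fitted to the 2D histograms.

import numpy as np

def predicted_coherence(sigma0_db, nesz_db):
    """Coherence loss due to additive noise, gamma_SNR = 1 / (1 + 1/SNR),
    with SNR = sigma0 / NESZ expressed as a linear power ratio."""
    snr = 10.0 ** ((sigma0_db - nesz_db) / 10.0)
    return 1.0 / (1.0 + 1.0 / snr)

# Sweep backscatter power for the candidate NESZ of -31 dB quoted above
for s0 in np.linspace(-45.0, -5.0, 9):
    print(f"sigma0 = {s0:6.1f} dB  ->  predicted coherence = {predicted_coherence(s0, -31.0):.3f}")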
3. IMAGE FORMATION AND THE SPATIAL-FREQUENCY SIGNAL MODEL

We formulate image formation in terms of the spotlight-mode tomographic framework, and show how beamforming fits within this framework. Given knowledge of the terrain topography, we can focus to the ground surface and determine the spatial-frequency support for each pixel.

The beamforming described here is straightforward, and is similar to most common SAR backprojection approaches [10], but without the fast feature, or the associated approximations. We previously described the algorithm for a different radar which used dechirp (a.k.a. deramp) demodulation [11]; however, the Ingara L-band radar (unlike the X-band system) directly samples the received chirp signal, which changes the processing. Moreover, here we provide insight into how the spatial frequencies in the chirp data collected in stripmap mode can be quantitatively understood in terms of the widely-known tomographic framework for dechirp data collected in spotlight mode [1, 2].

Let the transmitted radar signal be the real part of a linear FM pulse sc(t) with centre frequency f0, bandwidth Bc, duration Tc and chirp rate γ = Bc/Tc.

    sc(t) = e^{j2πf0 t} e^{jπγ(t − Tc/2)²} rect((t − Tc/2)/Tc)

Model the illuminated ground scene as an undulating surface with complex reflectivity g(x, y, h(x, y)) [2]. In what follows, we ignore amplitude factors due to propagation or radar hardware as these do not affect the signal processing. Each scattering element at range r = √(x² + y² + z²) responds with an attenuated and delayed echo g(x, y, z) sc(t − τr), where τr = 2r/c and c is the speed of propagation. The total response gp(r) sc(t − τr) from range r depends on the projection gp(r) of the reflectivity over the spherical wavefront at r [12].

    gp(r) = ∫z ∫y ∫x δ(r − √(x² + y² + z²)) g(x, y, z) dx dy dz

The total response srx(t) available at the receive antenna at time instant t is the sum of the responses across all ranges.

    srx(t) = ∫r gp(r) sc(t − τr) dr

The Ingara L-band receiver mixes the signal down to an intermediate frequency fIF = f0 − fmix and samples the result. The recorded pulse ss(t) + n(t) starts at t = TRGD, has duration Ts, and suffers from additive noise n(t). (Discrete signals are written using continuous-time notation for simplicity.)

    ss(t) = srx(t) cos(2πfmix t) rect((t − TRGD − Ts/2)/Ts)

As described in section 2, Ingara's range-gate delay TRGD = τ0 + Tc/2 − Ts/2 varies pulse-to-pulse so that the echo from a reference ground position is approximately centred in the receive window, even as the range r0 = cτ0/2 to that position changes. In stripmap-mode, the reference position changes pulse-to-pulse as well; this is the standard mode for the Ingara L-band radar.

Continuing the demodulation of the pulse in software, the complex baseband signal sb(t) is obtained by mixing the raw pulse with a phase ramp at fIF, or equivalently, circularly shifting the pulse spectrum. The introduced phase offset 2πfIF TRGD is removed to phase-align all pulses at t = 0.

    sb(t) = rect((t − TRGD − Ts/2)/Ts) ∫r gp(r) e^{−j2πf0 τr} e^{jπγ(t − τr − Tc/2)²} rect((t − τr − Tc/2)/Tc) dr

As is commonplace in signal processing, a matched filter smf(t) is applied at baseband to compress the chirp and estimate the information in gp(r) while maximising the SNR.

    smf(t) = e^{−jπγ(t − TRGD − Tc/2)²} rect((t − TRGD − Tc/2)/Tc)

In software, the matched filter is implemented in the frequency domain, with the signals upsampled to avoid aliasing.
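Before writing out the convolution analytically, a short numerical sketch may help fix ideas. The following Python fragment (our illustration, a simplified stand-in for the actual Ingara processing chain; parameter values other than f0 and Bc are arbitrary) simulates the complex baseband echo sb(t) of a single scatterer, applies the matched filter via FFTs, and recovers the scatterer range from the peak of the compressed pulse.

import numpy as np

# Illustrative parameters only; f0 and Bc match the L-band radar, the rest are arbitrary
c_light = 299792458.0
f0, Bc  = 1.32e9, 140e6        # carrier and chirp bandwidth (Hz)
Tc, Ts  = 20e-6, 30e-6         # chirp duration and receive-window duration (s)
gam     = Bc / Tc              # chirp rate
fs      = 2 * Bc               # complex baseband sample rate (oversampled)

r, r0   = 1000.0, 997.0        # scatterer range and RGD reference range (m)
tau_r   = 2 * r / c_light
tau_0   = 2 * r0 / c_light
T_rgd   = tau_0 + Tc / 2 - Ts / 2          # range-gate delay, as defined above

t_rel = np.arange(int(fs * Ts)) / fs       # time within the receive window (t = T_rgd + t_rel)

# Complex baseband echo sb(t) of a single unit scatterer
u  = (T_rgd + t_rel) - tau_r - Tc / 2
sb = np.exp(-2j * np.pi * f0 * tau_r) * np.exp(1j * np.pi * gam * u**2) * (np.abs(u) <= Tc / 2)

# Reference chirp positioned at the start of the receive window
v   = t_rel - Tc / 2
ref = np.exp(1j * np.pi * gam * v**2) * (np.abs(v) <= Tc / 2)

# Range compression as an FFT-based cross-correlation with the reference chirp
n_fft = 2 * len(t_rel)
comp  = np.fft.ifft(np.fft.fft(sb, n_fft) * np.conj(np.fft.fft(ref, n_fft)))

tau_n = np.argmax(np.abs(comp)) / fs       # delay of the peak within the window
r_est = c_light * (T_rgd + tau_n) / 2      # rn = c(T_RGD + tau_n)/2
print(f"estimated range {r_est:.2f} m (true {r:.2f} m)")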
Here, the time-domain convolution is written out to provide insight into the available data. (All rect() windows are dropped for brevity.)

    sb ⊛ smf(τn) = ∫t sb(t) smf(τn − t) dt = ∫t ∫r gp(r) e^{jΦ} dr dt

    Φ = πγ[(t − τr − Tc/2)² − (t − τn − TRGD − Tc/2)²] − 2πf0 τr
      = πγ[2(t − Tc/2)(TRGD + τn − τr) + τr² − (TRGD + τn)²] + 2πf0(TRGD + τn − τr) − 2πf0(TRGD + τn)
      = [2πf0 + 2πγ(t − Tc/2)](TRGD + τn − τr) + πγ[τr² − (TRGD + τn)²] − 2πf0(TRGD + τn)



Transform from time delays τn and τr to ranges rn = c(TRGD + τn)/2 and r = cτr/2, and transform the temporal quantities t and f0 to k = 4πγ/c (t − Tc/2) and k0 = 4πf0/c.

    Φ = [4πf0/c + 4πγ/c (t − Tc/2)](rn − r) + 4πγ/c² (r² − rn²) − 4πf0/c rn
      = (k0 + k)(rn − r) + 4πγ/c² (r² − rn²) − k0 rn

We assume that the residual chirp phase 4πγ/c² (r² − rn²) contributed by a response from r at the evaluated range rn is negligible. Set kr = k0 + k, and note that dkr ∝ dt, ignoring amplitude scaling.

    sb ⊛ smf(rn) = e^{−jk0 rn} ∫kr ∫r gp(r) e^{−jkr r} e^{jkr rn} dr dkr

Hence, the integral over r, inherent to the data collection, is now a Fourier transform evaluated at spatial frequencies k0 − Δk/2 ≤ kr ≤ k0 + Δk/2, where Δk = 4πBc/c is the spatial bandwidth available from the rect window in sc(t). The integral over kr, due to the applied convolution, is an inverse Fourier transform evaluated at rn. The radar measures a filtered, narrowband version of the scene reflectivity function g. In principle this is the same for dechirp data, but one key difference is that in the dechirp case the spatial-frequency support shifts according to a scatterer's range delay relative to the dechirp delay, i.e. the support depends directly on the rect window in sc(t − τr). Essentially, the product of two chirps is a sinusoid whose frequency depends on their relative delay. This characteristic can be removed by a dispersive filter, as in chirp deskew [13]. In contrast, in the chirp case, the convolution output is in the spatial domain, and the spatial-frequency supports, accessible by a Fourier transform, will be the same for all ground positions (or a subset if only part of the echo fell in the receive window). Essentially, the convolution converts different delays into spatial offsets, as captured by rn.
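As a concrete check on these quantities, a few lines of Python (an illustrative calculation only) give the radial support implied by the L-band parameters used later in the paper.

import numpy as np

c  = 299792458.0
f0 = 1.32e9        # centre frequency (Hz)
Bc = 140e6         # chirp bandwidth (Hz)

k0      = 4 * np.pi * f0 / c     # centre of the radial support (rad/m)
delta_k = 4 * np.pi * Bc / c     # radial support width, delta_k = 4*pi*Bc/c (rad/m)

print(f"kr support: [{k0 - delta_k/2:.2f}, {k0 + delta_k/2:.2f}] rad/m (k0 = {k0:.2f})")
print(f"implied slant-range resolution 2*pi/delta_k = {2*np.pi/delta_k:.2f} m")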
To complete the tomographic formulation, the projection-slice theorem may be invoked [1]. Approximate the spherical wavefront at r by a tangent plane, written in Hessian normal form as r = n·x, where x = [x, y, z]ᵀ and n = [sin θ cos ψ, cos θ cos ψ, sin ψ]ᵀ is the unit normal from the origin at the platform to the plane in the scene, at azimuth angle θ and depression angle ψ.

    ∫r gp(r) e^{−jkr r} dr ≈ ∫z ∫y ∫x g(x, y, z) e^{−jkr n·x} dx dy dz
                           = ∫z ∫y ∫x g(x, y, z) e^{−j(kx x + ky y + kz z)} dx dy dz

where kx = kr sin θ cos ψ, ky = kr cos θ cos ψ and kz = kr sin ψ. Importantly, this allows us to interpret the spatial-frequency support geometrically as an offset segment of a radial line oriented at (θ, ψ) in k-space, i.e. the domain of the 3D Fourier transform of g(x, y, z). The orientation (θ, ψ) defines the instantaneous range direction for the pulse, and therefore the orthogonal projection which generated gp(r).

In traditional spotlight-mode image formation, all scatterer and platform positions are determined with respect to a single coordinate system, with origin shifted to a fixed scene reference position [2]. This leads to a common k-space and a single object plane. Our key proposal is that the time-domain algorithms which fully compensate for the exact range can nonetheless be interpreted tomographically, with the Cartesian axes positioned and aligned to suit the local surface at each pixel position. That is, (r, θ, ψ) are recalculated for each pixel, with θ′ and ψ′ the effective azimuth and grazing angles of incidence. In effect, the pulse data is motion-compensated to each pixel position, and each pixel is formed as if it were the scene reference point of a spotlight-mode image, with an object plane aligned to the local topography. We use the tomographic framework to compute the spatial-frequency support that each pulse offers to each pixel. For each pixel, an aperture can be selected, the point-spread function and resolution quantified, and the spatial-frequency overlap computed for repeat acquisitions, which is critical for interferometry.

Image formation just involves evaluating the convolution sb ⊛ smf(rn) for all pulses at all ranges rn corresponding to all pixels (x, y, h(x, y)) in the scene. Each pixel value is the sum of all the pulse contributions weighted by their conjugate range phase +k0 rn to achieve coherent addition, hence the term beamforming. We interpolate to each pixel range by upsampling the pulse and selecting the nearest range bin. Thus, basic time-domain image formation only requires knowledge of the instantaneous range from platform to pixel. However, to understand the constituent spatial frequencies, knowledge of the angular geometry (θ, ψ) is crucial.
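A schematic of the per-pixel beamforming just described is sketched below in Python. It is our unoptimised illustration rather than the Ingara implementation, and the array names (range-compressed pulses, platform positions, pixel positions) are placeholders.

import numpy as np

def beamform(pulse_rc, platform_pos, pixel_pos, k0, r_start, dr):
    """Time-domain beamforming: each pixel is the coherent sum over pulses of the
    range-compressed sample at the pixel's instantaneous range rn, weighted by the
    conjugate range phase exp(+j*k0*rn).

    pulse_rc     : (n_pulses, n_bins) complex, range-compressed (upsampled) pulses
    platform_pos : (n_pulses, 3) antenna phase-centre positions (m)
    pixel_pos    : (n_pixels, 3) pixel positions including terrain height (m)
    k0           : 4*pi*f0/c (rad/m)
    r_start, dr  : range of the first bin and range-bin spacing (m)
    """
    image = np.zeros(len(pixel_pos), dtype=complex)
    for p in range(len(pulse_rc)):
        rn  = np.linalg.norm(pixel_pos - platform_pos[p], axis=1)   # instantaneous ranges
        idx = np.rint((rn - r_start) / dr).astype(int)              # nearest range bin
        ok  = (idx >= 0) & (idx < pulse_rc.shape[1])
        image[ok] += pulse_rc[p, idx[ok]] * np.exp(1j * k0 * rn[ok])
    return image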
4. SPATIAL-FREQUENCY SUPPORT ON UNDULATING TERRAIN

A coordinate space for an airborne radar image can be established by setting z normal to the nominal ground plane and x parallel to the planned flight-track, so x is azimuth and y is ground-range. This flat reference frame will usually be the frame in which the final 2D image is displayed, but moreover, it will serve as the common reference frame for repeat passes, and therefore the space in which spatial frequencies are correlated during interferometric processing. This space is sufficient to calculate range and therefore do beamforming. However, to accurately compute the full platform-pixel geometry, a secondary coordinate space (x′, y′, z′) is defined for each pixel, with origin at the pixel position and z′ normal to the local surface tangent; z′ defines the object plane. This second space is obtained by rotating the nominal image coordinate axes by the local ground-range and azimuth slopes φgr and φaz. Now the effective angular geometry (θ′, ψ′) of the platform at A relative to the pixel at P can be computed for the rotated Cartesian position; see Fig. 2.

    [x′A]   [ cos φaz  0  sin φaz] [1     0         0     ] [xA]
    [y′A] = [    0     1     0   ] [0  cos φgr  −sin φgr ] [yA]
    [z′A]   [−sin φaz  0  cos φaz] [0  sin φgr   cos φgr ] [zA]

    θ′ = arctan(x′A / y′A),    ψ′ = arctan(z′A / √(x′A² + y′A²))
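In code, this per-pixel angle calculation might look as follows (our sketch; the rotation signs follow the convention written above and, as noted in the next section, may need to be reversed to match a particular dataset).

import numpy as np

def effective_angles(platform_xyz, pixel_xyz, slope_az, slope_gr):
    """Rotate the platform position A into the pixel-local frame (origin at the pixel P,
    z' normal to the local surface tangent) and return the effective azimuth and
    grazing angles (theta', psi') in radians."""
    d = np.asarray(platform_xyz, float) - np.asarray(pixel_xyz, float)
    ca, sa = np.cos(slope_az), np.sin(slope_az)
    cg, sg = np.cos(slope_gr), np.sin(slope_gr)
    R_az = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])   # about the y axis
    R_gr = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])   # about the x axis
    xp, yp, zp = R_az @ (R_gr @ d)
    return np.arctan2(xp, yp), np.arctan2(zp, np.hypot(xp, yp))

# Flat terrain (zero slopes) reduces to the nominal angles, e.g. a 45 degree grazing angle:
print(np.degrees(effective_angles([0.0, 700.0, 700.0], [0.0, 0.0, 0.0], 0.0, 0.0)))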



To obtain the elevation and slopes at each pixel, we fit a 2D spline to the available terrain elevation data, and use finite differences between the interpolated heights of neighbouring pixels. Note that the order of the φaz and φgr rotations does matter; the specific signs used in the image formation and the geometry calculations may mean that the order should be reversed to match the data.
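One possible realisation of this step, using SciPy's rectangular bivariate spline (grid names, axis conventions and the finite-difference step are our assumptions, not details of the Ingara processor):

import numpy as np
from scipy.interpolate import RectBivariateSpline

def pixel_heights_and_slopes(north, east, dem, pix_north, pix_east, step=1.0):
    """Fit a 2D spline to gridded elevation data and estimate, at each pixel position,
    the terrain height and the ground-range / azimuth slopes by central differences.
    north, east : 1D grid axes (m); dem : 2D heights, shape (len(north), len(east))
    pix_north, pix_east : arrays of pixel coordinates (m); step : difference step (m)."""
    spline = RectBivariateSpline(north, east, dem)
    h = spline.ev(pix_north, pix_east)
    dz_de = (spline.ev(pix_north, pix_east + step) - spline.ev(pix_north, pix_east - step)) / (2 * step)
    dz_dn = (spline.ev(pix_north + step, pix_east) - spline.ev(pix_north - step, pix_east)) / (2 * step)
    # Assuming east is the ground-range (y) direction and north the azimuth (x) direction:
    return h, np.arctan(dz_de), np.arctan(dz_dn)       # height, slope_gr, slope_az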
Fig. 2. The nominal-image (blue) and pixel-specific (red) coordinate reference frames at a pixel's spatial position P (left), and in the spatial-frequency domain (right). The reference frames are related by rotations by the ground-range and azimuth slopes, φgr and φaz, about the x and y axes, respectively. The effective angular geometry (θ′, ψ′) of the platform at A is computed, and this determines the projection direction kz′ of the pulse (green) in k-space. The first projection is onto the kx′-ky′ object ground plane (orange), and the second projection is onto the kx-ky image ground plane (light blue).

For one pixel, the angular geometry (θ′, ψ′), together with the radar parameters fc and Bc, enables the spatial-frequency support to be properly located in that pixel's k-space, as shown in section 3. However, the pixel will eventually be part of an image in which the effective spatial-frequency domain is simply the 2D Fourier transform of the image, oriented in the original flat image reference frame. Therefore, the 3D spatial-frequency support in k-space must be projected onto the flat ground plane in k-space. This is done in two steps, as shown in Fig. 2: firstly, the support is projected onto the kx′-ky′ ground plane, and secondly, the support is further projected onto the kx-ky ground plane; both projections are along kz′. The second projection is implemented by dilation factors 1/cos φaz and 1/cos φgr applied to the kx and ky dimensions respectively. A similar process is followed in slant-to-ground plane conversion of polar-format imagery [2].

    kx = kr sin θ′ cos ψ′ / cos φaz
    ky = kr cos θ′ cos ψ′ / cos φgr

For the special case of a point with no azimuth slope imaged at broadside, i.e. φaz = 0 and x′A = xA = 0 so θ = θ′ = 0, the local grazing angle is just the sum of the nominal grazing angle and the ground-range slope.

    ψ′ = arctan[(yA sin φgr + zA cos φgr) / (yA cos φgr − zA sin φgr)] = ψ + φgr

5. SPATIAL-FREQUENCY TRIMMING FOR REPEAT-PASS INTERFEROMETRY

SAR interferometry involves computing the complex coherence γ between a pair of radar images, fa(x, y) and fb(x, y) [2]. The coherence can be expressed in terms of the cross-correlation (⋆) of the images' spatial frequencies Fa(kx, ky) and Fb(kx, ky).

    γ = E{fa fb*} / √(E{fa fa*} E{fb fb*}) = E{FFT⁻¹{Fa ⋆ Fb}} / √(E{Fa Fa*} E{Fb Fb*})

For repeat passes of an airborne platform, the flight-tracks will not be identical. Therefore, the supports for Fa and Fb will not be the same, leading to a geometry-induced degradation in the coherence, i.e. baseline decorrelation [3]. This decorrelation can be avoided by trimming the images' spatial-frequency supports to a common overlap region in a reference plane (here we use the kx-ky ground plane of the image), at the expense of reduced resolution. However, as shown in section 4, the spatial frequencies at each pixel position project into the ground plane according to the local geometry, so for undulating terrain the trim should be spatially varying [6].
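In practice the expectations are replaced by local spatial averages over co-registered images; a minimal estimator of the coherence might look like the following (a sketch only; the window size is arbitrary).

import numpy as np
from scipy.ndimage import uniform_filter

def coherence(fa, fb, win=7):
    """Sample estimate of the complex coherence of two co-registered complex SAR
    images, with the expectation replaced by a win x win boxcar average."""
    cross = fa * np.conj(fb)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    pa  = uniform_filter(np.abs(fa) ** 2, win)
    pb  = uniform_filter(np.abs(fb) ** 2, win)
    return num / np.sqrt(pa * pb + 1e-30)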
SAR beamforming does not explicitly consider the spatial frequencies of the data, and implementing a trim as part of time-domain image formation is not a standard processing task. We have implemented the trim in three stages. Firstly, for each pixel, we compute the spatial-frequency overlap region in the kx-ky ground plane for the apertures of both passes, using the approach of section 4. Secondly, for each pixel, we find the portion of each pulse's line of support which lies inside the overlap region in the kx-ky ground plane, and record the spatial-frequency pulse samples which should be trimmed. Note that different pixels may require the same pulse to be trimmed differently, due to different local geometries. Thirdly, when beamforming each pixel, each pulse in the aperture is trimmed according to its associated record, prior to upsampling and range compression. Essentially, for each pixel, the aperture is filtered to suit the overlap requirement.
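The three stages can be sketched in code as follows. This is a heavily simplified illustration: the overlap region is idealised here as an axis-aligned box in the kx-ky ground plane rather than a general region, and all function and variable names are placeholders rather than parts of the Ingara software.

import numpy as np

C = 299792458.0

def pulse_support(f0, Bc, n_samp, theta_p, psi_p, slope_az, slope_gr):
    """Ground-plane (kx, ky) positions of the spectral samples of one pulse,
    for one pixel, using the projection of section 4."""
    kr = 4 * np.pi * (f0 + np.linspace(-Bc / 2, Bc / 2, n_samp)) / C
    kx = kr * np.sin(theta_p) * np.cos(psi_p) / np.cos(slope_az)
    ky = kr * np.cos(theta_p) * np.cos(psi_p) / np.cos(slope_gr)
    return kx, ky

def overlap_box(supports_a, supports_b):
    """Stage 1 (idealised): overlap of the two apertures' supports for one pixel,
    as an axis-aligned box (kx_min, kx_max, ky_min, ky_max)."""
    def box(supports):
        kx = np.concatenate([s[0] for s in supports])
        ky = np.concatenate([s[1] for s in supports])
        return kx.min(), kx.max(), ky.min(), ky.max()
    a, b = box(supports_a), box(supports_b)
    return max(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), min(a[3], b[3])

def trim_record(kx, ky, ov):
    """Stage 2: boolean keep-mask over one pulse's spectral samples for one pixel."""
    return (kx >= ov[0]) & (kx <= ov[1]) & (ky >= ov[2]) & (ky <= ov[3])

def apply_trim(pulse_spectrum, keep):
    """Stage 3: zero the trimmed spectral samples before the pulse is upsampled,
    range compressed and beamformed for this pixel."""
    out = np.asarray(pulse_spectrum, dtype=complex).copy()
    out[~keep] = 0.0
    return out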
6. SIMULATION

The 2D Fourier transform of a radar image shows the superposition of the spatial frequencies, and therefore the spatial-frequency supports, of all pixels. For standard polar-format imagery, the supports are presumed to be the same for all pixels, following the plane-wave and flat-scene approximations [2]. For beamformed imagery, the supports are not the same, even for a flat scene, because the geometry of one pulse position relative to each pixel is different.
For an undulating scene, the supports could be dispersed widely over the transformed image. Note too that the support displayed in the transformed image may be an alias, as the observable k-space extents are limited to the reciprocal of the spatial pixel spacings along azimuth and ground-range, scaled by 2π. Therefore, if the pixel spacing is changed, the aliased position of the support in the transformed image will change, although of course the actual support has not changed.
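For example, the observable extents and the aliased position of a support sample can be computed as follows (an illustrative sketch; the pixel spacings here are arbitrary).

import numpy as np

dx, dy = 0.5, 0.5                          # azimuth and ground-range pixel spacings (m)
kx_max, ky_max = np.pi / dx, np.pi / dy    # observable k-space half-extents (rad/m)

def alias(k, k_max):
    """Fold a spatial frequency into the observable interval [-k_max, k_max)."""
    return (k + k_max) % (2 * k_max) - k_max

k0 = 4 * np.pi * 1.32e9 / 299792458.0      # radial centre frequency from section 3
print(f"observable |ky| < {ky_max:.2f} rad/m; k0 = {k0:.2f} rad/m appears at {alias(k0, ky_max):.2f} rad/m")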
To demonstrate the effect of different geometries on the location of the k-space support, we synthesise raw data for a scene consisting of a single scatterer, form the image using the beamforming approach of section 3, and display its 2D Fourier transform; see Fig. 3. In each case, the spatial image, not shown, is a 2D sinc function. We also follow the approach of section 4 to predict where the support should lie. Moreover, we use this knowledge to compute the k-space overlap region between repeat passes and then trim the apertures to this overlap; see Fig. 4.

Fig. 3. 2D Fourier transforms of beamformed images of synthetic raw data. The scene consists of a single point scatterer. Chirp: f0 = 1.32 GHz, Bc = 140 MHz. Azimuth resolution: 0.75 m. The reference geometry, used for (a), is a straight, broadside (zero squint) aperture centred at r0 = 1 km, θ0 = 0° and ψ0 = 45°, relative to the scatterer on a flat surface. Panels (b)-(f) each have one deviation from the reference geometry: (b) grazing ψ0 = 41°; (c) squint 30°; (d) azimuth offset 10°; (e) ground-range slope φgr = 10°; (f) azimuth slope φaz = 10°. The red box is the expected location of the spatial-frequency support provided by the aperture.

Fig. 4. Extension of Fig. 3 for repeat passes: (a) Δθ0 = 3°, φaz = 10°; (b) Δψ0 = 2°, φaz, φgr = 20°, 10°, with θ0 = (18°, 22°) and ψ0 = (18°, 22°). Now the aperture is trimmed during image formation to suit the overlap region (green).

Fig. 5 shows the utility of the spatially varying trim for a clutter scene that has significant undulation. Two pairs of images were formed using synthetic data from two passes: pair (a) had no aperture trim, whereas pair (b) did. All four images were focused to the undulating surface. The interferometric coherence |γ| for pair (a) suffers from a high level of geometric decorrelation due to the incoherent contributions from non-overlapping spatial-frequency supports; the level of overlap, and therefore the level of decorrelation, varies spatially according to the underlying topography. This decorrelation is mostly absent from pair (b), because the spatial-frequency supports have been trimmed to their overlap.

Fig. 5. Interferometric coherence |γ| between repeat passes over sinusoidally undulating terrain; the slope varies between ±20°. Black: no coherence, white: full coherence. The scene consists of uniform clutter. Geometry and radar as in Fig. 3, but Δψ0 = 2°. Panels: (a) without trim; (b) with pixel-based trim. In (a), the bands of low coherence occur at maximum positive slope, where the overlap is smallest. The aperture trim in (b) leads to a substantial reduction in geometric decorrelation, except where the overlap is so small (steep slope, near range) that there is little coherence to recover.
7. REAL DATA

We have used Ingara L-band data acquired in stripmap mode to demonstrate the effectiveness of the pixel-based trim. The interferometric results displayed in Fig. 6 indicate that the spatially varying aperture trim implemented as part of the beamformer is effective in removing additional decorrelation beyond what a simple global trim can achieve. The effect is limited by the sparsity and accuracy of the available elevation data [14], as height errors lead to slope errors, which lead to incorrect k-space projections and trims.

8. CONCLUSION

SAR image formation via time-domain beamforming has been formulated in the spotlight-mode tomographic framework. Given elevation data, this framework enables the spatial-frequency support of each pixel to be determined in a coordinate space aligned to the local terrain. The beamforming algorithm was extended to incorporate a spatially varying aperture trim, which is used to align the spatial-frequency supports for each pixel position in an image pair, prior to interferometric processing. Results using the upgraded Ingara L-band radar over undulating terrain show that the method is better able to mitigate geometric (baseline) decorrelation than a global image trim.



Fig. 6. Ingara L-band beamformed imagery and coherence, focused onto undulating terrain [14] at Cape Jervis, South Australia. Panels: (a) optical image of the 500 × 500 m scene (Google); (b) elevation (m); (c) overlap (%); (d) SAR image, resolution (az × gr) 0.75 × 1.63 m; (e) |γ| with the global image trim; (f) |γ| with the local pixel trim. Slopes vary between [−13°, +22°]. The two passes were acquired at r0 = (3.3, 4.1) km, θ0 = (6.6°, 6.9°) and ψ0 = (26.6°, 21.5°); N.B. Δψ0 = 5.1°. The integration angle was 12°. A Hamming window was applied. Panel (c) shows, for each pixel, the size of the overlap relative to the support from one pass; this captures the net effect of terrain slope, platform geometry and radar waveform. Comparing the interferometric coherence magnitudes (e) and (f), it is clear that the spatially varying trim removes more geometric decorrelation than the global trim based on the geometry at the scene centre only. Some patches in near-range (top of image) have very small overlap, leading to very low coherence.

9. REFERENCES

[1] D. C. Munson, J. D. O'Brien & W. K. Jenkins, "A tomographic formulation of spotlight-mode SAR," IEEE Proc., vol. 71, no. 8, pp. 917-925, Aug. 1983.
[2] C. V. Jakowatz et al., Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach. Sandia National Laboratories Albuquerque: Springer, 1996.
[3] F. Gatelli et al., "The wavenumber shift in SAR interferometry," IEEE TGRS, vol. 32, no. 4, pp. 855-865, Jul. 1994.
[4] N. Marechal, "Tomographic formulation of interferometric SAR for terrain elevation mapping," IEEE TGRS, vol. 33, no. 3, pp. 726-739, May 1995.
[5] C. V. Jakowatz, D. E. Wahl & D. A. Yocky, "Beamforming as a foundation for spotlight-mode SAR image formation by backprojection," in Proc. SPIE Alg. SAR Im., Apr. 2008.
[6] D. Blacknell, D. B. Andre & C. M. Finch, "SAR CCD over mountainous regions," in Proc. Int'l Conf. SAS/SAR, Sep. 2010.
[7] D. B. Andre, D. Blacknell & K. Morrison, "Spatially variant incoherence trimming for improved SAR CCD," in Proc. SPIE Alg. SAR Im., May 2013.
[8] N. J. S. Stacy & M. P. Burgess, "Ingara: the Australian airborne imaging radar system," in Proc. IGARSS, Aug. 1994.
[9] A. S. Goh, M. Preiss & N. J. S. Stacy, "Initial polarimetric results from the Ingara bistatic SAR experiment," in Proc. Radar Conf., Sep. 2013.
[10] L. M. H. Ulander, H. Hellsten & G. Stenstrom, "Synthetic-aperture radar processing using fast factorized back-projection," IEEE TAES, vol. 39, no. 3, Jul. 2003.
[11] P. B. Pincus et al., "Low frequency high resolution SAR imaging and polarimetric analysis of a Queensland tropical forest," in Proc. IGARSS, Jul. 2013.
[12] N. J. Redding & T. M. Payne, "Inverting the spherical Radon transform for 3D SAR image formation," in Proc. Radar Conf., Sep. 2003.
[13] W. G. Carrara, R. S. Goodman & R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Boston: Artech House, 1995.
[14] J. C. Gallant et al., "1 second SRTM derived digital elevation models: user guide," Geoscience Australia, 2011, accessed 18/12/2014. [Online]. Available: www.ga.gov.au/topographic-mapping/digital-elevation-data.html

