above the reference plane. The equation shows that the amount of relief displacement is zero at the nadir point a (r = 0) and largest at the edges of a line camera image and at the corners of a frame camera image. Relief displacement is inversely proportional to the flying height.
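The properties stated here (zero at the nadir point, growing with r, inversely proportional to H) correspond to the standard relief-displacement relation for a vertical photograph, dr = r * h / H. A minimal sketch, with made-up numbers for illustration:

```python
def relief_displacement(r, h, H):
    """Relief displacement dr = r * h / H for a vertical photograph:
    zero at the nadir point (r = 0), growing towards the image edges,
    and inversely proportional to the flying height H."""
    return r * h / H

# a 40 m high building imaged 80 mm from the nadir point,
# flown at 2000 m above the reference plane:
dr = relief_displacement(r=80.0, h=40.0, H=2000.0)   # 1.6 (mm in the image)
```

The displacement is directed radially away from the nadir point, which is why tall buildings appear to lean outward.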
Figure 6.4: Fragment of a large scale aerial photograph of the centre of Enschede. Note the effect of height displacement on the higher buildings.
In the case of a scene of only barren land, we cannot see relief displacement. However, we can see relief displacement if there are protruding objects in the scene (it is then occasionally referred to as height displacement). Buildings and trees appear to lean outward, away from the nadir point, on large scale photographs and images from very high resolution spaceborne sensors (Figure 6.4).
6.2. ELEMENTARY IMAGE DISTORTIONS
The main effect of relief displacement is that inaccurate or wrong map coordinates will be obtained when, eg, digitizing images without further correction.
Whether relief displacement should be considered in the geometric processing of RS data depends on its impact on the accuracy of the geometric information derived from the images. Relief displacement can be corrected for if information about terrain relief is available (in the form of a DTM). The procedure is explained later in Section 6.4. It is also good to remember that it is relief displacement that allows us to perceive depth when looking at a stereograph and to extract 3D information from such images.
6.3 Two-dimensional approaches
In this section we consider the geometric processing of RS images in situations
where relief displacement can be neglected. An example of such images is a
scanned aerial photograph of flat terrain. For practical purposes, flat may be
considered as h/H < 1/1000, though this also depends on project accuracy
requirements; h stands for relative terrain elevation, H for flying height. For
satellite RS images of medium spatial resolution, relief displacement usually is
less than a few pixels in extent and thus less important, as long as near vertical
images are acquired. The objective of 2D georeferencing is to relate the image
coordinate system to a specific map coordinate system (Figure 6.5).
Figure 6.5: Coordinate system of (a) the image defined by its raster rows and columns, and (b) the map with x- and y-axes.
6.3.1 Georeferencing
The simplest way to link image coordinates to map coordinates is to use a transformation formula. A geometric transformation is a function that relates the coordinates of one system to those of the other. The outcome consists of the values for the parameters a to f of Equations 6.2 and 6.3, together with quality indications.
To solve the above equations, only three GCPs are required; however, you should use more points than the strict minimum. Using merely the minimum number of points for solving the system of equations would obviously lead to a wrong transformation if you made an error in one of the measurements. Including more points for calculating the transformation parameters enables the software to also compute the error of the transformation. Table 6.1 provides an example of the input and output of a georeferencing computation in which five GCPs have been used. Each GCP is listed with its image coordinates (i, j) and its map coordinates (x, y).

Table 6.1: A set of five ground control points, which are used to determine a 1st-order transformation. xc and yc are calculated using the transformation; dx and dy are the residual errors.

GCP    i    j    x    y      xc       yc      dx      dy
  1  254   68  958  155  958.552  154.935   0.552  -0.065
  2  149   22  936  151  934.576  150.401  -1.424  -0.599
  3   40  132  916  176  917.732  177.087   1.732   1.087
  4   26  269  923  206  921.835  204.966  -1.165  -1.034
  5  193  228  954  189  954.146  189.459   0.146   0.459
The software performs a least-squares adjustment to determine the transformation parameters. The least-squares adjustment ensures an overall best fit of image and map. We use the computed parameter values to calculate coordinates (xc, yc) for any image point (pixel) of interest:

xc = 902.76 + 0.206i + 0.051j,

and

yc = 152.579 - 0.044i + 0.199j.
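Applying the 1st-order transformation with these parameter values to a GCP, and comparing the result with its measured map coordinates, reproduces the values listed in Table 6.1. A minimal sketch:

```python
def transform(i, j):
    """1st-order transformation with the parameter values of this
    example (coefficients rounded as printed in the text)."""
    xc = 902.76 + 0.206 * i + 0.051 * j
    yc = 152.579 - 0.044 * i + 0.199 * j
    return xc, yc

# GCP 1 has image coordinates (254, 68) and map coordinates (958, 155):
xc, yc = transform(254, 68)      # approximately 958.552, 154.935
dx, dy = xc - 958, yc - 155      # residuals: about 0.552 and -0.065
```

The same calculation for the other four GCPs yields the remaining xc, yc, dx and dy columns of Table 6.1.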
For example, for the pixel corresponding to GCP 1 (i = 254 and j = 68), we can calculate transformed image coordinates xc and yc as 958.552 and 154.935, respectively. These values deviate slightly from the input map coordinates (as measured on the map). The discrepancies between measured and transformed coordinates of the GCPs are called residual errors (residuals for short). The residuals are listed in the table as dx and dy. Their magnitude is an indicator of the quality of the transformation. The residual errors can be used to analyze whether all GCPs have been correctly identified and/or measured accurately enough.
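The least-squares adjustment itself can be sketched as follows, assuming NumPy is available: build a design matrix with one row [1, i, j] per GCP and solve for the parameters of both equations at once. With the five GCPs of Table 6.1 the fitted residuals come out at the same sub-pixel to roughly 1.7-pixel magnitude as the dx, dy columns in the table.

```python
import numpy as np

# image (i, j) and map (x, y) coordinates of the five GCPs of Table 6.1
ij = np.array([[254, 68], [149, 22], [40, 132], [26, 269], [193, 228]], float)
xy = np.array([[958, 155], [936, 151], [916, 176], [923, 206], [954, 189]], float)

# design matrix [1, i, j]: one row per GCP; lstsq solves both
# equations (x and y) in one call, one column of xy per equation
A = np.column_stack([np.ones(len(ij)), ij])
params, *_ = np.linalg.lstsq(A, xy, rcond=None)   # shape (3, 2): a, b, c and d, e, f

residuals = xy - A @ params   # dx, dy for every GCP (map units)
```

With five GCPs and three unknowns per equation, the system is overdetermined, which is exactly what makes the residual computation possible.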
original image. Imagine that the transformed image is overlaid with a grid corresponding to the output image and the pixel values for the new image are determined. The shape of the image to be resampled depends on the type of transformation used.

Figure 6.7: Illustration of the transformation and resampling process. Note that the image after transformation is a conceptual illustration, since the actual pixel shape cannot change.
Figure 6.8: Illustration of different image transformation types, and the number of required parameters.
Figure 6.8 shows four transformation types that are frequently used in RS. The
types shown increase in complexity and parameter requirement, from left to right. In a conformal transformation the image shape, including the right angles, is retained. Therefore, only four parameters are needed, to describe a shift along the x and the y axis, a scale change, and the rotation. However, if you want to geocode an image to make it fit with another image or map, a higher-order transformation may be required, such as a projective transformation or a polynomial transformation.
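The four parameters of a conformal (similarity) transformation can be written out explicitly. A minimal sketch (parameter names are ours, not from the text):

```python
import math

def conformal(i, j, tx, ty, scale, theta):
    """Conformal (4-parameter similarity) transformation: shift
    (tx, ty), uniform scale change, and rotation angle theta in
    radians. Angles, and hence shapes, are preserved."""
    x = tx + scale * (math.cos(theta) * i - math.sin(theta) * j)
    y = ty + scale * (math.sin(theta) * i + math.cos(theta) * j)
    return x, y

# a 90-degree rotation at scale 2 maps the unit vector along the
# i-axis onto the y-axis, doubled in length:
x, y = conformal(1.0, 0.0, tx=0.0, ty=0.0, scale=2.0, theta=math.pi / 2)
```

A projective transformation needs 8 parameters and a 2nd-order polynomial 12, which is why Figure 6.8 orders the types by parameter count.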
Figure 6.9: Principle of resampling using nearest neighbour, bilinear, and
bicubic interpolation.
Regardless of the shape of the transformed image, a resampling procedure as illustrated by Figure 6.9 is used. As the orientation and size of the input raster and the required output raster are different, there is not a one-to-one relationship between the cells of these rasters. Therefore, we have to apply interpolation to compute a pixel value for a ground resolution cell where it has not been measured. For resampling we only use very simple interpolation methods. The main methods are: nearest neighbour, bilinear interpolation, and bicubic interpolation (Figure 6.9). Consider the green grid to be the output image to be created.
To determine the value of the centre pixel (bold), in nearest neighbour resampling the value of the nearest original pixel is assigned, in this example the value of the black pixel. Note that always the respective pixel centres, marked by small crosses, are used for this process. When using bilinear interpolation, a weighted average is calculated from the four nearest pixels in the original image (the dark grey and black pixels). In bicubic interpolation, a weighted average of the values of 16 surrounding pixels (the black and all grey pixels) is calculated [30]. Note that some software uses the terms bilinear convolution and cubic convolution instead of the terms introduced above.
The choice of the resampling algorithm depends, among others, on the ratio between input and output pixel size and on the purpose of the resampled image. Nearest neighbour resampling can cause the edges of features to be offset in a step-like pattern. However, since the value of the original cell is assigned to a new cell without being changed, all spectral information is retained, which means that the resampled image is still useful in applications such as digital image classification (see Chapter 8). The spatial information, on the other hand, may be altered in this process, since some original pixels may be omitted from the output image, or appear twice. Bilinear and bicubic interpolation reduce this effect and lead to smoother images; however, because the values from a number of cells are averaged, the radiometric information is changed (Figure 6.10).
Figure 6.10: The effect of nearest neighbour, bilinear, and bicubic resampling on the original data.
6.4 Three-dimensional approaches
In mapping terrain, we have to consider its vertical extent (elevation and height) in two types of situation:

- We want 2D geospatial data, describing the horizontal position of terrain features, but the terrain under consideration has large elevation differences. Elevation differences in the scene cause relief displacement in the image. Digitizing image features without taking into account relief displacement causes errors in computed map coordinates. If the positional errors were larger than tolerated by the application (or map specifications), we should not go for simple georeferencing and geocoding.

- We want 3D data.
When bothering about mapping terrain with an increasing degree of refinement, we first have to clarify what we mean by terrain. Terrain as described by a topographic map has two very different aspects: (1) there are agricultural fields and roads, forests and waterways, buildings and swamps, barren land and lakes; and (2) there is elevation changing with position - at least in most regions on Earth. We refer to land cover, topographic objects, etc as terrain features; we show them on a (2D) map as areas, lines, and point symbols. We refer to the shape of the ground surface as terrain relief; we show it on a topographic map by contour lines and/or relief shades. A contour line on a topographic map is a line of constant elevation. Given contour lines, we can determine the

introduced above; in particular, DEM is often used carelessly. Worth mentioning in this context is also the misuse of topography as a synonym for terrain relief.
6.4.1 Orientation
The purpose of camera orientation is to obtain the parameter values for transforming terrain coordinates (X, Y, Z) to image coordinates and vice versa. In solving the orientation problem, we assume that any terrain point and its image lie on a straight line, which passes through the projection centre (ie, the lens). This assumption about the imaging geometry of a camera is called the collinearity relationship; it does not consider that atmospheric refraction has the effect of slightly curving the imaging rays. We solve the problem of orienting a single image with respect to the terrain in two steps: interior orientation and exterior orientation.
Figure 6.12: Illustration of the collinearity concept, where image point, lens centre and terrain point all lie on one line.
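Under the simplifying assumption of a perfectly vertical image with its axes parallel to the terrain system (identity rotation matrix), the collinearity relationship reduces to a plain central projection; a real orientation inserts a 3 x 3 rotation matrix built from the roll, pitch and yaw angles. A sketch under that assumption:

```python
def collinearity_vertical(X, Y, Z, X0, Y0, Z0, c):
    """Project terrain point (X, Y, Z) into image coordinates (x, y)
    for a vertical image: projection centre at (X0, Y0, Z0) and
    principal distance c; the rotation matrix is taken as identity."""
    x = -c * (X - X0) / (Z - Z0)
    y = -c * (Y - Y0) / (Z - Z0)
    return x, y

# the terrain point vertically below the projection centre (the nadir
# point) images at the principal point (0, 0):
x, y = collinearity_vertical(1000.0, 2000.0, 50.0,
                             1000.0, 2000.0, 1550.0, c=0.15)
```

The terrain point, the projection centre and the image point all lie on one line, which is exactly the collinearity condition of Figure 6.12.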
Orienting a single image as obtained by a line or frame camera
The interior orientation determines the position of the projection centre with respect to the image. The problem to solve is different for digital cameras and film cameras. In the case of a digital camera (line or frame camera, spaceborne or aerial camera), the position of the projection centre with respect to the CCD array does not change from image to image, unless there were extreme temperature or pressure changes. For standard applications we can assume that the position of the projection centre does not change when defined in the row-column system of the digital image. Two parameters are needed, the principal distance and the principal point (see Figure 6.13); they are determined by camera calibration. The principal distance is the geometrical equivalent of the focal length (which is a physical property of a lens). The principal point is the intersection point of the perpendicular from the projection centre with the image plane. The camera calibration report states the row (and column) number of the principal point and the principal distance. When using digital photogrammetric software for working with images from digital aerial cameras, the user only has to enter the principal distance, the position of the principal point, and the pixel size to define the interior orientation. In the case of a film camera, the position of the principal point is only fixed in the camera and determined by camera calibration with respect to the fiducial marks. When scanning photographs (see Section 9.6), the principal point will have different positions in the row-column system of each image, because each image will be placed slightly differently in the scanner. Therefore, you have to measure for every image the imaged fiducial marks and calculate the transformation onto the fiducial marks as given by the calibration report. Digital photogrammetric software can then relate row-column coordinates of any image point to image/camera coordinates (x, y) (see Figure 6.13).
Figure 6.13: Inner geometry of a camera and the associated image.
The exterior orientation determines the position of the projection centre with respect to the terrain and also the attitude of the sensor. Frame cameras (film as well as digital ones) acquire the entire image at once, thus three coordinates for the projection centre (X, Y, Z) and three angles for the attitude of the sensor (angles relating to roll, pitch, and yaw) are sufficient to define the exterior orientation of an entire image. Line cameras yield images where each image line has its own projection centre and the sensor attitude may change from one line to the next. Satellites have a very smooth motion, and airborne line cameras are mounted on a stabilized platform, so that the attitude change is gradual and small.
Mathematically we can model the variation of an attitude angle through a scene by a low degree polynomial. Images from modern high resolution line cameras on satellites come with rational polynomial coefficients (RPCs). The rational polynomials define, by good approximation, the relationship between image coordinates of an entire frame (in terms of row-column pixel positions) and terrain coordinates (usually in terms of Lat-Lon and ellipsoidal height of WGS84). The nice thing is that the RPCs are understood by RS software such as ERDAS, which thus takes care of interior and exterior orientation. For cases where RPCs are not given, or are considered not accurate enough, you need GCPs. For
(a) Indirect camera orientation: identify GCPs in the image and measure their row-column coordinates; acquire (X, Y, Z) coordinates for these points, eg, by GPS or from a sufficiently accurate topographic map; use adequate software to calculate the exterior orientation parameters, after having completed the interior orientation.

(b) Direct camera orientation: make use of GPS and IMU recordings during image acquisition, employing digital photogrammetric software.

(c) Integrated camera orientation, which is a combination of (a) and (b).
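The RPC model mentioned above expresses each image coordinate as a ratio of two polynomials in normalized terrain coordinates. The sketch below only illustrates that structure: a real RPC set uses 20-term cubic polynomials plus normalization offsets and scales, and the coefficients here are entirely made up.

```python
def rational_poly(num, den, P, L, H):
    """Toy rational polynomial: ratio of two polynomials in normalized
    (lat, lon, height) = (P, L, H). Real RPCs use 20 cubic terms per
    polynomial; only the constant and first-order terms are kept here."""
    n = num[0] + num[1] * L + num[2] * P + num[3] * H
    d = den[0] + den[1] * L + den[2] * P + den[3] * H
    return n / d

# made-up coefficients: the normalized row depends mostly on latitude
row = rational_poly(num=[0.1, 0.02, 0.95, -0.01],
                    den=[1.0, 0.0, 0.0, 0.001],
                    P=0.5, L=-0.2, H=0.1)
```

The denominator polynomial is what lets the model absorb perspective effects that a plain polynomial cannot represent.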
For images from very high resolution sensors such as Ikonos or QuickBird, adding one GCP can already considerably improve the exterior orientation as defined by the RPCs. For Cartosat images it is advisable to improve the exterior orientation by at least five GCPs. For orienting a frame camera image you need at least three GCPs (unless you also have GPS and IMU data). After orientation, you can use the terrain coordinates of any reference point and calculate its position