
This is a pre-print. It might differ from the published version.

Please cite article as:

M. Koeva, M. Muneza, C. Gevaert, M. Gerke & F. Nex (2016). Using UAVs for map creation and updating. A case study in Rwanda. Survey Review. DOI: 10.1080/00396265.2016.1268756

Using UAVs for map creation and updating. A case study in Rwanda
M. Koeva, M. Muneza, C. Gevaert, M. Gerke, F. Nex
Faculty of Geo-Information Science and Earth Observation, University of Twente,
P.O. Box 217, 7500 AE Enschede, The Netherlands

Rwanda Natural Resources Authority (RNRA), Kigali, Rwanda

Keywords: UAV, mapping, photogrammetry, urban, surveying, updating.

Aerial and satellite images are conventionally used as data sources for geospatial data collection,
map creation and map updating. However, such imagery can be time-consuming and costly to obtain.
Unmanned Aerial Vehicles (UAVs) have recently emerged as a suitable technology with the potential
to provide information with very high spatial and temporal resolution at low cost. This paper aims
to show the potential of UAVs, which can be an affordable mapping solution for many developing
countries. In the paper, the general UAV workflow is introduced. Using a case study in Rwanda,
where 954 images were collected with a DJI Phantom 2 Vision Plus quadcopter at a flying height of
50 m, an orthophoto covering 0.095 km² with a spatial resolution of 3.3 cm was produced. The final
orthophoto, with a positional accuracy of 6.0 cm, was used to extract features for mapping purposes
with sub-decimetre accuracy. General quantitative and qualitative control of the UAV data products
and the final outputs was performed, indicating that the obtained accuracies comply with
international standards. Moreover, possible problems and further perspectives are discussed. The
results demonstrate that UAVs provide promising opportunities to create high-resolution and highly
accurate orthophotos, thus facilitating map creation and updating.

Introduction

Geospatial data plays an important role in an estimated 80% of our daily decisions
(Heipke, Woodsford and Gerke 2008), and in various urban planning activities. For
example, in the context of the recently accepted Sustainable Development Goals, the UN
emphasises the need for high-quality and usable data, as “data are the lifeblood of
decision-making" (IEAG 2014). Moreover, the UN initiative on Global Geospatial
Information Management (UN-GGIM) aims to promote the use of geospatial
information to address key global challenges. Unfortunately, as discussed by
the Regional Centre for Mapping of Resources for Development (RCMRD) in Rwanda,
lack of funding is a major bottleneck in many developing countries and the required data
are often unavailable or outdated (Ottichilo and Khamala 2002). To ensure the usability
of spatial data as well as to provide a solid basis for informed decision-making and
planning, map updating is imperative. Map updating consists of three main steps: (i)
comparing a new data source against the existing basemap, (ii) identifying changes and
updating the database, and (iii) verifying the logical consistency of the updated version
with the old version (Heipke, Woodsford and Gerke 2008). Aerial or satellite imagery
obtained through remote sensing or earth observation is used as a data source for many
basemap updating activities. Previous research has demonstrated the use of satellite and
aerial imagery as means to extract information for creating and updating maps
(Alexandrov et al. 2004a; Alexandrov et al. 2004b; Ali, Tuladhar and Zevenbergen 2012)
as well as to provide input for urban models (Herold et al. 2003). Important features of
the urban environment, such as roads and buildings, may then be digitised in the imagery
either by experts (e.g. Ottichilo and Khamala 2002) or by a wider public in participatory
mapping exercises (Mourafetis et al. 2015). Over the past two decades, considerable
research has also focused on automatic feature extraction from high resolution satellite
and aerial images (Liu and Jezek 2004; Babawuro and Beiji 2012; Gruen et al. 2012;
Awad 2013; Horkaew et al. 2015). However, the temporal resolution of conventional
sensors is limited by the restricted availability of aircraft platforms and the orbit
characteristics of satellites (Turner, Lucieer and Watson 2012). Another disadvantage is
cloud cover, which impedes image acquisition through these platforms. Such limitations
restrict the use of satellites or manned aircraft for map updating purposes, as they may
increase cost and production time. In order to provide the high-quality and up-to-date
information required to support urban governance and informed decision-making, Van
der Molen (2015) calls for land surveyors to make use of the potential of new affordable,
geo-spatial technologies.

A suitable example of such emerging technology is Unmanned Aerial Vehicles (UAVs),
which are proving to be a competitive data acquisition technique designed to operate with
no human pilot on board. The term UAV is commonly used, but other terms, such as
drones, Unmanned Aerial Systems (UAS), Remotely Piloted Aircraft (RPA) or Remotely
Piloted Aerial Systems (RPAS) have also been frequently used in the geomatics
community (Nex and Remondino 2014). UAV refers to the aircraft itself which is
intended to be operated without a pilot on board, whereas UAS refers to the aircraft and
the other components that may be required, such as navigation software and communication
equipment. According to ICAO standards (Circular 328,
http://www.icao.int/Meetings/UAS/Documents/Circular%20328_en.pdf), RPAS are a subset of UAS which are
specifically piloted by a “licensed ‘remote pilot’ situated at a ‘remote pilot station’ located
external to the aircraft”. RPAS refers to the entire system whereas RPA refers to the
aircraft itself. The pilot’s license should address legal, operational and safety aspects. In
the geomatics community, the terms UAS and RPAS are often used interchangeably, and
will be considered as synonyms in the current paper.

For photogrammetric applications, the payload of the whole system is composed of a
camera, Global Navigation Satellite System (GNSS) and Inertial Measurement Unit
(IMU) (Colomina and Molina 2014). The camera takes overlapping images as it flies over
a study area. These images may be processed through a photogrammetric workflow to
obtain a point cloud (or a Digital Surface Model), an orthophoto or a full 3D model of the
scene. An on-board GNSS device allows these data products to be georeferenced.
However, in the context of low-cost UAVs, the accuracy of such GNSS is often limited.
Therefore, supplementary Ground Control Points (GCPs) are usually acquired in the
study area, in order to maintain the accuracy of the image block orientation and derived
mapping products, such as orthophotos, and to facilitate their integration with other spatial
data. These GCPs have to be carefully selected and well distributed, and they should be
visible in many images, as well as easily identifiable in the images after the acquisition
and measurable with accurate technology such as survey-grade GNSS. With the current
study, we aim in particular to analyse the process of orthophoto production using
Unmanned Aerial Vehicles for map creation and updating as a low-cost solution which
can be affordable for many developing countries. Moreover, an important step is to assess
the obtained accuracy both quantitatively and qualitatively. To address this aim, we make
use of a case study in Rwanda.

The paper is structured as follows: firstly, we introduce the background, the regulatory
framework and the methodology for true orthophoto creation and feature extraction.
Using the case study in Rwanda, we perform a quantitative and qualitative control of the
UAV data products. For this purpose, we conduct experiments with and without the use
of GCPs for the quantitative analysis of the product. This in-depth analysis gives insight
into the quality of the data products and thus their fitness for various applications and the
importance of ground control points, and describes possible deformations in the
orthophotos and how these may be avoided. Finally, we describe a feature extraction
framework of digitisation rules which can be applied to ensure logical consistency during
map creation and a subsequent updating procedure. This last step is very important as the
considerably higher resolution of the UAV images, as opposed to the previous scale of
the basemap, allows new features to be visible, and existing features to be represented in
different ways. The accuracy of digitization is also assessed and described. General
quantitative and qualitative control of the UAV data products and the final outputs are
shown. Finally, possible problems and further perspectives are also discussed.

Background and regulatory framework

UAVs were first created and used in military applications where flight recognition in
enemy areas, unmanned inspection, surveillance, reconnaissance, and mapping of enemy
areas without any risk for human pilots were the primary military aims (Nex and
Remondino 2014). Nowadays, UAVs are increasingly being used in civil and scientific
research activities in different fields of application. For example, for agriculture
(Grenzdörffer and Niemeyer 2011), mapping (Nex and Remondino 2014), surveying and
cadastral applications (Cunningham et al. 2011; Barnes et al. 2014; Manyoky et al. 2011;
Cramer et al. 2013), archaeology and architecture (Chiabrando et al. 2011), geology
(Eisenbeiss 2009), coastal management (Delacourt et al. 2009), disaster management
(Choi and Lee 2011; Molina et al. 2012), damage mapping (Vetrivel et al. 2015) and
cultural heritage (Remondino et al. 2011; Rinaudo et al. 2012). With this paper we aim to
investigate the suitability of UAVs for map creation and updating. Other applications
have also been demonstrated: for monitoring forest fires, Zarco-Tejada and Berni (2012)
used a fixed-wing UAV with thermal and hyperspectral sensors. Experiments have
also been reported for tree classification (Agüera, Carvajal and Pérez 2011), Normalized
Difference Vegetation Index (NDVI) calculation (Lucieer et al. 2012), and monitoring
stream temperatures (Costa et al. 2012).

Technically, UAVs can fly almost everywhere. Their high flexibility allows the observed
location and viewing angle to be changed easily and at short notice. For that reason it is
important to pay attention to the safety of other users of the airspace, including manned
and other unmanned aircraft, and of people and property on the ground, as well as to the
impact on the environment (Watts, Ambrosia, and Hinkley 2012). These concerns stem
from the question of how a flying system without a pilot on board can be safely deployed
in public space. With the ability to carry cameras, infrared
sensors and facial recognition technology, they can present a serious threat to privacy.
However, in daily practice one obstacle is a missing or unfavourable regulatory
framework. The wide diffusion and commercialisation of UAVs have pushed several national
and international associations to analyse the operational safety of UAVs through the
formulation of clear regulations. Motivations behind such regulations include the safety
of people on the ground and security against the misuse of UAVs which can result in
accidents and privacy violations as they are flying in shared public space. Also, having
clearly formulated regulations is important in order to provide a legislative environment
which enables the formulation of proper insurance policies. However, in the past years
each country has defined a different implementation of the UAV regulations and a
comprehensive and common set of rules is still missing. In June 2015, the Federal
Aviation Administration (FAA; https://www.faa.gov/uas/resources/uas_regulations_policy/)
defined a new set of rules. Operational limitations such as the maximum weight of the
vehicle (less than 25 kg), visual line-of-sight (VLOS) operation over rural or uninhabited
areas during daylight, a maximum ground speed of 100 mph, a maximum altitude of 400 feet
above the ground, a minimum weather visibility of 3 miles from the control station, etc.
were defined in detail. The rules concerning
the responsibilities and certification of the pilot were set as well. The same year, the
European Aviation Safety Agency (EASA) developed the Advance Notice of Proposed
Amendment 2015-10 (https://www.easa.europa.eu/system/files/dfu/A-NPA%202015-10.pdf),
categorising the vehicles into three classes: open (low risk),
specific operation (medium risk), and certified (higher risk). Regulations at European
level usually consider the following three elements: (i) the security and technical
specifications of the UAV (its weight and its ability to perform safe flights); (ii)
the skills of the pilot (who must be certified); and (iii) the surveyed area, which can be
uncritical (without people or important infrastructure) or critical (with people, etc.).
These three elements determine where flights are allowed and under which conditions
(Dolan and Ii 2013; Nex and Remondino 2014). At the time this project was carried out,
there were no particular restrictions or regulations on the use of UAVs in Rwanda, the
location of our case study, and flights could be performed after applying for flight
permission from the Rwanda Civil Aviation Authority (RCAA) and contacting the
landowners.
This overview shows that, all over the world, efforts are being undertaken to find a suitable
regulatory framework for the (professional) civil use of UAVs. From our point of view it
is thus worthwhile to investigate the use of UAV-based imagery for mapping
purposes, even if the conditions for their usage are not yet defined everywhere.

Methodology

In this section we describe the general UAV workflow. Using the case of Rwanda we
perform quantitative and qualitative control of the final UAV data products for the
purpose of map updating. As block deformations can be observed due to missing or
poorly distributed GCPs and insufficient camera calibration (Gerke and Przybilla 2016),
the importance of GCPs is analysed. Then, using the high-resolution output, the feature
extraction process based on clear feature digitisation rules is described and its accuracy
assessed. Such rules ensure the consistency in the updates by various technicians as well
as the compatibility between the previous map and updated version. A schematic
overview of the UAV workflow is shown in Figure 1.

Figure 1. UAV workflow.

Flight planning, image acquisition and GCP collection

Every aerial surveying project, whether using UAVs or classical acquisition techniques,
must start with flight planning. For mapping purposes, this includes activities such as
obtaining flight permission, selecting the software, and analysing the area and the
required pixel size on the ground (ground sample distance, GSD), among others.
Important aspects such as flying height, the use of GNSS and an Inertial
Navigation System (INS) on board, measurements of GCPs on the ground with required
technical equipment should be considered as well. The quality of the final product that
will be used for mapping - usually the orthophoto - depends greatly on the quality of the
acquired images. High overlaps (such as 80%) between images are generally used to
improve the quality of the final products, in particular the dense image matching results.
The high overlap is also recommended in order to avoid gaps owing to platform instability
induced by possible turbulence.
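As a rough planning aid, the relation between flying height, ground sample distance and exposure spacing can be scripted. The sketch below uses the standard pinhole relation GSD = H·p/f with purely illustrative camera parameters (not those of this study's camera); for fisheye lenses such as the one used here, the average GSD reported by the processing software can differ from this nominal nadir value.

```python
# Minimal flight-planning sketch; all parameter values are illustrative.

def nominal_gsd(flying_height_m, pixel_size_um, focal_length_mm):
    """Nominal nadir ground sample distance (m/pixel): GSD = H * p / f."""
    return flying_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

def exposure_spacing(gsd_m, image_size_px, overlap):
    """Distance between exposures (m) that yields the requested overlap."""
    footprint_m = gsd_m * image_size_px      # ground footprint along one axis
    return footprint_m * (1.0 - overlap)

gsd = nominal_gsd(flying_height_m=100.0, pixel_size_um=2.4, focal_length_mm=8.0)
base = exposure_spacing(gsd, image_size_px=4000, overlap=0.85)  # 85% forward overlap
print(f"nominal GSD: {gsd * 100:.1f} cm, exposure spacing: {base:.1f} m")
```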

Image orientation

The main objective of photogrammetry is to extract 3D information from 2D images. For
this purpose, the interior orientation (IO) of the images, defining the position of the
projection centre of the camera with respect to the image, the principal distance (focal
length) and the lens distortions, and the exterior orientation (EO), defining the position
of the camera projection centre and the rotation of its optical axis with respect to the
mapping frame, have to be computed. In traditional (manned) airborne
photogrammetry the adopted sensors are metric cameras. These cameras have stable
interior orientation parameters, usually estimated by the producer through a camera
calibration. The first examples of low weight metric frame cameras (Kraft et al. 2016) for
UAVs have been recently developed, but in the current study we focus on the low-cost
solutions as they still represent the large majority of the sensors installed on UAVs. These
solutions are non-metric consumer cameras and are already largely used for
photogrammetric mapping projects (Barnes et al. 2014). The consequence of using
cameras of this category is that IO parameters need to be estimated from the captured data
themselves (self-calibration).

To perform camera calibration and image orientation, common features (tie points)
visible in multiple images need to be extracted. The selection of homologous points in
the images is nowadays performed automatically using dedicated feature extraction
algorithms (Barazzetti, Scaioni and Remondino 2011). The mathematical relationship
between images, camera geometry and ground space is computed during block
triangulation, which is commonly combined with an error minimisation strategy and is
then referred to as bundle block adjustment (BBA). This includes the following tasks: (1)
determining the EO parameters of each image at the time of its exposure, together with
the IO parameters of each device, and (2) determining the ground coordinates of the tie
points measured in the image overlap areas. In a global error
minimisation procedure the errors associated with the image and camera geometry and
associated image measurements are distributed. If GCPs are provided, they help to
geometrically support the overall image block, and at the same time define a proper datum
for the object space. If GNSS/INS information is available on board the UAV, the
collected data help the automatic tie point extraction because overlapping images can be
predicted in advance. The navigation information also supports the georeferencing of the
whole image block, since the image projection centres are then already approximately
known (GNSS-assisted sensor orientation).
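For illustration, the BBA can be written as a non-linear least-squares problem over the (self-calibrated) IO parameters, the EO parameters and the tie point coordinates. This is the standard textbook formulation, not the specific implementation of any particular software package:

```latex
\min_{IO,\;\{R_i,\mathbf{C}_i\},\;\{\mathbf{X}_j\}}
\sum_{i}\sum_{j \in \mathcal{V}(i)}
\bigl\| \mathbf{x}_{ij} - \pi(IO, R_i, \mathbf{C}_i, \mathbf{X}_j) \bigr\|^{2}
```

where x_ij is the measured image coordinate of tie point j in image i, V(i) the set of points visible in image i, R_i and C_i the rotation and projection centre of image i, and π the collinearity (projection) model; GCP and GNSS observations enter as additional weighted terms that also fix the datum.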

When these data are not available or are of poor quality, indirect sensor orientation is
performed by incorporating GCPs. The distribution of GCPs is of high importance, not
only for the image orientation but also for the prevention of block deformation effects,
which may result from remaining systematic errors in the camera calibration. As
concluded by Gerke and Przybilla (2016), to avoid block deformations it is advisable to
plan cross-flights (different flight directions and altitudes) for some parts of the study
area. This procedure is especially helpful for enhancing the results over flat terrain,
owing to a more reliable self-calibration.

Dense point cloud and DSM generation

Once accurate IO and EO parameters are available, a dense matching technique should be
applied in order to represent the object space through dense point clouds. These point
clouds can later be structured, interpolated and, if needed, simplified and textured for
photo-realistic representation and visualisation (Nex and Remondino 2014). A large
number of image matching techniques have been developed and presented in the
photogrammetric and computer vision communities over the last decade. These can be
divided into two main classes: (i) patch-based approaches and (ii) semi-global
approaches. An example of the first class is the multi-image approach presented by
Furukawa and Ponce (2010), while an example of the second group is given by Hirschmuller
(2008) and subsequent refinements (Rothermel et al. 2012; Wenzel, Rothermel, and
Fritsch 2013). Patch-based approaches are very often multi-image (i.e. they use many
images simultaneously to determine homologous points and their 3D position), while
semi-global approaches work with stereo-pairs and then merge the generated point clouds
into a single dataset afterwards. Their quality is influenced by flight parameters, such as
resolution on the ground (GSD), image overlap and sensor quality. The presence of
occlusions, shadows and regions with reduced texture might lead to gaps and increases
the noise of the generated DSM, which is interpolated from the point cloud. For a more
detailed description of matching algorithms, refer to Remondino et al. (2014).
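To make the second class concrete, the snippet below runs OpenCV's semi-global block matcher on a rectified stereo pair. This is a generic, minimal sketch with hypothetical input files; the commercial pipeline used later in this study implements its own (patch-based) matcher.

```python
# Semi-global matching on a rectified stereo pair (OpenCV); illustrative only.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # disparity search range in pixels, multiple of 16
    blockSize=5,
    P1=8 * 5 ** 2,        # penalty for small disparity changes (smoothness)
    P2=32 * 5 ** 2,       # larger penalty for abrupt disparity jumps
)
# OpenCV returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype("float32") / 16.0
# with focal length f (pixels) and baseline B (m), depth follows as Z = f * B / d
```

Each stereo pair yields one disparity (and hence depth) map; as described above, a multi-view pipeline then merges the per-pair point clouds into a single dataset.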

True-Orthophoto creation

The final representation to be used for mapping is created through orthorectification,
which requires precise surface information to remove the projective distortions of the
original images. The orthophoto represents all objects in a map-like projection. The
required surface may be obtained from a Digital Terrain Model (DTM) or a DSM; in the
latter case, the result of the orthorectification is called a true-orthophoto. The depiction of the scene
in the airborne images, combined with its accurate geometry given by the orthogonal
projection, allows even users unskilled in cartography to understand, read and accurately
measure objects present in the image (Biasion, Dequal and Lingua 2004). However, in
order to use this final product as a reliable source for feature extraction, the created
orthophoto should be free from distortions and mis-projections. Therefore, qualitative and
quantitative assessments and correction of the orthophoto might be necessary. For
example, building facades should not be visible in true orthophotos, although they might
be visible in the original images or conventional orthophotos (Figure 2).

Figure 2. An example of facade visibility in the original image (a), which should be
removed to obtain a true orthophoto (b).
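At its core, orthorectification is a backward projection: each cell of the output ground grid takes its height from the DSM and is projected into an oriented source image to fetch its colour. The following schematic sketch assumes a simple pinhole model and hypothetical inputs, and omits the lens-distortion correction, image selection and occlusion tests that a true-orthophoto pipeline performs:

```python
import numpy as np

def orthorectify(image, K, R, C, dsm, x0, y0, cell):
    """Backward-project every DSM cell into one oriented image.
    K: 3x3 interior matrix, R: rotation matrix, C: projection centre,
    (x0, y0): upper-left ground coordinates, cell: ground cell size (m)."""
    rows, cols = dsm.shape
    ortho = np.zeros((rows, cols), dtype=image.dtype)
    for r in range(rows):
        for c in range(cols):
            # ground point of this orthophoto cell, with its height from the DSM
            X = np.array([x0 + c * cell, y0 - r * cell, dsm[r, c]])
            uvw = K @ R @ (X - C)                  # collinearity: object -> image
            u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
                ortho[r, c] = image[vi, ui]        # nearest-neighbour resampling
    return ortho
```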

Feature extraction

The final orthophoto and the DSM are very useful for manual or semi-automatic feature
extraction for map creation or updating. Highly accurate photogrammetric products
acquired through traditional platforms such as satellites or aircraft have been used for
feature extraction for decades (Madzharova et al. 2008; Gruen et al. 2012). Their added
value compared to the conventional surveying methods in terms of time, costs and
accuracy has been proved, even though there are some disadvantages due to the lack of
attribute data (e.g. street names) or occluded features. The current study aims to investigate
the suitability of orthophotos, specifically those produced from UAV images, for map
creation and updating. To obtain high-quality vectorised maps in an ordered, clear and
complete way, the feature extraction procedure should be guided by well-formulated
rules. In practice, such rules and guiding explanations are combined in so-called
"extraction guides". Before the manual feature extraction is actually undertaken, these
rules should be accepted and approved by the authorised specialists (such as heads of
departments in municipalities). During digitisation, a unique identifier should
be assigned to each extracted feature in the database, along with the attributes defined by
the extraction guides.
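Purely as an illustration of what such a rule might look like (the feature definition, codes and attribute names below are hypothetical, not those of the Rwandan basemap or the extraction guide actually used), one entry of an extraction guide could be encoded as follows:

```python
# Hypothetical extraction-guide entry; all names and codes are illustrative.
building_rule = {
    "feature": "Building",
    "geometry": "Polygon",
    "code": "BLD",
    "definition": "Any roofed structure visible in the orthophoto",
    "digitisation": "Outline the roof edge; close the polygon; snap shared "
                    "walls to neighbouring buildings to avoid gaps and slivers",
    "attributes": ["unique_id", "use_type", "source_date", "operator"],
}
```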

Geometric quality assessment

The quality of the image orientation and of the orthophoto was analysed both qualitatively
and quantitatively. Possible deformations of the orthophoto include radiometric errors
caused by imperfect image blending and radiometric differences between the individual
UAV images. Deformations can also be visible due to imperfections in the DSM, which
cause faulty orthorectification of the individual images. Deformations and artefacts
identified through visual inspection are presented, their respective causes are discussed,
and measures which can be taken to avoid or ameliorate the defects are described.

Case study in Rwanda

Existing geospatial data

During recent years, Rwanda has been growing in terms of population and economic
development, which raises the need for up-to-date geospatial information to facilitate
timely and efficient planning. To support this idea, a traditional aerial photogrammetric
mission was performed over the entire country in 2008 and 2009. Elevation data was
generated and orthophotos with high accuracy were produced. A basemap covering the
whole country (26,338 km²) was created through manual digitisation. It includes the
following features:

• Boundaries (administrative boundaries)
• Hydrography (rivers and lakes)
• Elevation (contour lines every 25 m, DEM)
• Physical infrastructure (railways, road network, airport, powerlines, harbours, signposts)
• Social infrastructure (schools, health facilities, government offices, markets, place names)
• Thematic data (land cover, land use, built-up areas, parcels)

In 2010, the orthophoto was also used for the production of the Rwanda National Land
Use Master Plan at a scale of 1:250 000 and the creation of topographic maps at a scale
of 1:50 000. This represented significant progress for the country since, in most
developing countries, the maps in use date from the 1970s. Unfortunately, there have been
no efforts to update this information since then. Hence, this case study aims to apply a
state-of-the-art acquisition technique to partially update the derived information and thus
support future development.

Study area

The area for the current study covers an informal settlement located in Nyarutarama cell,
City of Kigali, Rwanda (Figure 3). This area was selected as there have been major
changes since the orthophoto was produced (2009), and it thus illustrates the importance
of updating the national basemap which forms the foundation of many policy and
development decisions.

Figure 3. Location of the study area and distribution of GCPs.

Results

Data Acquisition, image processing and orthophoto creation

In May 2015, UAV flights were operated over the area using a DJI Phantom 2 Vision
Plus quadcopter, the main characteristics of which are shown in Table 1. Using the
Pix4DCapture app (https://pix4d.com/), a flight plan was defined above the study area. The UAV flew
autonomously at an altitude of 50 m above the ground, resulting in an average GSD of
3.3 cm. A common problem when using the Phantom is a difference between the planned
number of images and the number actually acquired: for our study, a total of 1172 geotagged
nadir images were planned but only 954 were obtained. Nevertheless, an average of 85%
forward and 75% side-overlaps were achieved. The duration of the flight over the study
area, including take-off and landing, was approximately 2 hours, and the identification
and marking of the GCPs in the images, the image orientation, dense image matching and
orthophoto creation took about 2 days on a standard desktop computer.

Model                  DJI Phantom 2 Vision+
Camera                 Phantom Vision FC200
Resolution             16 MP
Sensor width x height  6.48525 x 4.86394 mm
Image width x height   4608 x 3456 pixels
Pixel size             1.4 µm
Focal length           5 mm
Maximum flight time    25 min
Geolocation            On-board GNSS

Table 1. UAV properties.

In the case of these non-metric cameras a self-calibration is mandatory rather than
optional, hence in our case the indirect georeferencing cannot be decoupled from the
sensor calibration. The acquired images were processed using the Pix4D Mapper
photogrammetric software. The camera installed on the Phantom 2 has a fisheye lens and
acquires the images using a rolling shutter sensor. Fisheye lenses have the advantage of
acquiring images with large fields of view and high resolution but they require dedicated
camera calibration models to generate results comparable to conventional perspective
cameras (Strecha et al. 2015). As already discussed, the cameras installed on a UAV are
non-metric and a self-calibration in the Bundle Block Adjustment is therefore mandatory.
An electronic ("rolling") shutter exposes and reads the scene line by line, instead of
exposing the entire image at once. The additional distortions this introduces have to be
considered and modelled when the UAV is flying fast or when dynamic objects are
captured (Baker et al. 2010; Grundmann et al. 2012). In the case of UAVs, the effect of
rolling shutter cameras can be modelled by assuming a constant motion speed, and it has
been demonstrated that this model can provide results comparable to those of global
shutter cameras (Vautherin et al. 2016). The rolling shutter model presented in that paper
is also implemented in Pix4D and was therefore adopted for the processing of the image
block as an additional experiment.
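Under the constant-motion assumption mentioned above, the correction can be sketched as follows; this is a generic formulation, not necessarily the exact parameterisation implemented in the software. Each image row r is exposed at its own time and therefore receives its own projection centre along a linear trajectory:

```latex
t_r = t_0 + r\,\Delta t, \qquad
\mathbf{C}(t_r) = \mathbf{C}_0 + \mathbf{v}\,(t_r - t_0)
```

where Δt is the line readout time, C_0 the projection centre at the start of the exposure and v the (estimated) constant platform velocity during readout.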

The Phantom 2 carries a consumer grade GNSS (L1, code signal, expected absolute
position accuracy 5-10 m), so high-quality GCPs were collected in order to be able to
produce accurately georeferenced products. These points should be measured on static
objects which are easily identifiable in the UAV imagery. Figure 3 shows the layout of
the points which were surveyed on ground using a Real-Time Kinematic (RTK) GNSS
with an approximate accuracy of 2 cm. The local coordinate system, TM_Rwanda, was
utilised. The surveyed points were split into two groups: 7 were used as GCPs for the image
block orientation and 6 were used as Check Points (CPs) to assess the accuracy. All points
were marked in at least 14 UAV images.

After the image orientation, the dense matching process was initiated using the full
resolution of the images to generate a very dense point cloud. The employed software
uses a patch-based approach. This process led to the generation of millions of points that
were interpolated into the DSM. Using the obtained DSM, the orthorectification process
was performed in order to remove relief distortions and to produce a true-orthophoto.

Image orientation experimental results

Three experiments were conducted for the quantitative analysis. For image orientation,
the first is GNSS-assisted sensor orientation, in which no GCPs captured with geodetic
receivers are involved. It provides an indication of the accuracy which can be achieved
by this UAV in situations where no external GCPs are available. As UAVs may be utilised
for rapid mapping applications in areas with hazardous or limited human accessibility,
where taking ground control points is difficult or impossible, knowing the accuracy of an
orthophoto obtained using only the on-board GNSS helps to estimate how accurate the
resulting model is. This procedure might be very useful for mapping tasks where lower
accuracy is acceptable. As Figure 4 shows, the image orientation with only geotags
resulted in a low geolocation accuracy, achieving an RMSE at check points of 0.19 m in
X, 0.77 m in Y, and 5.67 m in Z. Although this accuracy may be adequate for some
mapping applications, these errors are still very large compared to the spatial resolution
of the orthoimagery. The low accuracy is primarily due to the insufficient accuracy of the
navigation-grade GNSS units used to record camera position at the time of image
acquisition. Regarding the low vertical accuracy, the error may also be due to the fact that
the UAV used for the present study did not record the absolute altitude of the drone in the
image geotags but rather the relative height above the take-off location. Here, the absolute
height was obtained by extracting the elevation of the take-off point from a national DEM
(with a spatial resolution of 5 m), introducing errors in the final model. In order to mitigate
the systematic shift induced by this procedure one could simply shift the whole block
according to the mean error. This, however, requires some sort of ground truth.

Figure 4. RMSE at the check points (in metres) for experiment 1 (GNSS-assisted sensor
orientation), experiment 2 (indirect sensor orientation including GCPs, without rolling
shutter correction) and experiment 2b (indirect sensor orientation including GCPs, with
rolling shutter correction). The values shown in the figure are:

                                              X (m)   Y (m)   Z (m)
Exp 1: GNSS-assisted sensor orientation       0.19    0.77    5.67
Exp 2: GCPs, without rolling shutter model    0.13    0.07    0.07
Exp 2b: GCPs, with rolling shutter model      0.04    0.05    0.20

The second experiment includes high-precision GCPs in the image orientation phase, thus
maximising the accuracy of the image orientation and the subsequent orthophoto production.
After including the 7 GCPs in the image orientation, the accuracy improved significantly
(Figure 4). The model achieves an RMSE at the check points of 0.13 m (3.94 GSD) in X,
0.07 m (2.12 GSD) in Y and 0.07 m (2.12 GSD) in Z, thereby meeting the requirements
of the present study. The third experiment not only includes the high-precision GCPs, but
also introduces the rolling shutter correction. This greatly improves the planimetric
accuracy, reducing the RMSE at the check points to 0.04 m (1.21 GSD) in X and
0.05 m (1.51 GSD) in Y, though it slightly increases the RMSE in Z to 0.20 m (6.06 GSD).
The minimum, maximum, mean, and standard deviations of the check point residuals for
the three experiments are also provided in Table 2.

Table 2. The minimum, maximum, mean and standard deviations of the check point
residuals for all three experimental set-ups.
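For reference, the residual statistics reported in Figure 4 and Table 2 follow the usual definitions; a minimal sketch with hypothetical coordinate arrays:

```python
import numpy as np

def check_point_stats(measured, surveyed):
    """Per-axis residual statistics for check points.
    measured, surveyed: (n, 3) arrays of X, Y, Z coordinates in metres."""
    residuals = measured - surveyed
    return {
        "min": residuals.min(axis=0),
        "max": residuals.max(axis=0),
        "mean": residuals.mean(axis=0),
        "std": residuals.std(axis=0, ddof=1),
        "rmse": np.sqrt(np.mean(residuals ** 2, axis=0)),
    }
```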

After the image orientation the dense matching was performed. The image matching
results of the third experiment, including GCPs and rolling shutter correction, have been
published and can be freely viewed online at
https://www.itc.nl/eos_public/rwanda2015/portal.html (compatibility verified with
Google Chrome on a Windows platform).

Qualitative assessment of the orthophoto

The processing steps as explained in the previous section resulted in a high quality RGB
orthophoto (Figure 5) with a GSD of 3.3 cm and a radiometric resolution of 8 bits. A
visual inspection of the image indicates that it is suitable for visual interpretation, as
features are clearly visible and objects can be easily extracted.

Figure 5. The orthophoto obtained from UAV imagery of the study area in Kigali.

However, some minor deformations were detected and will be further analysed here with
the purpose of illustrating which types of errors may be present in orthophotos obtained
from UAV imagery. Examples of such distortions are given in Figure 6. Problems with
facade visibility caused by DSM imperfections were observed (Figure 6a). This can be
mitigated by improving the image overlap during the UAV flights. Moving objects such
as cars may be located in different areas due to the slight time delay between the
acquisition of sequential images. This may cause the problem that moving objects appear
multiple times in the orthophoto (Figure 6b). Such effects can be manually removed if
there is sufficient overlap between the images. For the current study, this problem was
solved using the “mosaic editor” in Pix4D Mapper, which allows the user to select which
image is used to produce a section of the orthophoto. Distortions attributed to the fisheye
lens of the UAV camera were visible in areas at the border of the image block (Figure
6c). Also, standing objects such as light poles and pylons may cause difficulties in the
dense matching and DSM generation steps, which results in wrong reconstructions in the
orthomosaic (Figure 6d). Finally, the DSM interpolation method influences the
orthophoto as well. For example, using a direct interpolation method may cause noise at
overhanging roof edges (Figure 6e), whereas Inverse Distance Weighting interpolations
may cause rounded roof corners (Figure 6f). This final aspect is further addressed in the
discussion section of this paper.


Figure 6. Possible distortions in the UAV orthophoto include: (a) facade visibility, (b)
moving objects, (c) feature distortion due to insufficient overlap and lens distortions,
(d) distortions of high poles, and distortions due to the DSM interpolation method used,
such as grainy roof edges (e) or rounded roof corners (f).

Quantitative assessment of the orthophoto

The quantitative assessment of the orthophoto consists of two aspects: (i) the accuracy at
the measured control points and (ii) the geometric accuracy of objects measured in the
orthophoto. The same check points utilised to verify the image orientation step yield an
RMSE of 4.8 cm in X and 3.7 cm in Y, corresponding to a planimetric accuracy of
√(4.8² + 3.7²) ≈ 6.0 cm. The results also show that the level of detail and the radiometric
quality of the orthophoto are fully comparable to the quality of the input images. The
obtained error meets the requirements for the horizontal accuracy class of 7.5 cm of the
new standard (ASPRS 2015).

The horizontal accuracy obtained without ground control points, using the GNSS-assisted
sensor orientation method, was around 80 cm. Similar studies using various UAVs to
obtain orthophotos report accuracies of 1 m at 8 cm GSD using on-board GNSS
(Skarlatos et al. 2013), 90 cm (Haitao and Lei 2010), 2-8 m for 130-900 m flying altitude
(Küng et al. 2012), and 2-4 m for 50-100 m flying height (Eugster and Nebiker 2007).

The second part of the quantitative assessment consisted of analysing the geometric
accuracy of the produced orthophoto at the object level. A number of permanent objects
were measured in the field as well as digitised over the orthophoto (see Figure 7). The
results indicate that measurements in the orthophoto replicate the field measurements to
within 0.6% of the actual dimensions (Table 3).

Figure 7. Example of distances measured on the ground.

Object   L field (m)   L ortho (m)   Error (m)   Error (%)
1        56.200        56.221        0.021       0.037
2         0.700         0.704        0.004       0.568
3         1.820         1.822        0.002       0.110

Table 3. Comparison between physical objects measured in the field (L field) and in the
orthophoto (L ortho).

Map creation and basemap updating

The orthophoto was then used to extract features to update the existing basemap. The
original basemap was used to define the extraction guides for updating existing layers.
However, the high resolution and level of detail of the UAV orthophoto make additional
objects of interest visible. This provides the opportunity to create new vector datasets
representing topographic features (such as drainage and narrow footpaths), potentially
enabling more informed decision-making for various urban planning
activities. Extraction guidelines were also developed for: road centrelines, roads, built-up
areas, footpaths, and drainage. For each of these datasets, the extraction guide provided a
definition and description of the features to be included, and mandated how these should
be digitized. The guide also described the geometry type, unique code, and required
attributes of each feature.

In total, 948.7 m of roads, 16553.0 m² of buildings, 778.8 m of drainage, 1510.3 m of
footpaths, and 1078.0 m² of schools were digitised. After digitisation, their spatial
accuracy was checked by comparing a number of digitized coordinates with those
measured in the field. The features were digitised with an average error of 1.3 cm in X
and 3.2 cm in Y, and a planimetric RMSE of 8.8 cm. This is acceptable for mapping
according to the new combined standard (ASPRS 2015). The augmented resolution
greatly enhanced the interpretability of the orthophoto for feature extraction purposes.
Not only could a variety of features be identified and distinguished more easily, but the
high resolution also allowed the convenient extraction of new features. For
example, features such as drainage lines (both man-made and eroded gullies) and
structures such as lamp posts and electricity poles were very clearly visible.

Metadata was added to the vector layers according to ISO 19115 (2003) to document their
quality and facilitate future analyses using the updated basemap. Two examples of 1:1000
scale maps were produced for demonstration purposes: a cadastral (Figure 8) and a
topographic map (Figure 9). Furthermore, generalisation procedures were applied to
rescale the features digitised over the orthophoto at the scale of the national basemap
(1:50 000) and update the existing basemap (Figure 10). Topological control and
metadata creation (following the ISO 19115:2003 template) were conducted before completing the updating
procedure.

Figure 8. Cadastre map 1:1000 scale.

Figure 9. Topographic map 1:1000 scale.


Figure 10. Map update 1:50000 (before and after: inserted).

Discussion

The results indicate that with a low-cost UAV and photogrammetric techniques, it is
possible to obtain high-quality products. The final orthophoto with a positional accuracy
of 6.0 cm proves to be suitable to extract features (accuracy of 8.8 cm), which is
acceptable for mapping according to the new combined standard (ASPRS 2015).
Compared to the time and cost of traditional photogrammetric surveys, this technique
represents a promising alternative. Unclear or restrictive legislation and computer
processing requirements for large projects are two of the main challenges faced by the
application of UAVs for large-scale basemap updating. However, there are projects
currently addressing such issues, for example by reducing the computational
requirements of processing (NLeSC, 2016) in MicMac, an open source photogrammetric
software6 (IGN, France). Furthermore, technological developments such as GPU
processing are continuously reducing the computational bottleneck.

The quality of UAV photogrammetric products depends on a number of factors. Visual
errors in the final orthophoto were mainly due to DSM deformation, which was caused
by a lack of sufficient overlap during image acquisition. Therefore,
the generated point cloud lacked the required density to perform the geometric
reconstruction of certain objects. This stresses the importance of rigorous flight planning
and precise image acquisition in order to guarantee the quality of the final product. Other
deformations, such as the duplication of moving objects, can be removed by manually
adjusting the input images selected for the creation of the orthophoto. Such manual
processes, however, are also necessary for conventional manned airborne based products.

The orthophoto quality also depends on which algorithm is used to interpolate the DSM
from the point cloud. A common method is triangulation, which may result in noise
around overhanging roof edges (Figure 6e). Other interpolation methods such as Inverse
Distance Weighting improve the visual appearance of overhanging roof edges but cause
rounded roof corners (Figure 6f). Such observations stress that the quality of the DSM
and the orthophoto also depends on which software and algorithms are utilised.

Regarding the georeferencing accuracy, a comparison was made between using only on-
board GNSS and including external GCPs for image orientation. The achieved results
indicate that GNSS-assisted sensor orientation using the coordinates provided by the
UAV resulted in low accuracy (see Figure 4). This is due to the use of a consumer grade
GNSS. This demonstrates that for maximal accuracy, there is still a dependency on high-
quality GCPs. GCP quality can be influenced by the precision of the surveying equipment
used, their distribution throughout the study area and the positioning error introduced
when manually marking the GCPs in the UAV images. Errors incurred in any of these
elements will have an impact on the accuracy of the final product.

Conclusions and recommendations

This work demonstrates that UAVs provide promising opportunities to create a high-
resolution and highly accurate orthophoto, thus facilitating map creation and updating.
Through an example in Rwanda, the photogrammetric process of obtaining an orthophoto
from the individual UAV images is explained. With the support of external high precision
GCPs, the orthophoto created for the case study has planimetric and vertical accuracies
better than 8 cm, thus meeting the requirements for 1:1000 scale maps. A number of factors
that influence the quality of the orthophoto are highlighted, as well as possible strategies
which can be adopted to mitigate these imperfections.

This important part of the study shows that due to the high resolution of the UAV
orthophoto, new features can be easily extracted and various outputs can be produced.
For the basemap updating task of the case study area, clear digitisation rules for feature
extraction were defined in order to ensure logical consistency. A so-called extraction
guide was created to steer the feature extraction process; it consists of a clear description
of the objects, requirements for their attributes and specifically defined digitisation rules.

The important role of GCPs in increasing the accuracy of the obtained orthophoto is also
demonstrated here. As reported, the geolocation accuracy without external GCPs is
relatively low. This can be resolved through the collection of high-quality GCPs in the
field, which requires extra time for their collection and insertion into the software.
Therefore, UAVs are currently more suitable for map updating projects over a limited
study area and incremental map updating. However, rapid developments in both UAV
platforms, increasing the area covered per flight and improving the accuracies of the on-
board GNSS (Gerke and Przybilla 2016), as well as photogrammetric software will likely
facilitate the processing of larger projects in the foreseeable future.

Notes on contributors

Mila Koeva is an Assistant Professor working in 3D Land Information. She holds a PhD
in 3D modelling in architectural photogrammetry from the University of Architecture,
Civil Engineering and Geodesy in Sofia, as well as an MSc degree in Engineering
(Geodesy) obtained from the same institution in 2001. After working at the municipal
company GIS-Sofia Ltd. and later at the private company Mapex JSC, combining
multidisciplinary approaches for cadastral needs, she moved to the Faculty of
Geo-Information Science and Earth Observation (ITC) of the University of Twente, where
she has taught photogrammetry and remote sensing. Her main areas of
expertise include 3D modelling and visualization, 3D Cadastre, 3D Land Information,
UAV, digital photogrammetry, image processing, large scale topographic and cadastral
maps, GIS, application of satellite imagery among others. She is co-chair of the ISPRS
working group WG IV/10: Advanced geospatial applications for digital cities and
regions.

Muneza Jean Maurice is currently working as a photogrammetrist at the Rwanda Natural
Resources Authority. He obtained his BSc degree in 2010 at the National University of
Rwanda and, in 2014, his Master's degree in Environmental Studies at the Open
University of Tanzania. In 2015 he obtained his second Master's degree, in
Geoinformatics, at the Faculty of Geo-Information Science and Earth Observation (ITC) of
the University of Twente. His main interests are in photogrammetry, surveying, disaster
and humanitarian relief, economic empowerment, science and technology.

Caroline M. Gevaert is currently pursuing a Ph.D. degree at the Faculty ITC, University
of Twente in the Netherlands. The objective of the research is to investigate how UAVs
may be useful for informal settlement mapping. Her research interests span a wide range
of topics from image and point cloud feature extraction methods to advanced machine
learning techniques, as well as the potential societal impact and ethics of the use of UAVs
in this context. She previously received an M.Sc. degree in Remote Sensing from the
University of Valencia (Spain) and an M.Sc. degree in Geographic Information Science
from Lund University (Sweden).

Markus Gerke has been an Assistant Professor since 2007 in the Department of Earth Observation
Science at the University of Twente. He specializes in geometric and semantic
information extraction from airborne image sequences, photogrammetry and 3D
modelling. He obtained Masters and PhD degrees at the Leibniz University of Hannover,
and contributes to projects, including projects funded nationally and by the EC, on 3D
topography, virtual city modelling, use of un-calibrated airborne imagery and close-range
photogrammetry. He is co-chair of the ISPRS working group II/4 on “3D Scene
Reconstruction and Analysis” and PI and Co-PI of two well-recognized benchmark tests
in the field of photogrammetry and remote sensing.

Francesco Nex received his master's degree in Environmental Engineering (2006) and his
PhD degree (2010) from the TU Turin (Italy). From 2011 to 2015 he was with the 3DOM
unit of FBK (Italy) working as Marie-Curie Post-Doc in the CIEM Project from 2011 to
2014 and then as Researcher. In 2015, Francesco moved to the University of Twente,
where he is currently Assistant Professor at the faculty of Geo-Information Science and
Earth Observation (ITC), department of Earth Observation Science (EOS). His main
research interests are in the use of UAV platforms and oblique imagery as well as the
automation in the feature extraction from this data. He is currently involved in three
European research projects dealing with UAV imagery. He is the Chairman of the ICWG
I/II (Unmanned Aerial Vehicles: sensors and applications) of the ISPRS and he was the
PI of the ISPRS scientific initiative on the integration of UAVs and oblique imagery.

Acknowledgement

The authors would like to express their gratitude to ITC (https://www.itc.nl/), which
financed the UAV image acquisition, and to the Rwanda Natural Resources Authority and
the officials of the City of Kigali for their support. We also thank Pix4D for providing us
with a research licence for Pix4Dmapper.

References

Agüera, F., Carvajal, F., & Pérez, M. 2011. Measuring sunflower nitrogen status from an
unmanned aerial vehicle-based system and an on the ground device. In ISPRS Ann.
Photogramm. Remote Sens. Spatial Inform. Sci, XXVIII-1/C2, pp.33-37.

Alexandrov, A., Hristova, T., Ivanova, K., Koeva, M., Madzharova, T., & Petrova, V. 2004a.
Application of QuickBird satellite imagery for updating cadastral information. In The
International Archives of Photogrammetry, Remote Sensing and Spatial Information
Sciences, Volume XXXV, Part B2, pp. 386-391.

Alexandrov, A., Hristova, T., Ivanova, K., Koeva, M., Madzharova, T., & Petrova, V. 2004b.
Application of satellite imagery for revision of topographic map of Sofia. In 4th
International PHOTOMOD users conference (Proceedings), Minsk, pp. 3-10.

Ali, Z., Tuladhar, A., and Zevenbergen, J. 2012. An integrated approach for updating cadastral
maps in Pakistan using satellite remote sensing data. In International Journal of Applied
Earth Observation and Geoinformation, 18, pp. 386-398.

American Society for Photogrammetry and Remote Sensing (ASPRS). 2015. ASPRS Positional
Accuracy Standards for Digital Geospatial Data. Photogrammetric Engineering and Remote
Sensing, 81(3), A1–A26.

Awad, M.M. 2013. A Morphological Model for Extracting Road Networks from High-Resolution
Satellite Images. Journal of Engineering, 2013, pp. 1–9.

Babawuro, U. and Beiji, Z. 2012. Satellite Imagery Cadastral Features Extractions using Image
Processing Algorithms: A Viable Option for Cadastral Science. International Journal of
Computer Science Issues, 9(4(2)), pp. 30-38.

Baker, S., Bennett, E., Kang, S. B. and Szeliski, R. 2010. Removing rolling shutter wobble.
In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE,
pp. 2392-2399.

Barazzetti L, Scaioni M, and Remondino F. 2011. Orientation and 3D modeling from markerless
terrestrial images: combining accuracy with automation. The Photogrammetric Record,
25(132), pp. 356–381.

Barnes, G., Volkmann, W., Sherko, R., Kelm, K. 2014. Drones for Peace: Part 1 of 2. Design and
Testing of a UAV-based Cadastral Surveying and Mapping Methodology in Albania. In:
World Bank Conference on Land and Poverty, Washington DC, USA, 24-27 March 2014.

Biasion, A. Dequal, S. and Lingua, A. 2004. A New Procedure for the Automatic Production of
True Orthophotos. The International Archives of Photogrammetry, Remote Sensing and
Spatial Information Sciences, 35, pp. 1682-1777.

Chiabrando, F. Nex, F. Piatti, D. and Rinaudo, F. 2011. UAV and RPV systems for
photogrammetric surveys in archaelogical areas: two tests in the Piedmont region (Italy).
Journal of Archaeological Science, 38(3), pp. 697-710.

Choi, K. and Lee, I. 2011. A UAV-based close-range rapid aerial monitoring system for
emergency responses. The International Archives of Photogrammetry, Remote Sensing and
Spatial Information Sciences, Volume XXXVIII-1/C22, 2011, pp.247-252.
Colomina, I. and Molina, P. 2014. Unmanned aerial systems for photogrammetry and remote
sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, Volume 92, pp.
79-97.

Costa, F.G. Ueyama, J. Braun, T. Pessin, G. Osório, F.S. and Vargas, P.A. 2012. The use of
unmanned aerial vehicles and wireless sensor network in agricultural applications. In: IEEE
International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany,
22-27 July 2012.

Cramer, M. Bovet, S. Gültlinger, M. Honkavaara, E. McGill, A. Rijsdijk, M. Tabor, M. and
Tournadre, V. 2013. On the use of RPAS in national mapping—The EUROSDR point of
view. The International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences, Volume XL-1/W2, 2013, pp.93-99.

Cunningham, K. Walker, G. Stahlke, E. and Wilson, R. 2011. Cadastral Audit and Assessments
Using Unmanned Aerial Systems. In: UAV-g: Conference on Unmanned Aerial Vehicle in
Geomatics. Zurich, Switzerland, 14-16 September 2011.

Delacourt, C. Allemand, P. Jaud, M. Grandjean, P. Deschamps, A. Ammann, J. Cuq, V. and
Suanez, S. 2009. DRELIO: An unmanned helicopter for imaging coastal areas. Journal of
Coastal Research, 56 (SI), 1489-1493.

Dolan, A.M. and Ii, R.M.T. 2013. Integration of Drones into Domestic Airspace : Selected Legal
Issues. Current Politics and Economics of the United States, Canada and Mexico, 15(1), pp.
107-137.

Eisenbeiss, H. 2009. UAV Photogrammetry. Ph.D. Thesis. Institut für Geodesie und
Photogrammetrie, ETH-Zürich. Zürich, Switzerland.

Eugster, H. and Nebiker, S. 2007. Geo-registration of video sequences captured from Mini UAVs
approaches and accuracy assessment. In: The 5th International Symposium on Mobile
Mapping Technology. Padua, Italy, 29-31 May, 2007.

Furukawa, Y. and Ponce, J. 2010. Accurate, dense, and robust multiview stereopsis. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 32(8), pp. 1362-1376.

Gerke, M. and Przybilla, H.J. 2016. Accuracy Analysis of Photogrammetric UAV Image Blocks:
Influence of Onboard RTK-GNSS and Cross Flight Patterns. Photogrammetrie-
Fernerkundung-Geoinformation, 2016(1), pp. 17-30.

Grenzdörffer, G.J. and Niemeyer, F. 2011. UAV based BRDF-measurements of agricultural
surfaces with pfiffikus. International Archives of the Photogrammetry, Remote Sensing and
Spatial Information Sciences, 38(1/C22), pp.229-234.

Gruen, A., Baltsavias, E., & Henricsson, O. (Eds.). 2012. Automatic extraction of man-made
objects from aerial and space images (II). Birkhäuser.

Grundmann, M., Kwatra, V., Castro, D. and Essa, I. 2012. Calibration-free rolling shutter
removal. In: Computational Photography (ICCP), 2012 IEEE International Conference on,
IEEE, pp. 1-8.

Haitao, X. and Lei, T. 2010. Method for automatic georeferencing aerial remote sensing(RS)
images from an unmanned aerial vehicle (UAV) platform. Biosystems Engineering, 108(2),
pp. 104-113.

Heipke, C. Woodsford, P. A. and Gerke, M. 2008. Updating geospatial databases from images.
In: Advances in Photogrammetry, Remote Sensing and Spatial Information Sciences: 2008
ISPRS Congress Book. Boca Raton: CRC Press, pp. 355-362.

Herold, M. Goldstein, N.C. and Clarke, K.C. 2003. The spatiotemporal form of urban growth:
measurement, analysis and modeling. Remote Sensing of Envorinment, 86(3), pp. 286-302.

Horkaew, P. Puttinaovarat, S. and Khaimook, K. 2015. River boundary delineation from remote
sensed imagery based on SVM and relaxation labeling of water index and DSM. Journal of
Theoretical and Applied Information Technology, 71(3), pp. 376–386.

Hirschmuller, H. 2008. Stereo Processing by Semiglobal Matching and Mutual Information. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 30(2), pp.328-341.

Independent Expert Advisory Group on a Data Revolution for Sustainable Development (IEAG), 2014. A
world that counts: mobilizing the data revolution for sustainable development. [pdf]
Available at: < http://www.undatarevolution.org/wp-content/uploads/2014/12/A-World-
That-Counts2.pdf > [Accessed 19 April 2016].

Kraft, T. Geßner, M. Meißner, H. Przybilla, H.J. and Gerke, M. 2016. Introduction of a
Photogrammetric Camera System for RPAS with Highly Accurate GNSS/IMU Information for
Standardized Workflows. ISPRS-International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, Volume XL-3/W4, pp. 71-75.

Küng, O. Strecha, C. Beyeler, A. Zufferey, J.-C. Floreano, D. Fua, P. and Gervaix, F. 2012. The
Accuracy of Automatic Photogrammetric Techniques on Ultra-Light Uav Imagery. In:
UAV-g: Conference on Unmanned Aerial Vehicle in Geomatics. Zurich, Switzerland, 14-16
September 2011.

Liu, H., and Jezek, K. C. 2004. Automated extraction of coastline from satellite imagery by
integrating Canny edge detection and locally adaptive thresholding methods. International
Journal of Remote Sensing, 25(5), pp. 937–958.

Lucieer, A., Robinson, S., Turner, D., Harwin, S. and Kelcey, J. 2012. Using a micro-UAV for
ultra-high resolution multi-sensor observations of Antarctic moss beds. International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences,
XXXIX-B1, pp. 429-433.

Madzharova, T., Petrova, V., Ivanova, K. and Koeva, M.N. 2008. Mapping from high resolution
data in GIS SOFIA Ltd. ISPRS 2008: Proceedings of the XXI congress: Silk road for
information from imagery: the International Society for Photogrammetry and Remote
Sensing, 3-11 July, Beijing, China. Comm. IV, WG IV/9. Beijing: ISPRS, 2008. pp. 1409-
1412.

Manyoky, M. Theiler, P. Steudler, D. and Eisenbeiss, H. 2011. Unmanned aerial vehicle in
cadastral applications. In: UAV-g: Conference on Unmanned Aerial Vehicle in Geomatics.
Zurich, Switzerland, 14-16 September 2011.

Molina, P. Parés, M. Colomina, I. Vitoria, T. Silva, P. Skaloud, J. Kornus, W. Prades, R.
and Aguilera, C. 2012. Drones to the Rescue! Unmanned aerial search missions based on
thermal imaging and reliable navigation. Inside GNSS, 7, pp. 36-47

Mourafetis, G. Apostolopoulos, K. Potsiou, C. and Ioannidis, C. 2015. Enhancing cadastral
surveys by facilitating the participation of owners. Survey Review, 47(344), pp. 316-324.

Nex, F., and Remondino, F. 2014. UAV for 3D mapping applications: a review. Applied
Geomatics, 6(1), pp. 1–15.
NLeSC 2016. Improving Open-Source Photogrammetric Workflows for Processing Big Datasets.
[online] Available at: < https://www.esciencecenter.nl/project/improving-open-source-
photogrammetric-workflows-for-processing-big-datasets > [Accessed 18 April 2016].

Ottichilo, W. and Khamala, E. 2002. Map Updating Using High Resolution Satelite Imagery-A
Case of the Kingdom of Swaziland. International Archives of Photogrammetry and Remote
Sensing, 34(6/W6), pp. 89-92.

Remondino, F., Barazzetti, L., Nex, F., Scaioni, M., and Sarazzi, D. 2011. UAV photogrammetry
for mapping and 3d modeling–current status and future perspectives. International Archives
of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII-
1/C22 UAV-g 2011, pp.25-31

Rinaudo, F. Chiabrando, F. Lingua, A.M. and Spanò, A.T. 2012. Archaeological site monitoring:
UAV photogrammetry can be an answer. International Archives of the Photogrammetry,
Remote Sensing and Spatial Information Sciences, vol. XXXIX n. B5, pp. 583-588

Rothermel, M., Wenzel, K., Fritsch, D., and Haala, N. 2012. SURE: Photogrammetric surface
reconstruction from imagery. In Proceedings LC3D Workshop, Berlin (Vol. 8).

Skarlatos, D. Procopiou, E. Stavrou, G. and Gregoriou, M. 2013. Accuracy assessment of
minimum control points for UAV photography and georeferencing. In: First International
Conference on Remote Sensing and Geoinformation of the Environment (RSCy2013).
Paphos, Cyprus, 8-10 April 2013.

Strecha, C., Zoller, R., Rutishauser, S., Brot, B., Schneider-Zapp, K., Chovancova, V., ... &
Glassey, L. 2015. Quality assessment of 3D reconstruction using fisheye and perspective
sensors. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information
Sciences, Volume II-3/W4, p. 215.

Turner, D. Lucieer, A. and Watson, C. 2012. An automated technique for generating georectified
mosaics from ultra-high resolution Unmanned Aerial Vehicle (UAV) imagery, based on
Structure from Motion (SFM) point clouds. Remote Sensing, 4(5), pp. 1392–1410.

Van der Molen, P. 2015. Rapid urbanisation and slum upgrading: What can land surveyors do?.
Survey Review, 47(343), pp. 231-240.

Vautherin, J., Rutishauser, S., Schneider-Zapp, K., Choi, H. F., Chovancova, V., Glass, A., &
Strecha, C. 2016. Photogrammetric accuracy and modeling of rolling shutter cameras.
ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences,
Volume III-3, p. 139.

Watts, A.C. Ambrosia, V.G. and Hinkley, E.A. 2012. Unmanned aircraft systems in remote
sensing and scientific research: Classification and considerations of use. Remote Sensing,
4(6), pp. 1671–1692.

Wenzel, K., Rothermel, M. and Fritsch, D. 2013. SURE - The ifp software for dense image
matching. Photogrammetric Week '13. Berlin: Wichmann/VDE Verlag.

Zarco-Tejada, P., and Berni, J. 2012. Vegetation monitoring using a micro-hyperspectral imaging
sensor onboard an unmanned aerial vehicle (UAV). In: Proceedings of the EuroCOW 2012,
European Spatial Data Research (EuroSDR), Castelldefels, Spain, 8-10 February 2012.
