
Reality Modeling Academy® | Drone Capture

Reality Modeling Drone Capture Guide


Discover the best practices for drone-based photo acquisition to create 3D reality models with
ContextCapture, Bentley’s reality modeling software. Learn the limits of capturing vertical (nadir)
images, techniques for obliques, and how to add robustness to your photogrammetric drone
projects.

Materials
Drones
There are two main categories of drones: fixed-wing unmanned aerial vehicles (UAVs) and
multirotor drones.

Fixed-Wing UAVs (e.g., Topcon Sirius Pro)


These drones are used for medium- or large-scale cartography and terrain modeling. Fixed-wing
UAVs have a high level of autonomy and can quickly cover large distances. However, they usually
cannot capture oblique imagery, which lowers the quality of the reality data output on complex
scenes. Fixed-wing UAVs are optimal for terrain modeling, 2.5D models, and orthophoto
production. These capabilities, along with true 3D reality modeling, are at the core of ContextCapture
technology.

Multirotor Drones (e.g., Topcon Falcon 8, DJI Phantom 4 Pro)


For advanced 3D modeling projects, these drones are required because they can capture oblique
photos. Their autonomy is not as good as that of a fixed-wing UAV, but multirotor drones can
capture the required photos on complex sites.

The quality of the camera mounted on the drone greatly impacts photogrammetric
performance; good 3D modeling requires good photographs. Therefore, the choice of UAV will also
be influenced by the payload it can carry.

For professional use, we would recommend a drone like the DJI Phantom 4 Pro, a similar model, or
any drone capable of holding a good camera. Avoid drones that do not meet these requirements,
such as the DJI Mavic, Spark, Phantom 1, Phantom 2, or Phantom 3.

Cameras
Hardware
The quality of the photogrammetry processing will greatly depend on the quality of the camera
mounted on the drone.

Entry-level professional drones, such as the DJI Phantom 4, usually feature a small camera on a
gimbal. However, these drones’ payload is limited and does not allow them to carry better
cameras.


For greater photo quality, drones like the DJI Matrice and Topcon Falcon 8 can carry bigger cameras,
such as the Sony Alpha 6000 (mirrorless) or the full-frame Sony A7R. For the best results, consider a
medium-format camera such as a Phase One.

Good photogrammetry also requires good optics. Try to avoid long focal lengths, as
photogrammetry tends to be unstable due to the narrow boresight angles between consecutive
photos. We recommend prime (fixed focal length) lenses with focal lengths in the 15 to 25 millimeter range.

Note that it is better to capture stills rather than videos for photogrammetry.

Calibration
Beyond hardware, we recommend that you input accurate camera calibration values into
ContextCapture. Even though camera calibration is part of aerotriangulation, we recommend pre-
calibrating the camera on an easy project. Once the camera is robustly calibrated, the parameters
can be reused on other, more complex projects.

A further aerotriangulation on complex projects, based on a pre-calibrated camera, is more
efficient than one run without any initial parameters.

Below are examples of aerotriangulation on a single dataset with (Figure 1) and without (Figure 2)
initial calibration parameters.

Figure 1: Aerotriangulation 3D view on a calibrated camera

Figure 2: Aerotriangulation 3D view on a non-calibrated camera (no initial calibration parameters)

The spherical effects seen above are more pronounced with nadir-only acquisitions.
Prior camera calibration is therefore required for vertical-view patterns and highly recommended for
any type of acquisition. This one-time calibration takes approximately 10 minutes, and the result can be
reused for future complex projects. Here is the step-by-step procedure:

1. Choose a small, stationary asset that you can walk around and shoot from any angle to
run a robust calibration. The asset should be highly textured, such as a statue, to be
perfectly suited for camera calibration (Figure 3).
2. Set up the camera that you will use for your real projects in real conditions, with the
same image format and the same focal length. Only the camera matters here: if you
use your camera mounted under a drone, it is not mandatory to run a drone flight for
the calibration. You can simply dissociate the camera from the drone, run the calibration,
and reuse the camera parameters once the camera is back under the drone.

3. Walk 360 degrees around the object/statue and shoot around 30 images, equally
spaced from each other (Figure 4).

Figure 3: Scene suited for camera calibration Figure 4: 3D-view of camera calibration stage

4. Start ContextCapture, create a new project, and submit an aerotriangulation of the photos
you just captured, using the default settings. Once it completes, your camera is calibrated.
5. Save these camera parameters by going to the “Photos” tab and adding your
calibrated camera to the camera database (Figure 5).

Figure 5: Add a custom camera to the camera database

6. Once added, the calibration values will be applied automatically every time you add new
pictures from this camera, and you can use them for further aerotriangulations.
7. For further aerotriangulations starting from already calibrated values, set the radial
distortion setting to “Keep” and make sure that your accurate calibration is properly
used (Figure 6).

Figure 6: Force usage of robust camera parameters

Battery
Drone acquisitions can require capturing thousands of photos. Therefore, it is crucial to
estimate the number of batteries needed for a project, as missing photographs will affect the quality
of the final 3D model.
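
A rough battery estimate can be derived from the planned number of photos and the usable flight time per battery. Below is a minimal sketch; the photo count, capture interval, and usable flight time are illustrative assumptions, not measured values for any specific drone.

```python
# Rough battery estimate: a sketch under assumed example values.
import math

n_photos = 2000            # planned photos for the mission (assumption)
capture_interval_s = 3.0   # seconds per photo, including travel (assumption)
usable_flight_s = 18 * 60  # usable flight seconds per battery, after
                           # deducting a take-off/landing reserve (assumption)

mission_time_s = n_photos * capture_interval_s
batteries = math.ceil(mission_time_s / usable_flight_s)
print(f"Mission time: {mission_time_s / 60:.0f} min -> {batteries} batteries")
```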

Ground Control Points


If accurate georeferencing is important for your photogrammetric project, ground control points
(GCPs) will be required. Depending on your image resolution, we recommend spacing ground control
points about 20,000 pixels apart.

Example: For a drone acquisition at 2 centimeters/pixel ground resolution: 0.02 m/pixel × 20,000 pixels = 400 meters.

The recommended spacing between neighboring GCPs is therefore around 400 meters.
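
This rule of thumb is easy to script. A minimal sketch, taking only the ground resolution in meters per pixel as input:

```python
# GCP spacing rule of thumb: spacing (m) = ground resolution (m/px) * 20,000 px.
def gcp_spacing_m(ground_resolution_m_per_px: float, pixels: int = 20_000) -> float:
    """Recommended distance in meters between neighboring GCPs."""
    return ground_resolution_m_per_px * pixels

print(gcp_spacing_m(0.02))  # 2 cm/pixel -> 400.0 meters
```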

Ground control points are targets that are visible from the sky (Figures 7 and 8) and measured
with survey equipment on the ground (e.g., a total station).


Figure 7: Chessboard ground control point Figure 8: Aero propeller ground control point

Beyond georeferencing, ground control points will also help ensure aerotriangulation robustness.

GPS and IMU Sensors


We also recommend embedding global positioning system (GPS) and inertial measurement unit
(IMU) sensors on your drone. Initial GPS information helps with geo-registration and scaling.
Combined with IMU data, you get full pose metadata that facilitates:

1. Aerotriangulation: an initial guess is available, so computation can be lighter and faster.
2. Ground control point registration: reliable imagery selection and pointing suggestions are
set up automatically.

However, not all sensors are equal. Depending on their accuracy, they help computation in
different ways. For GPS sensors, there are two groups of options: basic, or real-time kinematic (RTK)/
post-processed kinematic (PPK). For IMU sensors, the options are basic or high-end.
Below is a summary of the influence of GPS and IMU sensors, depending on their type.

| Configuration | Geo-registration accuracy | Benefits | Comments |
| --- | --- | --- | --- |
| No GPS + no IMU (+ potential GCPs) | Not georeferenced, arbitrary scale (around 1 centimeter with GCPs) | None | If geo-registration is important, ground control points must be used. |
| Basic GPS + no IMU (+ potential GCPs) | Approximately 1-2 meters (around 1 centimeter with GCPs) | Rough georeferencing; slight help at the aerotriangulation stage | Recommended for small acquisitions where knowing the location of the site is important but scale and geo-registration accuracy are not a concern. |
| Basic GPS + basic IMU (+ potential GCPs) | Approximately 1-2 meters (around 1 centimeter with GCPs) | Rough georeferencing; slight help at the aerotriangulation stage; important help for ground control point registration | Recommended for small acquisitions where knowing the location of the site is important but scale and geo-registration accuracy are not a concern. The GPS+IMU combination helps with ground control point registration, even though computation won't go faster. |
| RTK/PPK GPS + basic IMU (+ potential GCPs) | Around 5 centimeters (around 1 centimeter with GCPs) | High-accuracy geo-registration; enables "Adjust on positions" mode; important help for ground control point registration | Recommended for any acquisition where absolute accuracy is expected, especially if setting GCPs is challenging. The GPS+IMU combination helps with ground control point registration, even though computation won't go faster. |
| RTK/PPK GPS + high-end IMU (+ potential GCPs) | Around 5 centimeters (around 1 centimeter with GCPs) | High-accuracy geo-registration; enables "Adjust on positions" mode; enables adjustment on initial poses; important help for ground control point registration | Recommended for any acquisition where absolute accuracy is expected, especially if setting GCPs is challenging. The GPS+IMU combination helps with ground control point registration, and computation will go faster. |

Flight Planning
Data capture for photogrammetry requires a well-defined flight plan, a “3D-fly,” that captures
good photos with various camera angles for 3D model processing.

We do not recommend manual flights for photogrammetry projects.

The flight planner must be able to execute complex flight plans, such as:

• Orbital flights around points of interest.
• Linear flights along a given axis.
• Camera angle adjustments along a given axis.

The flight plan must be prepared in advance, considering the flight speed (not too fast) and flight
height (not too high) to avoid blurry images.


Flight Patterns – Best Practices


Limits of Vertical (Nadir) Grids
Nadir grids are often used to capture photos of a site. They are a quick and easy way to cover large
areas while limiting the total number of acquired photos.

For photogrammetry to process nadir grids, sufficient overlap is required. We recommend a 70
percent overlap along a flight line and a 60 percent overlap between photos from different flight
lines.
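
From these overlap figures, the shutter trigger distance and flight-line spacing follow directly from the image footprint on the ground. The sketch below uses example sensor dimensions and focal length as assumptions; substitute your own camera values.

```python
# Derive trigger distance and line spacing from overlap and footprint.
def footprint_m(sensor_mm: float, focal_mm: float, height_m: float) -> float:
    """Ground footprint of one image dimension at a given flight height."""
    return sensor_mm / focal_mm * height_m

height = 60.0                              # flight height in meters (assumption)
along = footprint_m(15.6, 20.0, height)    # 15.6 mm sensor height (assumption)
across = footprint_m(23.5, 20.0, height)   # 23.5 mm sensor width (assumption)

trigger_distance = along * (1 - 0.70)      # 70% overlap along a flight line
line_spacing = across * (1 - 0.60)         # 60% overlap between flight lines
print(f"Trigger every {trigger_distance:.1f} m; lines {line_spacing:.1f} m apart")
```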

However, the results obtained with such patterns are limited. Some reasons include:

Poor Photo Resolution on Vertical Elements

With a vertical grid, all the photos look straight down. Resolution is high on horizontal surfaces, but
pixels are stretched on the vertical parts. This stretching induces inaccuracies on all those elements,
leading to a poor reconstruction and even holes in the 3D model.

Similar Successive Points of View

Since all the photos look at the scene from the same angle, the boresight angle difference between
them is very small. This similarity creates a large uncertainty when the photos are used to extract
3D information, especially along the z-axis.

Many Masks

The lack of variety in the points of view leaves areas under trees or overhanging structures
completely masked, so they cannot be reconstructed.

Comparison of Nadir vs. Nadir + Oblique


Considering the concerns with nadir acquisition, it is important to stress the necessity of oblique
captures. They increase both aerotriangulation robustness and the quality of the mesh.

Below are comparisons of scenes captured in the two configurations (Figures 9, 10, and 11),
showing nadir + oblique (left side) versus nadir only (right side).


Figure 9: Oblique + Nadir vs. Nadir only

Figure 10: Oblique + Nadir vs. Nadir only


Figure 11: Oblique + Nadir vs. Nadir only

Oblique Grid
Considering Figures 9, 10, and 11, as well as the inability of fixed wing UAVs to adopt complex flight
plans, the “oblique-grid” method can be a good compromise.

This method consists of flying the drone in four directions with a maximum oblique angle of 30
degrees to create a grid of oblique photos.

The setup consists of positioning the camera at an oblique angle (looking forward) and flying the
drone back and forth while following parallel flight lines along one axis. Then, you repeat the same
process along the perpendicular axis.

This practice will generate obliques looking in four directions, creating a robust acquisition pattern.
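
As an illustration, here is a minimal waypoint sketch for this pattern in a local meter frame. The function, coordinate frame, and heading convention are assumptions for illustration; in practice, a flight planning application would generate this pattern.

```python
# Oblique grid: two perpendicular passes of parallel lines, camera tilted
# forward, lines flown in alternating directions -> obliques in 4 directions.
def oblique_grid(width_m: float, length_m: float, spacing_m: float,
                 tilt_deg: float = 30.0):
    """Return (start_xy, end_xy, heading_deg, tilt_deg) for each flight line."""
    lines = []
    x, i = 0.0, 0
    while x <= width_m:  # pass 1: lines parallel to the y-axis
        fwd = i % 2 == 0
        lines.append((((x, 0.0) if fwd else (x, length_m)),
                      ((x, length_m) if fwd else (x, 0.0)),
                      0.0 if fwd else 180.0, tilt_deg))
        x, i = x + spacing_m, i + 1
    y, i = 0.0, 0
    while y <= length_m:  # pass 2: lines parallel to the x-axis
        fwd = i % 2 == 0
        lines.append((((0.0, y) if fwd else (width_m, y)),
                      ((width_m, y) if fwd else (0.0, y)),
                      90.0 if fwd else 270.0, tilt_deg))
        y, i = y + spacing_m, i + 1
    return lines
```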

Figure 12: Oblique grid (Top view)


Note: In the same flight, you will capture obliques looking in opposite directions. To ensure good
overlap for photogrammetry, the two flight lines capturing obliques in the same direction (every
second line) should overlap by about 70 percent.

Overlapping Orbits
Overlapping orbits is a great technique to capture a complex site in full 3D. It is simple to execute and
will ensure a great robustness in the photogrammetric process.

This technique consists of capturing orbits over the area of interest with the camera pointing towards
the center of the orbit with a 45-degree oblique angle. The area will be covered with orbits that
overlap. We recommend a minimum 50 percent overlap between the orbits’ diameters (Figure 13
and Figure 14).

We recommend that successive photos have a maximum angle difference of 15 degrees, meaning
that a complete orbit should be captured with at least 24 photos. More photos can be useful when
capturing thin elements, especially when capturing complex sites like plants or substations.
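
To make the 15-degree rule concrete, here is a minimal sketch computing capture positions for one orbit in a local meter frame. Angles use the mathematical convention (counterclockwise from the x-axis) rather than compass headings; the function and frame are hypothetical.

```python
# Capture positions for one orbit, at most 15 degrees between photos.
import math

def orbit_waypoints(center_xy, radius_m, height_m, max_step_deg=15.0):
    """Return (x, y, z, yaw_deg) positions; yaw looks back at the center."""
    n = max(24, math.ceil(360.0 / max_step_deg))  # >= 24 photos per orbit
    cx, cy = center_xy
    points = []
    for i in range(n):
        a = math.radians(i * 360.0 / n)
        x, y = cx + radius_m * math.cos(a), cy + radius_m * math.sin(a)
        yaw = (math.degrees(a) + 180.0) % 360.0   # point toward the center
        points.append((x, y, height_m, yaw))
    return points
```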

Figure 13: Overlapping orbit (top view) Figure 14: Overlapping orbit (3D view)

Additional orbits at lower altitudes might be necessary to capture more detail on parts of the site.
The orbit diameter and height can be easily calculated. However, it is preferable to use a flight
planning application that can generate this pattern automatically, such as Drone Harmony.
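
For reference, the orbit radius that makes the camera aim at the center follows from simple trigonometry. A minimal sketch, assuming the 45-degree oblique angle recommended above:

```python
# radius = height above target / tan(depression angle); at 45 degrees,
# the radius simply equals the height above the target.
import math

def orbit_radius_m(height_above_target_m: float, depression_deg: float = 45.0) -> float:
    return height_above_target_m / math.tan(math.radians(depression_deg))

print(orbit_radius_m(50.0))  # 45 degrees -> radius equals height: 50.0 m
```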

Processing a Large Dataset


In the case of a massive drone acquisition, we recommend splitting the global dataset into smaller
parts to avoid memory overflows and ensure robustness. After processing, the blocks are merged
back together to create a seamless reconstruction. We recommend not exceeding 10,000 images
per block. Here is the method to ensure seamless borders between sub-blocks (a splitting sketch
follows the list):

1. Run your acquisition, choosing the most suitable flight plan.
2. Capture ground control points (spaced about 20,000 pixels apart, as described above).
3. Split your massive block into sub-blocks of at most 10,000 images. At this stage, it is very
important to make sure that neighboring blocks share GCPs.
4. Register the GCPs in your images and run an aerotriangulation on each block.
5. Merge the aerotriangulated blocks.
6. Run a single reconstruction.
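
As an illustration of step 3, here is a minimal splitting sketch. The Photo structure and single-axis banding are assumptions for illustration (real datasets may need 2D tiling); the overlap band is set to roughly one GCP spacing so that neighboring blocks share GCPs.

```python
# Split a large photo set into sub-blocks of about 10,000 images, with an
# overlap band so neighboring blocks share the GCPs located in that band.
from dataclasses import dataclass

MAX_BLOCK_SIZE = 10_000   # recommended upper limit per block
OVERLAP_M = 400.0         # about one GCP spacing (assumption)

@dataclass
class Photo:
    name: str
    easting: float        # projected X coordinate from the drone GPS log

def split_into_blocks(photos: list[Photo]) -> list[list[Photo]]:
    """Band the photos along the easting axis; each band is extended by
    OVERLAP_M so neighboring blocks overlap (and may therefore slightly
    exceed MAX_BLOCK_SIZE)."""
    photos = sorted(photos, key=lambda p: p.easting)
    blocks = []
    for start in range(0, len(photos), MAX_BLOCK_SIZE):
        core = photos[start:start + MAX_BLOCK_SIZE]
        lo = core[0].easting - OVERLAP_M
        hi = core[-1].easting + OVERLAP_M
        blocks.append([p for p in photos if lo <= p.easting <= hi])
    return blocks
```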

Figure 15: Ground control points and sub-blocks extraction
