Materials
Drones
There are two main categories of drones: fixed wing unmanned aerial vehicles (UAVs) and
multirotor drones.
The quality of the camera mounted on the drone will greatly impact photogrammetric
performance; good 3D modeling requires good photographs. Therefore, the choice of UAV will also
be influenced by the payload that it can carry.
For professional use, we would recommend a drone like the DJI Phantom 4 Pro, a similar model, or
any drone capable of holding a good camera. Avoid drones that do not meet these requirements,
such as the DJI Mavic, Spark, Phantom 1, Phantom 2, or Phantom 3.
Cameras
Hardware
The quality of the photogrammetry processing will greatly depend on the quality of the camera
mounted on the drone.
Entry-level professional drones, such as the DJI Phantom 4, usually feature a small camera on a
gimbal. However, these drones' payload is limited and does not allow them to carry better cameras.
2018
Reality Modeling Academy® | Drone Capture
For greater photo quality, drones like the DJI Matrice and Topcon Falcon 8 can carry bigger cameras,
such as the Sony Alpha 6000 (hybrid) or Sony A7R (full-frame mirrorless). For the best results, consider a
medium-format camera, such as a Phase One.
Good photogrammetry also requires good optics. Try to avoid a long focal length, as
photogrammetry tends to be unstable due to the narrow boresight angles between consecutive
photos. We recommend prime lenses (fixed focal) with a focal length range of 15 to 25 millimeters.
Note that it is better to capture stills rather than videos for photogrammetry.
Calibration
Beyond hardware, we recommend that you input accurate camera-calibration values in
ContextCapture. Even though calibrating cameras is part of aerotriangulation, we recommend
pre-calibrating the camera on an easy project. Once the camera is robustly calibrated, the
parameters can be used on other, more complex projects.
Below are examples of aerotriangulation on a single dataset with (Figure 1) and without (Figure 2)
initial calibration parameters.
Figure 1: 3D view on a calibrated camera Figure 2: Aerotriangulation without initial calibration parameters
The spherical effects seen above will be more pronounced with nadir-only acquisitions.
Prior camera calibration will be required for vertical-view patterns and highly recommended for
any type of acquisition. This one-time calibration will take approximately 10 minutes and can be
reused for future complex projects. Here is the step-by-step procedure:
1. Choose a small, stationary asset that you can walk around and that can be shot from any angle,
to run a robust calibration. The asset should be highly textured, such as a statue, so that it is
perfectly suited for camera calibration (Figure 3).
2. Get the camera that you will use for your real projects and set it up in real conditions, with the
same image format and the same focal length. Only the camera matters: if you use your camera
mounted under a drone, it is not mandatory to run a drone flight for the calibration. You can
simply detach the camera from the drone, run the calibration, and reuse the camera
parameters once it is back under the drone.
3. Turn 360 degrees around the object/statue and shoot around 30 images that are equally
spaced from each other (Figure 4).
Figure 3: Scene suited for camera calibration Figure 4: 3D-view of camera calibration stage
4. Start ContextCapture, create a new project, and submit an aerotriangulation on the photos
that you just captured with the default settings. Once completed, your camera is calibrated.
5. You must save these camera parameters by going to the “Photos” tab and adding your
calibrated camera to the camera database (Figure 5).
6. Once completed, the calibration values will be automatically applied every time you add new
pictures from this camera, and you can use them for further aerotriangulation.
7. For further aerotriangulations starting from already calibrated values, you must set the
radial distortion setting to “Keep” and make sure that your accurate calibration is properly applied.
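As a sanity check on the capture geometry in step 3, thirty photos around a full circle means one photo every 12 degrees. A minimal sketch of the corresponding camera positions (the 3 m radius is a hypothetical value; pick one that keeps the whole asset in frame):

```python
import math

def calibration_positions(radius_m, n_photos=30):
    """Evenly spaced camera positions on a circle around the calibration asset."""
    step = 360.0 / n_photos  # 12 degrees between consecutive photos
    return [(radius_m * math.cos(math.radians(i * step)),
             radius_m * math.sin(math.radians(i * step)))
            for i in range(n_photos)]

positions = calibration_positions(3.0)  # 3 m from the statue (assumed)
```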
Battery
Capturing a site with a drone can require thousands of photos. Therefore, it is crucial to
estimate the number of batteries needed for a project, as missing photographs will affect the quality
of the final 3D model.
Ground Control Points
Ground control points (GCPs) are targets that are visible from the sky (Figures 7 and 8) and measured
with survey equipment on the ground (e.g., a total station).
Figure 7: Chessboard ground control point Figure 8: Aero propeller ground control point
Beyond georeferencing, ground control points will also help ensure aerotriangulation robustness.
However, not all sensors are equal. Depending on their quality, they help the computation in
different ways. For GPS sensors, there are two groups of options: basic, or real-time kinematic (RTK)
and post-processed kinematic (PPK). For IMU sensors, the options are either basic or high-end sensors.
Below is a synthesis of the influence of GPS+IMU sensors, depending on their type.
Flight Planning
Data capture for photogrammetry requires a well-defined flight plan: to process good 3D models, the
drone must “3D-fly” the scene and capture photos from various camera angles.
The flight planner must be able to execute complex flight plans, such as those described in the sections below.
The flight plan must be prepared in advance, considering the flight speed (not too fast) and the
flight height (not too high) to avoid blurry images.
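Both constraints can be quantified. Flight height sets the ground sample distance (GSD), and ground speed times exposure time must stay well under one GSD to avoid motion blur. A sketch under assumed camera values (24 mm lens, 4 µm pixels, 0.5 px blur budget):

```python
def ground_sample_distance(height_m, focal_mm, pixel_pitch_um):
    """Ground footprint of one pixel, in metres, for a nadir view."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

def max_ground_speed(height_m, focal_mm, pixel_pitch_um, exposure_s,
                     max_blur_px=0.5):
    """Fastest ground speed that keeps motion blur under max_blur_px pixels."""
    gsd = ground_sample_distance(height_m, focal_mm, pixel_pitch_um)
    return max_blur_px * gsd / exposure_s

gsd = ground_sample_distance(60, 24, 4)        # 0.01 m per pixel at 60 m
speed = max_ground_speed(60, 24, 4, 1 / 1000)  # 5.0 m/s at 1/1000 s exposure
```

Halving the flight height halves the GSD, and therefore halves the speed allowed for the same blur budget.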
However, the results obtained with such patterns are limited. Some reasons include:
With a vertical grid, all the photos look straight down. Resolution is high on horizontal surfaces, but
pixels are stretched on the vertical parts. This induces inaccuracies on all vertical elements,
leading to a poor reconstruction and even holes in the 3D model.
Since all the photos look at the scene from the same angle, the boresight angle difference is very
small. This similarity creates a big uncertainty when photos are used to extract 3D information,
especially along the z-axis.
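The z-axis uncertainty can be illustrated with the standard two-view depth-error model, sigma_z ≈ z² · sigma_px / (f · B), where B is the baseline between two camera positions and f the focal length in pixels. The numbers below are hypothetical, chosen only to show the trend:

```python
def depth_uncertainty_m(depth_m, baseline_m, focal_px, matching_err_px=1.0):
    """Two-view depth error: sigma_z = z^2 * sigma_px / (f * B)."""
    return depth_m ** 2 * matching_err_px / (focal_px * baseline_m)

# Same flight height (100 m) and camera (focal length 5000 px):
narrow = depth_uncertainty_m(100, baseline_m=5, focal_px=5000)   # 0.4 m
wide = depth_uncertainty_m(100, baseline_m=30, focal_px=5000)    # ~0.07 m
```

Widening the baseline from 5 m to 30 m cuts the depth uncertainty from 40 cm to under 7 cm, which is why varied camera angles matter.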
Many Masks
With no variation in the point of view, areas covered by trees or overhanging structures are masked
in every photo and cannot be properly reconstructed.
Below are comparisons of scenes captured in two configurations (Figures 9, 10, and 11). These
images show nadir and oblique images (left side) versus nadir-only images (right side).
Oblique Grid
Considering Figures 9, 10, and 11, as well as the inability of fixed-wing UAVs to adopt complex flight
plans, the “oblique-grid” method can be a good compromise.
This method consists of flying the drone in four directions with a maximum oblique angle of 30
degrees to create a grid of oblique photos.
The setup consists of positioning the camera at an oblique angle (looking forward) and flying the
drone back and forth while following parallel flight lines along one axis. Then, you repeat the same
process along the perpendicular axis.
This practice will generate obliques looking in four directions, creating a robust acquisition pattern.
Note: In the same flight, alternating flight lines capture obliques looking in opposite directions. To
ensure good overlap for photogrammetry, the two flight lines capturing obliques in the same direction
(every second line) should have an overlap of about 70 percent.
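The 70 percent figure translates into flight-line spacing as follows. The footprint width is whatever the oblique camera covers on the ground at your flight height; the 120 m used below is a hypothetical value:

```python
def oblique_grid_spacing(footprint_width_m, overlap=0.70):
    """Flight-line spacing for an oblique grid.

    Same-direction obliques come from every second line, so those lines sit
    (1 - overlap) * footprint apart; adjacent lines (looking in opposite
    directions) are half that distance apart.
    """
    same_direction_m = (1 - overlap) * footprint_width_m
    adjacent_m = same_direction_m / 2
    return same_direction_m, adjacent_m

same, adjacent = oblique_grid_spacing(120.0)  # 36 m and 18 m respectively
```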
Overlapping Orbits
Overlapping orbits is a great technique to capture a complex site in full 3D. It is simple to execute and
will ensure a great robustness in the photogrammetric process.
This technique consists of capturing orbits over the area of interest with the camera pointing towards
the center of the orbit with a 45-degree oblique angle. The area will be covered with orbits that
overlap. We recommend a minimum 50 percent overlap between the orbits’ diameters (Figure 13
and Figure 14).
We recommend that successive photos have a maximum angle difference of 15 degrees, meaning
that a complete orbit should be captured with at least 24 photos. More photos can be useful when
capturing thin elements, especially when capturing complex sites like plants or substations.
Figure 13: Overlapping orbit (top view) Figure 14: Overlapping orbit (3D view)
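The 24-photo minimum follows directly from the 15-degree rule:

```python
import math

def photos_per_orbit(max_angle_deg=15.0):
    """Minimum photo count keeping consecutive shots within max_angle_deg."""
    return math.ceil(360.0 / max_angle_deg)

photos_per_orbit()      # 24 photos at 15 degrees
photos_per_orbit(10.0)  # 36 photos for thin or complex structures
```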
Additional orbits at lower altitudes might be necessary to capture more detail on parts of the site.
The orbit diameter and height can be easily calculated. However, it is preferable to use a flight
planning application that can generate this pattern automatically, such as Drone Harmony.
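A minimal sketch of that calculation: with the camera tilted 45 degrees toward the centre, the orbit radius equals the flight height above the area of interest, and a 50 percent diameter overlap sets the spacing between neighbouring orbit centres. The 80 m height is a hypothetical value:

```python
import math

def orbit_layout(height_m, oblique_deg=45.0, diameter_overlap=0.50):
    """Orbit radius for a camera tilted toward the centre, and the spacing
    between neighbouring orbit centres for the given diameter overlap."""
    radius_m = height_m * math.tan(math.radians(oblique_deg))
    centre_spacing_m = 2 * radius_m * (1 - diameter_overlap)
    return radius_m, centre_spacing_m

radius, spacing = orbit_layout(80.0)  # at 45 deg the radius equals the height
```

At 80 m, each orbit has an 80 m radius and neighbouring centres sit 80 m apart; a flight-planning app can generate the same pattern automatically.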
2. Capture ground control points (one GCP for every 20,000 pixels).
3. Split your massive block into sub-blocks of 10,000 images. At this stage, it is very important to
make sure that neighboring blocks share GCPs.
4. Register GCPs in your images and run an aerotriangulation on each of the blocks.
5. Merge the aerotriangulated blocks.
6. Run a single reconstruction.
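A minimal sketch of the splitting in step 3, assuming photos are ordered so that consecutive IDs are spatial neighbours; overlapping the ranges ensures neighbouring blocks see common photos, and therefore common GCPs (the 500-photo overlap is a hypothetical value to tune per project):

```python
def split_block(photo_ids, max_size=10000, shared=500):
    """Split a large photo list into sub-blocks of at most max_size images,
    with `shared` images duplicated between neighbouring blocks."""
    step = max_size - shared
    blocks = []
    for start in range(0, len(photo_ids), step):
        blocks.append(photo_ids[start:start + max_size])
        if start + max_size >= len(photo_ids):
            break
    return blocks

blocks = split_block(list(range(25000)))  # 3 sub-blocks sharing 500 photos
```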