Introduction
It is widely recognized that hydraulic construction engineering is an information-intensive and complex industry. Present trends in hydraulic construction engineering have heightened the need for effective and efficient collection, monitoring, and analysis of construction progress data. In recent years, the use of digital video monitoring systems (DVMS) in the surveillance phase of a project has grown rapidly, improving progress control, safety monitoring, and work coordination during the entire project[1].
However, the information contained in the thousands of digital videos and images stored for a project by the DVMS cannot be extracted automatically.
A large number of components and their features need to be inspected on construction sites[2-3]. Many of these features must be assessed against tight tolerances, requiring extremely accurate inspections.
Received: 2008-05-30
1 System Overview
The system, called UDTTS, consists of four parts: a user-defined part, data preprocessing, moving object detection, and tracking. The input data is a video file or a stream of images captured by a stationary digital video camera mounted on a horizontal gantry or on a tripod in a fixed position at the construction site.
1.1 User-defined process
This system can support many aspects of management through a user-defined process. Users can define an application, such as vehicle flow, human flow, or grinding variables, in four steps. Images containing the targets and the static background must be provided to the UDTTS. First, the initial background model is generated when the background image is input; second, a target is defined on a target image captured at the construction site; third, the controlling conditions that the target must satisfy are defined; finally, an output format is defined. The definition of the application is then complete.
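The four-step definition above can be sketched as a simple data structure. This is an illustrative assumption about how such a definition could be represented; the class and field names below are not part of the UDTTS itself:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationDefinition:
    """Illustrative container for a user-defined UDTTS application."""
    background_image: list            # step 1: static background image
    target_template: list             # step 2: target defined on a captured image
    conditions: dict = field(default_factory=dict)  # step 3: controlling conditions
    output_format: str = "csv"        # step 4: user-defined output format

# Hypothetical example: a vehicle-flow application that counts vehicles
# crossing a virtual line, with a minimum blob size as a controlling condition.
app = ApplicationDefinition(
    background_image=[[0]],
    target_template=[[1]],
    conditions={"counting_line_y": 240, "min_blob_area": 50},
    output_format="csv",
)
```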
1.2 Application analysis
Moving targets such as vehicles, humans, and other objects at a construction site have variable colors, sizes, shapes, speeds, and directions. These features can be used to detect and track them. As shown in Fig. 1, an application result can be worked out from a target's trajectory, which consists of its positions at sequential times. The problem is how to determine the position of a target at any time from the stream of color images. In the UDTTS, after the user-defined process, the video captured at the construction site is input and processed. The procedure performs several image processing tasks to detect and track moving objects in the scene. The result can be output in a user-defined format.

Fig. 1 Application analysis
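As an illustration of how an application result can be derived from trajectories, the sketch below counts vehicle flow as the number of trajectories crossing a virtual counting line. The function and parameter names are assumptions made for illustration, not the paper's implementation:

```python
def vehicle_flow(trajectories, line_y):
    """Count trajectories that cross the virtual counting line y = line_y.

    Each trajectory is a list of (x, y) centroid positions at sequential
    frames, as produced by a tracking stage.
    """
    count = 0
    for traj in trajectories:
        for (x1, y1), (x2, y2) in zip(traj, traj[1:]):
            # A sign change of (y - line_y) between two frames means the
            # target crossed the counting line during that interval.
            if (y1 - line_y) * (y2 - line_y) < 0:
                count += 1
                break  # count each trajectory at most once
    return count
```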
2 Tracking Method
The purpose of the tracking part is to detect moving
objects from the video stream and collect appropriate
data of their routes. Tracking is usually performed in
the context of higher-level applications that require the
location and/or shape of the object in every frame.
Typically, assumptions are made to constrain the tracking problem in the context of a particular application.
In its simplest form, tracking can be defined as the
problem of estimating the trajectory of an object in the
image plane as it moves around a scene.
The task of detecting and tracking moving objects in video involves extracting the moving objects (foreground-background separation) and generating corresponding persistent trajectories. When multiple objects are present in the scene, the tracking task is equivalent to solving the correspondence problem: at each frame, a set of trajectories and a set of measured objects (blobs) are available, and each object is identified by finding its matching trajectory.
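A minimal greedy sketch of this correspondence step matches each measured blob to the trajectory whose predicted position is nearest. The constant-velocity prediction and the `max_dist` gate are simplifying assumptions for illustration, not necessarily the paper's exact method:

```python
import math

def match_blobs(trajectories, blobs, max_dist=40.0):
    """Greedy nearest-neighbour solution to the correspondence problem.

    trajectories: list of trajectories, each a list of (x, y) positions.
    blobs: list of (x, y) blob centroids measured in the current frame.
    Returns {trajectory_index: blob_index}; unmatched blobs start new
    trajectories, and matched blobs extend their trajectory in place.
    """
    pairs = []
    for ti, traj in enumerate(trajectories):
        if len(traj) >= 2:
            # Constant-velocity prediction from the last two positions.
            (x1, y1), (x2, y2) = traj[-2], traj[-1]
            pred = (2 * x2 - x1, 2 * y2 - y1)
        else:
            pred = traj[-1]
        for bi, (bx, by) in enumerate(blobs):
            d = math.hypot(bx - pred[0], by - pred[1])
            if d <= max_dist:  # gate out implausibly distant candidates
                pairs.append((d, ti, bi))
    pairs.sort()  # assign closest pairs first
    assigned, used = {}, set()
    for d, ti, bi in pairs:
        if ti not in assigned and bi not in used:
            assigned[ti] = bi
            used.add(bi)
    for bi, blob in enumerate(blobs):
        if bi not in used:          # unmatched blob: a new object entered
            trajectories.append([blob])
    for ti, bi in assigned.items():  # matched blob: extend its trajectory
        trajectories[ti].append(blobs[bi])
    return assigned
```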
2.1 Detection of moving objects
Detection of moving objects in video streams is the
first relevant step of information extraction in many
computer vision applications. Aside from the intrinsic
usefulness of being able to segment video streams into
moving and background components, detecting moving objects provides a focus of attention for recognition, classification, and activity analysis, making these
later steps more efficient.
At the hardware level, color images are usually captured, stored, and displayed using elementary R, G, B component images. The color images read from the frame grabber are transformed to gray-scale images, preserving only the luminance information, in order to reduce the computational load and to guarantee an adequate frame rate (around 10 fps) for tracking. Each incoming frame goes through four successive image processing stages in which the raw intensity data is reduced to a compact set of features that can be used by the matching
SHEN Qiaonan () et alA Target Tracking System for Applications in Hydraulic Engineering
method. These four stages are gray-scale transformation, background subtraction, threshold segmentation, and connected component labeling, as shown in Fig. 2.
Fig. 2 The digital image processing steps
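The four stages can be sketched with NumPy as follows. The luminance weights, the threshold value, and the minimum blob area are illustrative assumptions, not the parameters used in the UDTTS:

```python
import numpy as np
from collections import deque

def detect_blobs(frame_rgb, background_gray, thresh=30.0, min_area=20):
    """Reduce a raw RGB frame to a list of moving-object blobs."""
    # Stage 1: gray-scale transformation (standard luminance weights).
    gray = (0.299 * frame_rgb[..., 0] + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    # Stage 2: background subtraction against the static background model.
    diff = np.abs(gray - background_gray)
    # Stage 3: threshold segmentation into a binary foreground mask.
    fg = diff > thresh
    # Stage 4: connected component labeling (4-connectivity, BFS flood fill).
    labels = np.zeros(fg.shape, dtype=int)
    blobs, next_label = [], 0
    for sy, sx in zip(*np.nonzero(fg)):
        if labels[sy, sx]:
            continue  # pixel already belongs to a labeled component
        next_label += 1
        labels[sy, sx] = next_label
        queue, pixels = deque([(sy, sx)]), []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        if len(pixels) >= min_area:  # discard small noise components
            ys, xs = zip(*pixels)
            blobs.append({
                "centroid": (sum(ys) / len(ys), sum(xs) / len(xs)),
                "area": len(pixels),
                "bbox": (min(ys), min(xs), max(ys), max(xs)),
            })
    return blobs
```

In practice the labeling stage would use a library routine such as `scipy.ndimage.label`; the explicit flood fill here only makes the stage self-contained.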
Fig. 4 Inconsecutive trajectory conditions
As described above, when blobs overlap, the observation of a single merged blob does not allow the trajectories of the original entering blobs to be reconstructed. The merged blob is simply added to each of these trajectories for the later consecutive judgement, and the frame number i and the time at which the crossing happens are recorded. When splitting happens at frame k, direction consistency and relative speed are used to match the blobs to the trajectories based on the kinematic smoothness constraint. In the case of entering or exiting, the blob must be near the boundary of the processing area.
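One way to implement the direction-consistency test after a split is a cosine-similarity check between the trajectory's last motion vector and the candidate displacement. The `min_cos` threshold and the function itself are illustrative assumptions, not the paper's exact criterion:

```python
import math

def direction_consistent(traj, blob, min_cos=0.7):
    """Check whether assigning `blob` to `traj` keeps the motion direction
    consistent with the trajectory's last step (kinematic smoothness)."""
    (x1, y1), (x2, y2) = traj[-2], traj[-1]
    vx, vy = x2 - x1, y2 - y1          # last observed motion vector
    ux, uy = blob[0] - x2, blob[1] - y2  # candidate displacement
    n1, n2 = math.hypot(vx, vy), math.hypot(ux, uy)
    if n1 == 0 or n2 == 0:
        return True  # no motion information; do not reject the match
    # Cosine of the angle between the two vectors; near 1 means the
    # candidate continues in roughly the same direction.
    return (vx * ux + vy * uy) / (n1 * n2) >= min_cos
```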
3 An Example
(Table: comparison of test results and actual situations for exiting and crossing events)
Fig. 5 Tracking results for the video containing multiple moving objects