FOV = (x, y)^T, (2)

where x_c − tan(a/2) d ≤ x ≤ x_c + tan(a/2) d and y_c − tan(a/2) d ≤ y ≤ y_c + tan(a/2) d. After the new camera position is translated to the origin (0, 0), the new FOV is denoted by
Fig. 1. FOV model of a camera (camera position P(x_c, y_c), field-of-view half-angle a, origin (0, 0))
FOV′ = (x′, y′)^T (3a)
= (x − x_c, y − y_c)^T. (3b)

After the triangle is rotated counterclockwise by θ degrees about the coordinate origin, the new FOV coordinates are as follows:

FOV″ = (x″, y″)^T (4a)
= (x′ cos θ − y′ sin θ, y′ cos θ + x′ sin θ)^T. (4b)
Thus,

x″ ≤ d, (5)

−tan(a/2) x″ ≤ y″ ≤ tan(a/2) x″. (6)
Equation (7) is obtained from Equations (2), (3), (4), and (5):

(x − x_c) cos θ − (y − y_c) sin θ ≤ d. (7)
Equation (8) is obtained from Equations (2), (3), (4), and (6):

−tan(a/2) {(x − x_c) cos θ − (y − y_c) sin θ} ≤ (y − y_c) cos θ + (x − x_c) sin θ ≤ tan(a/2) {(x − x_c) cos θ − (y − y_c) sin θ}. (8)
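As a minimal sketch, Equations (5) through (8) amount to a point-in-FOV test. The function name `in_fov` is ours, as is the extra constraint 0 ≤ x″ (i.e., the visible triangle lies in front of the camera apex), which the equations leave implicit:

```python
import math

def in_fov(x, y, xc, yc, theta, d, a):
    """Test whether point (x, y) lies in the FOV of a camera at (xc, yc)
    rotated counterclockwise by theta, with viewing distance d and view
    angle a (Equations 7 and 8)."""
    # Translate by (-xc, -yc) (Eq. 3) and rotate by theta (Eq. 4).
    xpp = (x - xc) * math.cos(theta) - (y - yc) * math.sin(theta)
    ypp = (y - yc) * math.cos(theta) + (x - xc) * math.sin(theta)
    half = math.tan(a / 2)
    # Eq. 5: x'' <= d (we also assume 0 <= x'', in front of the apex);
    # Eq. 6: -tan(a/2) x'' <= y'' <= tan(a/2) x''.
    return 0 <= xpp <= d and -half * xpp <= ypp <= half * xpp
```

For example, with a camera at the origin facing along the x-axis (θ = 0, d = 10, a = 90°), the point (5, 0) is inside the FOV, while (11, 0) is beyond the viewing distance.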
In order to calculate the coverage of a camera considering the detection ratio of moving objects, the space is divided into accessible areas and inaccessible areas, such as walls and obstacles, in the FOV of the camera. The utility of an area is measured by the coverage ratio of FOVs. Thus, the utility of an accessible area, U_acc, is calculated as the ratio between the FOV and the space as follows:

U_acc(P) = R_acc(V) / R_acc(S), (9)
where P is a camera position, V is an FOV, S is a space, and R_acc is the ratio of accessible areas to the overall space.
The utility of paths is calculated from the number of paths traversed by agents. Let R_path be the ratio of path grid cells of the area to the overall grid cells on paths. Thus, the utility of paths extracted by the A* algorithm, U_path, is calculated by

U_path(P) = P_path · R_path(V) / R_path(S), (10)

where P_path is the path probability, determined by the probability extracted from the area layers.
Similarly, the utility of expansion paths, U_exp, is calculated from U_path and the number of expansion paths as follows:

U_exp(P) = U_path(P) / N, (11)

where N is the number of expansion paths.
The utility of inaccessible areas, U_inacc, should also be considered when calculating the coverage of a camera. After determining the inaccessible areas, such as walls and obstacles, in the FOV of the camera, U_inacc is calculated by

U_inacc(P) = R_inacc(V) / R_inacc(S). (12)
Finally, the utility U(P) of each camera position is calculated from the four priorities as follows:

U(P) = U_acc(P) + U_path(P) + U_exp(P) − U_inacc(P). (13)
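The combination of Equations (9) through (13) can be sketched as below, assuming the area and path ratios have already been measured on the grid map; all parameter names are our own:

```python
def utility(r_acc_v, r_acc_s, p_path, r_path_v, r_path_s,
            n_exp, r_inacc_v, r_inacc_s):
    """Total utility U(P) of a camera position P from Equations 9-13,
    given ratios measured over the FOV (V) and the space (S)."""
    u_acc = r_acc_v / r_acc_s                # Eq. 9: accessible areas
    u_path = p_path * r_path_v / r_path_s    # Eq. 10: paths, weighted by P_path
    u_exp = u_path / n_exp                   # Eq. 11: expansion paths
    u_inacc = r_inacc_v / r_inacc_s          # Eq. 12: inaccessible areas
    return u_acc + u_path + u_exp - u_inacc  # Eq. 13
```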
B. Camera Placement Method
The utility of visibility at each location on the 2D grid map varies depending upon the space utility values of cells in the FOV of the camera, as well as the performance and cost of the camera. The space utility of each cell on the 2D grid map is expressed as a numerical value based on the amount of movement of an agent in a specified area of the 2D grid map and the probability that the agent moves from a starting point to a destination.
In this paper, a greedy algorithm is used to select the locations at which cameras are to be placed. Using the greedy algorithm, a priority area is extracted from the space priority contained in the area attribute of the space model and from the paths obtained using an agent path-finding algorithm. Then camera locations are selected in the extracted priority area. Through the greedy strategy, all candidate points on the space model are examined to find the point that has the highest utility in the FOV of a camera.
The utility of visibility is set to the maximum value that can be obtained at a specified point on the grid map among the sums of space utility values of cells within a virtual FOV of a camera over all directions and angles. That is, the utility of visibility is calculated based on the maximum space utility coverage that is obtained at the position of a specified cell on the grid map.
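That maximization over directions and angles can be sketched as follows; the `space_utility` mapping and the `cells_in_fov` callback (which enumerates the grid cells inside a virtual FOV for one direction and angle) are hypothetical helpers, not part of the original method's interface:

```python
def visibility_utility(pos, space_utility, directions, angles, cells_in_fov):
    """Utility of visibility at grid position `pos`: the maximum, over all
    candidate directions and view angles, of the summed space utility
    values of the cells covered by the virtual FOV."""
    best = 0.0
    for theta in directions:
        for a in angles:
            covered = cells_in_fov(pos, theta, a)
            best = max(best, sum(space_utility[c] for c in covered))
    return best
```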
From all cells on the utility map of visibility, the cells in which a camera is to be placed are selected in order of highest to lowest utility of visibility. Then, a predetermined range around the position of a selected cell on the utility map of visibility is reconfigured upon the installation of the camera. If the cameras cover, in their FOVs, all space priorities updated on the priority layer of the input space, the camera placement is completed. However, the method based on space utility coverage does not ensure the optimal placement of the cameras. Thus, we considered (1) a placement cost limit and (2) a minimum coverage of the observable space to determine the optimal number of cameras to be placed.

Fig. 2. Experimental environments and a simulation application

The placement cost limit is the maximum total cost, and the maximum number of cameras is placed such that this limit is not exceeded. The minimum space coverage is the ratio of the observable space to the overall space, and the minimum number of cameras is placed such that the minimum space utility coverage is satisfied. The number of cameras to be placed may be calculated based on resources or on the range of the area that can be monitored by a camera, as summarized in Algorithm 1.
Algorithm 1 Camera placement algorithm
1) Create the utility map of visibility based on the greedy
strategy by the cost of a camera and space coverage of
the FOV of a camera in the entire map space.
2) Select the point having the highest utility of visibility value on the utility map of visibility.
3) Place the camera at the selected point and update a
camera placement list.
4) Terminate camera placement when it is determined that
an optimal number of cameras has been placed.
5) Recalculate the utility of visibility around the position
of the camera on the utility map of visibility.
6) Return to step 2.
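The steps above can be sketched as a greedy loop; the single camera cost, the cost-limit termination test (criterion (1) from the previous subsection), and the `recalc_around` callback standing in for step 5 are our simplifying assumptions:

```python
def greedy_placement(visibility_utility, camera_cost, cost_limit, recalc_around):
    """Greedy camera placement over a dict {position: utility of visibility}.
    Repeatedly pick the highest-utility position (step 2), place a camera
    there (step 3), stop once another camera would exceed the cost limit
    (step 4), and recalculate utilities around the camera (step 5)."""
    placed, spent = [], 0
    while visibility_utility and spent + camera_cost <= cost_limit:
        best = max(visibility_utility, key=visibility_utility.get)
        placed.append(best)
        spent += camera_cost
        visibility_utility = recalc_around(visibility_utility, best)
    return placed
```

With a per-camera cost of 10 and a cost limit of 25, for instance, only two cameras fit before the loop terminates.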
V. SIMULATION RESULTS
A. Simulation Setup
In order to compare simulation results with experimental results on level terrain (2-D), the experiments were conducted with one top-down camera installed on the fourth floor. We used a layout of the building with a base length of 63 meters and a height of 40 meters. To input areas and regions, a graphical user interface was developed, as shown in Figure 2.
The utility of visibility varies depending upon the performance as well as the cost of the camera. In this paper, three types of cameras are used to evaluate the camera placement method. The viewing distance of camera B is 1.5 times longer than that of camera A. The available angle of camera A is about 1.33 times greater than that of camera B. Though the viewing distance of camera C is 2 times longer than that of camera A, the installation cost of camera C is 2.5 times higher than that of camera A.
(a) Trajectories extracted for 3 hours (b) Trajectories extracted for 6 hours
Fig. 3. Priority layer with trajectories
(a) Cost limit is 500 (b) Cost limit is 1000
(c) Cost limit is 1200 (d) Cost limit is 1500
Fig. 4. Camera positions that meet a camera installation cost limit set
Figures 3(a) and 3(b) show the priority layers with trajectories that are extracted from the structure layer and the area layer of the building for 3 hours and 6 hours, respectively. As time goes on, it is evident that the extracted trajectories trace the target motion paths taken through an area of interest, as shown in Figure 3.
B. Results
The camera placement algorithm using the greedy strategy calculates the utility of visibility as well as the utility of cost for each type of camera in each cell on the grid map. The utility map of visibility is continuously updated until the optimal number of cameras to be placed is finally set.
Figure 4 shows the camera placement positions and FOVs depending on each camera installation cost limit. In the figure, it is optimal to install 6 A-type cameras, 5 C-type cameras, 15 A-type cameras, and 18 A-type cameras when the cost limits are 500, 1000, 1200, and 1500, respectively.
Figure 5 shows the space coverage until 20 cameras are installed. As shown in Figure 5(a), as the number of installed C-type cameras increases, the coverage rate per camera becomes lower than that of the other types. Camera C is the most appropriate for installation when the number of cameras is less than 5.
Fig. 5. Space coverage: (a) space coverage, (b) space coverage rate per cost, and (c) accumulated space coverage rate (%), each plotted against the number of cameras (0 to 20) for cameras A, B, and C.
When the number of cameras is between 6 and 8, camera C is the most appropriate for installation. Camera A is the most appropriate for installation when the number of cameras is 9 or more. As shown in Figure 5(b), camera A has the highest coverage rate per cost. Figure 5(c) shows the accumulated space utility coverage rate. When 20 cameras are installed, the coverage rates of cameras A, B, and C are 85.74%, 96.69%, and 99.87%, respectively. Although the coverage rate is 98.98% when 14 C-type cameras are installed, the cost of camera C is the highest. Because cameras are selected in order of highest to lowest utility of visibility, Figure 5(c) shows the fastest increase in coverage at first. However, the rate of increase of the coverage rate continues to decline; for example, the incremental coverage rate of camera C falls below 0.08% once more than 15 cameras are installed.
Figure 6 shows the space utility coverage and the cost when each type of camera is installed under the cost limit or the minimum space coverage rate. When the cost limit is below 810 or over 1200, camera A is the most appropriate for installation, as shown in Figure 6(a). Camera C is the most appropriate for installation when the cost limit is over 810 and below 1200. As shown in Figure 6(b), camera C is the most appropriate when the coverage rate is less than 54%, and camera A is the most appropriate when the coverage rate is more than 54%.
Fig. 6. Space utility coverage and cost: (a) space utility coverage versus cost (500 to 1500) and (b) cost versus coverage rate (20% to 80%) for cameras A, B, and C.

VI. CONCLUSIONS
We have presented a camera placement method with certain task-specific constraints and a minimal camera setup cost, with assumptions defined by the performance of real-world cameras. The camera placement method uses an agent, modeled and implemented with the A* algorithm, to estimate the
trajectories of moving people. The camera placement method determines appropriate camera positions and an appropriate number of cameras to be installed in view of the installation cost. The number of cameras to be placed has been calculated based on the minimum camera placement cost and the maximum space coverage. We considered three types of cameras and the sum of the installation costs of the cameras. It is essential to deal with a combination of various types of cameras to determine appropriate camera positions and an appropriate number of cameras. In the future, we will improve our method to calculate the overall utility of visibility considering a mixture of various camera types.