
Post-filter

The human body illustrates the Golden Section or Divine Proportion.


The Divine Proportion in the Body
The white line is the body's height.
The blue line, a golden section of the white line, defines the distance from the head to
the finger tips.
The yellow line, a golden section of the blue line, defines the distance from the head to
the navel and the elbows.
The green line, a golden section of the yellow line, defines the distance from the head to
the pectorals and inside top of the arms, the width of the shoulders, the length of the
forearm and the shin bone.
The magenta line, a golden section of the green line, defines the distance from the head
to the base of the skull and the width of the abdomen. The sectioned portions of the
magenta line determine the position of the nose and the hairline.
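
As a worked illustration of this cascade, the short sketch below divides a hypothetical body height of 1.80 m by the golden ratio repeatedly; the height and the printed values are purely illustrative, not measurements from the text.

    # Worked sketch of the golden-section cascade described above.
    # The 1.80 m body height is a hypothetical example value.
    PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

    height = 1.80                                  # white line: body height
    head_to_fingertips = height / PHI              # blue line
    head_to_navel = head_to_fingertips / PHI       # yellow line
    head_to_pectorals = head_to_navel / PHI        # green line
    head_to_skull_base = head_to_pectorals / PHI   # magenta line

    for name, value in [
        ("white (height)", height),
        ("blue (head to fingertips)", head_to_fingertips),
        ("yellow (head to navel)", head_to_navel),
        ("green (head to pectorals)", head_to_pectorals),
        ("magenta (head to skull base)", head_to_skull_base),
    ]:
        print(f"{name}: {value:.3f} m")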

Hence, the golden ratio is a logical approach for identifying human beings when the face is not visible or when the person has their back to the camera.
When the golden ratio was used to locate the head and shoulders on our sample set, we obtained an error. The positions of the eye and shoulder with respect to height were therefore calculated on a sample set of 25 humans at two specific distances. The values obtained were:
R11 = 0.0906;
R13 = 0.3166;
R21 = 0.105;
R23 = 0.3885;
R = 0.352;

R is the zoom ratio; it points to the abdomen of the human and helps to calculate the distance mode. R11, R13, R21, R23 represent the eye and shoulder positions in the different modes, which are explained next.
The width in pixels is found by multiplying R by the object length. This width, when divided by the total length, gives the mode of operation:
A = width / object length
0.30 <= A <= 0.449 : MODE 1 (use R11, R13)
0.45 <= A <= 0.6   : MODE 2 (use R21, R23)
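
A minimal sketch of this mode selection in Python is given below; the function and variable names are assumptions made for illustration, while the thresholds and ratio constants are the calibrated values reported above.

    # Illustrative mode selection from the ratio A = width / object length.
    # The numeric constants come from the calibration above; the names and
    # structure are assumptions for illustration only.
    R = 0.352  # zoom ratio (points to the abdomen of the human); width_px
               # can be estimated as R * object_length_px, as described above

    # (eye ratio, shoulder ratio) for each distance mode
    MODE_RATIOS = {
        1: (0.0906, 0.3166),  # R11, R13
        2: (0.105, 0.3885),   # R21, R23
    }

    def select_mode(width_px, object_length_px):
        """Return (mode, eye_ratio, shoulder_ratio), or None if no mode matches."""
        a = width_px / object_length_px
        if 0.30 <= a <= 0.449:
            mode = 1
        elif 0.45 <= a <= 0.6:
            mode = 2
        else:
            return None
        eye_ratio, shoulder_ratio = MODE_RATIOS[mode]
        return mode, eye_ratio, shoulder_ratio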


By finding the mode we are able to identify how far the object is located from the camera. We have analyzed two positions; this can be extended to a maximum of four.

After obtaining the black & white image from the pre-filter, we analyze the various objects in it. In a loop, every object in the frame is tested for being human. Let us assume the presence of a human in our frame; after obtaining its boundaries, in the form of pixels, in the last stage of pre-filtering, these are stored in an array. This array is analyzed for two components, width and height. From this data we first obtain the width by subtracting the extreme values of the X co-ordinate on the same Y-axis.
We find W(n) = X_a(Y = a) - X_b(Y = a), where a is some constant,
n = 1, 2, 3, ... (perimeter size of the object), and W(n) is the width matrix.
Through this we obtain the width of the object in every row; all these values are stored in an array.
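
A minimal sketch of this row-wise width computation, assuming the object is available as a NumPy boolean mask (this representation and the names are assumptions, not the authors' implementation):

    import numpy as np

    def row_widths(mask):
        """Width of the object in every image row, as in W(n) above.

        mask is a 2-D boolean array in which True marks an object pixel.
        For each row Y = a, the width is the difference between the extreme
        X co-ordinates of the object pixels in that row.
        """
        widths = np.zeros(mask.shape[0], dtype=int)
        for y, row in enumerate(mask):
            xs = np.flatnonzero(row)  # X co-ordinates of object pixels in this row
            if xs.size:
                widths[y] = xs.max() - xs.min()
        return widths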
Similarly, we obtain the different heights by repeating the same procedure on the columns. The top-left corner is assumed to be pixel (0, 0), so our image actually lies on the negative Y-axis. This means we need to calculate the gap (Y-top) between the object's head and the X-axis. We obtain this by subtracting the total height of the object from its total length measured from the X-axis. Y-top then needs to be added to the height when calculating the locations of the eye and shoulder.
Eye (co-ordinate pixel) = Total height (of object) * R11 (MODE 1) + Y-top
Shoulder (co-ordinate pixel) = Total height (of object) * R13 (MODE 1) + Y-top
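
Expressed as a short sketch (MODE 1 constants from above; the function and variable names are assumed for illustration):

    def eye_and_shoulder_rows(total_height_px, y_top_px,
                              eye_ratio=0.0906, shoulder_ratio=0.3166):
        """Pixel rows of the eye and shoulder, following the MODE 1 formulas above."""
        eye = total_height_px * eye_ratio + y_top_px
        shoulder = total_height_px * shoulder_ratio + y_top_px
        return int(round(eye)), int(round(shoulder))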
Through this we find the locations of the head and shoulders of the object. The neck width is calculated by finding the minimum component of the width array between the eye and the shoulder. By comparing these three widths we decide whether the object is a human; this step constitutes the detection of humans, since other objects fail the test. Along with this, the ratio of the areas between the head and the abdomen was taken, which again gives a ratio of 0.33, but in our analysis this turned out to be an extra step with almost no effect on the final result. Moreover, for any object to be detected as a human, the width-to-height ratios of its head and shoulders must be in the same proportion as those of a human being.
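
A simplified sketch of this comparison follows; the text does not give exact thresholds, so the specific conditions (neck narrower than head and shoulders, shoulders wider than head) are assumptions made only for illustration.

    def looks_human(widths, eye_row, shoulder_row):
        """Rough human check from the per-row width array.

        widths is indexed by image row (e.g. the output of row_widths above).
        Assumed criteria: the neck (minimum width between the eye and shoulder
        rows) is narrower than both the head width at the eye row and the
        shoulder width at the shoulder row, and the shoulders are wider than
        the head.
        """
        head_width = widths[eye_row]
        shoulder_width = widths[shoulder_row]
        neck_width = min(widths[eye_row:shoulder_row + 1])
        return (neck_width < head_width
                and neck_width < shoulder_width
                and shoulder_width > head_width)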
