
ACKNOWLEDGEMENT

We sincerely thank all those individuals whose opinions and suggestions mattered a great deal to us
while working on this seminar project. We are deeply grateful to our parents and teachers, who
supported our endeavour from behind the scenes throughout; their support has been the driving
force that enabled us to complete the project on time. We would also like to thank the faculty and
staff of the Department of Electronics and Communication Engineering, Vasavi College of
Engineering, for their assistance during the project. Their generous support has motivated us to
perform our best. Last but not least, we are grateful to all our fellow students of Vasavi College,
Hyderabad, who directly or indirectly helped us pursue this project.

ABSTRACT

Iris recognition is a highly efficient biometric identification method with great potential in
security systems for preventing fraudulent use. Iris recognition systems obtain a unique mapping
for each person, and identification is possible by applying an appropriate matching algorithm.
Here, segmentation and normalization are performed using the Canny edge detection technique.

Edge detection supports image segmentation, normalization and data compression. This report
focuses mainly on the Canny edge detection technique. Compared with other edge detectors, the
Canny detector performs better in several respects: it gives better results for noisy images,
removes the streaking problem and is adaptive in nature. Using a minimum number of Curvelet
coefficients, accuracy of up to 100% can be achieved, and the time the system needs to identify
an iris is also very low. The implementation and iris detection have given good results.

Chapter 1

INTRODUCTION
Edge detection is a basic tool in image processing, used primarily for feature detection and
extraction. It aims to identify points in a digital image where the brightness changes sharply,
i.e. where there are discontinuities. The purpose of edge detection is to significantly reduce
the amount of data in an image while preserving the structural properties needed for further
image processing. In a grey-level image, an edge is a local feature that, within a neighborhood,
separates regions in each of which the grey level is more or less uniform, with different values
on the two sides of the edge.

Basics of Edge detection


Edge detection is an image processing technique for finding the boundaries of objects
within images. It works by detecting discontinuities in brightness. Edge detection is used
for image segmentation and data extraction in areas such as image processing, computer
vision, and machine vision.
We can also say that sudden changes or discontinuities in an image are called edges. Most of the
shape information of an image is enclosed in edges. So we first detect these edges in an image,
and then, by enhancing the areas of the image that contain edges, the sharpness of the image
increases and the image becomes clearer.

Types of Edges

Generally edges are of three types:

Horizontal edges
Vertical Edges
Diagonal Edges

Figure: Types of edges

Variables involved in the selection of an edge detection operator


Edge orientation: The geometry of the operator determines a characteristic direction in
which it is most sensitive to edges. Operators can be optimized to look for horizontal,
vertical, or diagonal edges.

Noise environment: Edge detection is difficult in noisy images, since both the noise and
the edges contain high-frequency content. Attempts to reduce the noise result in
blurred and distorted edges. Operators used on noisy images are typically larger in
scope, so they can average enough data to discount localized noisy pixels. This results in
less accurate localization of the detected edges.

Edge structure: Not all edges involve a step change in intensity. Effects such as
refraction or poor focus can result in objects with boundaries defined by a gradual
change in intensity. The operator needs to be chosen to be responsive to such a gradual
change in those cases. Newer wavelet-based techniques actually characterize the nature
of the transition for each edge in order to distinguish, for example, edges associated
with hair from edges associated with a face.

Chapter 2

Methods of Edge detection


Gradient: The gradient method detects the edges by looking for the maximum and
minimum in the first derivative of the image.

Laplacian: The Laplacian method searches for zero crossings in the second derivative
of the image to find edges. An edge has the one-dimensional shape of a ramp and
calculating the derivative of the image can highlight its location. Suppose we have the
following signal, with an edge shown by the jump in intensity below:

If we take the gradient of this signal (which, in one dimension, is just the first derivative with
respect to t) we get the following:

Clearly, the derivative shows a maximum located at the center of the edge in the original signal.
This method of locating an edge is characteristic of the gradient filter family of edge detection
filters and includes the Sobel method. A pixel location is declared an edge location if the value
of the gradient exceeds some threshold. As mentioned before, edge pixels have higher intensity
values than those surrounding them. So once a threshold is set, you can compare the gradient
value to the threshold value and detect an edge whenever the threshold is exceeded.
Furthermore, when the first derivative is at a maximum, the second derivative is zero. As a
result, another alternative to finding the location of an edge is to locate the zeros in the second
derivative. This method is known as the Laplacian and the second derivative of the signal is
shown below:
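The two behaviours described above (a first-derivative maximum over the edge and a
second-derivative zero crossing) can be checked numerically on a small one-dimensional signal.
The following is a NumPy sketch with a made-up ramp signal, standing in for the report's
original figures (the report's own code is MATLAB; Python is used here purely for illustration):

```python
import numpy as np

# A 1-D signal with a ramp edge between two flat regions.
signal = np.array([10, 10, 10, 20, 30, 40, 50, 50, 50], dtype=float)

# First derivative (the gradient): large across the ramp, zero elsewhere.
# Values: 0, 0, 10, 10, 10, 10, 0, 0
first = np.diff(signal)

# Second derivative (the 1-D Laplacian): opposite-sign spikes at the two
# ends of the ramp and zero inside it, so the edge lies between a positive
# and a negative response (the zero crossing).
# Values: 0, 10, 0, 0, 0, -10, 0
second = np.diff(signal, n=2)
```

The gradient method thresholds `first`; the Laplacian method looks for the sign change in `second`.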

EDGE DETECTION TECHNIQUES


Sobel Operator:

The operator consists of a pair of 3x3 convolution kernels as shown in the figure. One kernel is
simply the other rotated by 90°.

Figure: Sobel operator

These kernels are designed to respond maximally to edges running vertically and horizontally
relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels
can be applied separately to the input image, to produce separate measurements of the
gradient component in each orientation (call these Gx and Gy). These can then be combined
together to find the absolute magnitude of the gradient at each point and the orientation of
that gradient. The gradient magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute.

The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial
gradient is given by:

Theta = arctan(Gy / Gx)

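The kernel pair and both magnitude formulas can be sketched numerically. A NumPy illustration
(the 3x3 patch is an invented example; the report's own code is MATLAB):

```python
import numpy as np

# Sobel kernels: Kx responds to vertical edges, Ky to horizontal ones.
# One kernel is the other rotated by 90 degrees (here via transpose).
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Ky = Kx.T

# A 3x3 patch straddling a vertical step edge (dark left, bright right).
patch = np.array([[0, 0, 100],
                  [0, 0, 100],
                  [0, 0, 100]], dtype=float)

Gx = np.sum(Kx * patch)        # strong response across the edge
Gy = np.sum(Ky * patch)        # no variation along the edge
G = np.hypot(Gx, Gy)           # exact magnitude sqrt(Gx^2 + Gy^2)
G_approx = abs(Gx) + abs(Gy)   # faster |Gx| + |Gy| approximation
theta = np.degrees(np.arctan2(Gy, Gx))  # gradient orientation in degrees
```

For this patch the two magnitude formulas agree exactly, because one component is zero; in
general the approximation only bounds the true magnitude from above.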
Roberts cross operator:


The Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient
measurement on an image. Pixel values at each point in the output represent the estimated
absolute magnitude of the spatial gradient of the input image at that point.

The operator consists of a pair of 2x2 convolution kernels as shown in the figure. One kernel is
simply the other rotated by 90°. This is very similar to the Sobel operator.

Figure: Roberts operator

These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one
kernel for each of the two perpendicular orientations. The kernels can be applied separately to
the input image, to produce separate measurements of the gradient component in each
orientation (call these Gx and Gy). These can then be combined together to find the absolute
magnitude of the gradient at each point and the orientation of that gradient. The gradient
magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

although typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute.

The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel
grid orientation) is given by:

Theta = arctan(Gy / Gx) - 3*pi/4

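The diagonal sensitivity of the Roberts cross can be seen on two tiny patches, each containing a
step along one of the two diagonals (a NumPy sketch with invented patch values; the report's own
code is MATLAB):

```python
import numpy as np

# Roberts cross kernels: 2x2, one the other rotated by 90 degrees,
# responding maximally to edges at 45 degrees to the pixel grid.
Kx = np.array([[1, 0],
               [0, -1]], dtype=float)
Ky = np.array([[0, 1],
               [-1, 0]], dtype=float)

# Step along one diagonal: only Ky responds.
diag1 = np.array([[100, 100],
                  [0, 100]], dtype=float)
Gx1 = np.sum(Kx * diag1)
Gy1 = np.sum(Ky * diag1)

# Step along the other diagonal: only Kx responds.
diag2 = np.array([[0, 100],
                  [100, 100]], dtype=float)
Gx2 = np.sum(Kx * diag2)
Gy2 = np.sum(Ky * diag2)
```

Each kernel thus measures the gradient component along one diagonal, and the two components are
combined into a magnitude exactly as for the Sobel operator.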
Prewitt's operator:

The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and
horizontal edges in images.

Figure: Prewitt operator

Laplacian of Gaussian:
The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian
of an image highlights regions of rapid intensity change and is therefore often used for edge
detection. The Laplacian is often applied to an image that has first been smoothed with
something approximating a Gaussian Smoothing filter in order to reduce its sensitivity to noise.
The operator normally takes a single graylevel image as input and produces another graylevel
image as output.

The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by:

L(x,y) = ∂²I/∂x² + ∂²I/∂y²

Since the input image is represented as a set of discrete pixels, we have to find a discrete
convolution kernel that can approximate the second derivatives in the definition of the
Laplacian. Three commonly used small kernels are shown in Figure.

Figure: Commonly used discrete approximations to the Laplacian filter
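The behaviour of one such discrete kernel can be demonstrated directly: the response is zero on a
uniform patch and flips sign across a step edge, which is the zero crossing the Laplacian method
looks for. A NumPy sketch with invented patch values (the report's own code is MATLAB):

```python
import numpy as np

# One common discrete approximation to the Laplacian.
lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)

# On a uniform patch the response is zero ...
flat = np.full((3, 3), 50.0)
r_flat = np.sum(lap * flat)

# ... while patches centred just left and just right of a vertical step
# edge give responses of opposite sign: a zero crossing at the edge.
left = np.array([[0, 0, 100],
                 [0, 0, 100],
                 [0, 0, 100]], dtype=float)
right = np.array([[0, 100, 100],
                  [0, 100, 100],
                  [0, 100, 100]], dtype=float)
r_left = np.sum(lap * left)    # positive on the dark side
r_right = np.sum(lap * right)  # negative on the bright side
```

The kernel's coefficients sum to zero, which is why any constant region produces no response.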

Because these kernels are approximating a second derivative measurement on the image, they
are very sensitive to noise. To counter this, the image is often Gaussian Smoothed before
applying the Laplacian filter. This pre-processing step reduces the high frequency noise
components prior to the differentiation step.

In fact, since the convolution operation is associative, we can convolve the Gaussian smoothing
filter with the Laplacian filter first of all, and then convolve this hybrid filter with the image to
achieve the required result. Doing things this way has two advantages:

Since both the Gaussian and the Laplacian kernels are usually much smaller than the
image, this method usually requires far fewer arithmetic operations.

The LoG (`Laplacian of Gaussian') kernel can be precalculated in advance so only one
convolution needs to be performed at run-time on the image.

The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:

LoG(x,y) = -(1/(πσ⁴)) * [1 - (x² + y²)/(2σ²)] * e^(-(x² + y²)/(2σ²))

and is shown in the figure below.


Figure: Discrete approximation to the LoG function with Gaussian σ = 1.4

Note that as the Gaussian is made increasingly narrow, the LoG kernel becomes the same as the
simple Laplacian kernels shown in the figure. This is because smoothing with a very narrow
Gaussian (σ < 0.5 pixels) on a discrete grid has no effect. Hence on a discrete grid, the simple
Laplacian can be seen as a limiting case of the LoG for narrow Gaussians.
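The associativity argument above is easy to verify numerically: smoothing then differentiating
gives the same result as convolving once with the precomputed LoG kernel. A 1-D NumPy sketch
with a made-up signal (the report's own code is MATLAB):

```python
import numpy as np

# A normalised 1-D Gaussian kernel with sigma = 1.4 (as in the figure above).
sigma = 1.4
t = np.arange(-4, 5)
gauss = np.exp(-t**2 / (2 * sigma**2))
gauss /= gauss.sum()

# 1-D discrete Laplacian (second-difference) kernel.
lap = np.array([1.0, -2.0, 1.0])

signal = np.array([0, 0, 0, 10, 50, 90, 100, 100, 100], dtype=float)

# Two passes: smooth first, then differentiate.
two_pass = np.convolve(np.convolve(signal, gauss), lap)

# One pass: precompute the hybrid LoG kernel, then convolve once.
log_kernel = np.convolve(gauss, lap)
one_pass = np.convolve(signal, log_kernel)
```

Because convolution is associative, `two_pass` and `one_pass` agree (up to floating-point
round-off), while the one-pass route does the Gaussian-Laplacian combination only once.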

Chapter 3

Anatomy Of Human Eye


The eye is one of the major sensory organs in the human body. It is responsible for vision, color
differentiation (the human eye can differentiate between approximately 10 million colors) and
maintaining the biological clock of the human body. To understand how the eye does everything
that it does, we need to look into the structure of the human eye.

The eye is not shaped like a perfect sphere, rather it is a fused two-piece unit, composed of the
anterior segment and the posterior segment. The anterior segment is made up of the cornea,
iris and lens. The cornea is transparent and more curved, and is linked to the larger posterior
segment, composed of the vitreous, retina, choroid and the outer white shell called the sclera.
The cornea is typically about 11.5 mm (0.3 in) in diameter, and 0.5 mm (500 µm) in thickness
near its center. The posterior chamber constitutes the remaining five-sixths; its diameter is
typically about 24 mm. The cornea and sclera are connected by an area termed the limbus. The
iris is the pigmented circular structure concentrically surrounding the center of the eye, the
pupil, which appears to be black. The size of the pupil, which controls the amount of light
entering the eye, is adjusted by the iris dilator and sphincter muscles.
Light energy enters the eye through the cornea, through the pupil and then through the lens.
The lens shape is changed for near focus (accommodation) and is controlled by the ciliary
muscle. Photons of light falling on the light-sensitive cells of the retina (photoreceptor cones
and rods) are converted into electrical signals that are transmitted to the brain by the optic
nerve and interpreted as sight and vision.
Size
Dimensions typically differ among adults by only one or two millimetres, remarkably consistent
across different ethnicities. The vertical measure, generally less than the horizontal, is about
24 mm. The transverse size of a human adult eye is approximately 24.2 mm and the sagittal size
is 23.7 mm with no significant difference between sexes and age groups. A strong correlation has
been found between the transverse diameter and the width of the orbit (r = 0.88). The typical
adult eye has an anterior to posterior diameter of 24 millimetres, a volume of six cubic
centimetres (0.4 cu. in.), and a mass of 7.5 grams (0.25 oz.).
The eyeball grows rapidly, increasing from about 16-17 millimetres (about 0.65 inch) at birth to
22.5-23 mm (approx. 0.89 in) by three years of age. By age 13, the eye attains its full size.

Dynamic range
The retina has a static contrast ratio of around 100:1 (about 6.5 f-stops). As soon as the eye
moves rapidly to acquire a target (saccades), it re-adjusts its exposure by adjusting the iris,
which adjusts the size of the pupil. Initial dark adaptation takes place in approximately four
seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal
rod photoreceptors is 80% complete in thirty minutes. The process is nonlinear and multifaceted,
so an interruption by light exposure requires restarting the dark adaptation process over again.
Full adaptation is dependent on good blood flow; thus dark adaptation may be hampered by
retinal disease, poor vascular circulation and high altitude exposure.
The human eye can detect a luminance range of 10^14, or one hundred trillion
(100,000,000,000,000) (about 46.5 f-stops), from 10^-6 cd/m2, or one millionth (0.000001) of a
candela per square meter, to 10^8 cd/m2, or one hundred million (100,000,000) candelas per
square meter. This range does not include looking at the midday sun (10^9 cd/m2) or lightning
discharge.
At the low end of the range is the absolute threshold of vision for a steady light across a wide
field of view, about 10^-6 cd/m2 (0.000001 candela per square meter). The upper end of the range
is given in terms of normal visual performance as 10^8 cd/m2 (100,000,000 or one hundred million
candelas per square meter).
The eye includes a lens similar to lenses found in optical instruments such as cameras and the
same physics principles can be applied. The pupil of the human eye is its aperture; the iris is the
diaphragm that serves as the aperture stop. Refraction in the cornea causes the effective
aperture (the entrance pupil) to differ slightly from the physical pupil diameter. The entrance
pupil is typically about 4 mm in diameter, although it can range from 2 mm (f/8.3) in a brightly
lit place to 8 mm (f/2.1) in the dark. The latter value decreases slowly with age; older people's
eyes sometimes dilate to not more than 5-6 mm in the dark, and may be as small as 1 mm in the
light.
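The f-numbers quoted above follow from the usual camera relation, f-number = focal length /
aperture diameter. A quick check, taking roughly 16.7 mm as the eye's effective focal length
(an assumed textbook value used here only for illustration):

```python
# f-number = effective focal length / entrance-pupil diameter.
# The 16.7 mm effective focal length is an assumed value, used only to
# check the f/8.3 and f/2.1 figures quoted in the text.
focal_length_mm = 16.7

f_bright = focal_length_mm / 2.0  # 2 mm pupil in a brightly lit place
f_dark = focal_length_mm / 8.0    # 8 mm pupil in the dark

print(f_bright)  # about 8.3 -> f/8.3
print(f_dark)    # about 2.1 -> f/2.1
```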
Eye movement
The visual system in the human brain is too slow to process information if images are slipping
across the retina at more than a few degrees per second. Thus, to be able to see while moving,
the brain must compensate for the motion of the head by turning the eyes. Frontal-eyed
animals have a small area of the retina with very high visual acuity, the fovea centralis. It covers
about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn
the eyes so that the image of the object of regard falls on the fovea. Any failure to make eye
movements correctly can lead to serious visual degradation.
Having two eyes allows the brain to determine the depth and distance of an object, called
stereovision, and gives the sense of three-dimensionality to the vision. Both eyes must point
accurately enough that the object of regard falls on corresponding points of the two retinas to
stimulate stereovision; otherwise, double vision might occur. Some persons with congenitally
crossed eyes tend to ignore one eye's vision, thus do not suffer double vision, and do not have
stereovision. The movements of the eye are controlled by six muscles attached to each eye, and
allow the eye to elevate, depress, converge, diverge and roll. These muscles are both controlled
voluntarily and involuntarily to track objects and correct for simultaneous head movements.
The eye is arguably the most complicated organ in the human body, with a number of parts
fitted together in a near-spherical structure. Each part in the system is responsible for a certain
action which is a part of the function of the eyes. The eye structure can be broadly classified as
External structure and Internal structure.

Figure: External structure of the human eye

External structure of eye


The parts of the eye that are visible externally make up the external structure of the eye:

Sclera: It is a tough and thick white sheath that protects the inner parts of the eye. We know it
as the white of the eye.

Conjunctiva: It is a thin transparent membrane that is spread across the sclera. It keeps the
eyes moist and clear by secreting small amounts of mucus and tears.

Cornea: It is the transparent layer of skin that is spread over the pupil and the iris. The job of
the cornea is to refract the light that enters the eyes.

Iris: It is a pigmented layer of tissue that makes up the colored portion of the eye. Its primary
function is to control the size of the pupil, depending on the amount of light entering it.

Pupil: It is the small opening located at the middle of the Iris. It allows light to come in.

Figure: Internal structure of the human eye

Internal structure of eye


The internal structure of the eye includes the following parts:

Retina: It is the screen at the end of the eye, where all the images are formed. It is extremely
sensitive to light because of the presence of Photoreceptors, which are photosensitive cells that
detect dim and colored lights.

Lens: It is a biconvex, transparent and adjustable structure that focuses light onto the retina,
hence forming images on it.

Aqueous humor: It is a watery fluid that is present in the area between the lens and the cornea.
It is responsible for the nourishment of both the lens and the cornea.

Vitreous humor: It is a transparent, semi-solid, jelly-like substance that fills the interior of
the eye. It maintains the shape of the eye and also refracts light before it reaches the retina.

Optic nerve: Located at the end of the eyes, behind the retina, the optic nerve is responsible for
carrying all the nerve impulses from the photoreceptors to the brain, without which vision
would not be possible.

Chapter 4

Canny's Edge Detection Algorithm

The Canny edge detection algorithm is known to many as the optimal edge detector. Canny's
intentions were to enhance the many edge detectors already out at the time he started his
work. He was very successful in achieving his goal and his ideas and methods can be found in
his paper, "A Computational Approach to Edge Detection". In his paper, he followed a list of
criteria to improve current methods of edge detection. The first and most obvious is low error
rate. It is important that edges occurring in images should not be missed and that there be NO
responses to non-edges. The second criterion is that the edge points be well localized. In other
words, the distance between the edge pixels as found by the detector and the actual edge is to
be at a minimum. A third criterion is to have only one response to a single edge. This was
implemented because the first two criteria were not substantial enough to completely eliminate
the possibility of multiple responses to an edge.

Based on these criteria, the Canny edge detector first smooths the image to eliminate noise. It
then finds the image gradient to highlight regions with high spatial derivatives. The algorithm
then tracks along these regions and suppresses any pixel that is not at the maximum
(non-maximum suppression). The gradient array is now further reduced by hysteresis. Hysteresis
is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two
thresholds: if the magnitude is below the low threshold, the pixel is set to zero (made a
non-edge).

If the magnitude is above the high threshold, it is made an edge. And if the magnitude is
between the two thresholds, then it is set to zero unless there is a path from this pixel to a
pixel with a gradient above the high threshold.

Step 1

In order to implement the canny edge detector algorithm, a series of steps must be followed.
The first step is to filter out any noise in the original image before trying to locate and detect
any edges. And because the Gaussian filter can be computed using a simple mask, it is used
exclusively in the Canny algorithm. Once a suitable mask has been calculated, the Gaussian
smoothing can be performed using standard convolution methods. A convolution mask is
usually much smaller than the actual image. As a result, the mask is slid over the image,
manipulating a square of pixels at a time. The larger the width of the Gaussian mask, the lower
is the detector's sensitivity to noise. The localization error in the detected edges also increases
slightly as the Gaussian width is increased. The Gaussian mask used in our implementation is
shown below.

Step 2

After smoothing the image and eliminating the noise, the next step is to find the edge strength
by taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient
measurement on an image. Then, the approximate absolute gradient magnitude (edge
strength) at each point can be found. The Sobel operator uses a pair of 3x3 convolution masks,
one estimating the gradient in the x-direction (columns) and the other estimating the gradient
in the y-direction (rows). They are shown below:

The magnitude, or edge strength, of the gradient is then approximated using the formula:

|G| = |Gx| + |Gy|


Step 3

The direction of the edge is computed using the gradient in the x and y directions. However, an
error will be generated when Gx is equal to zero, so the code has to handle this case
separately. Whenever the gradient in the x direction is zero, the edge direction is set to 90
degrees or 0 degrees, depending on the value of the gradient in the y direction: if Gy is zero,
the edge direction is 0 degrees; otherwise it is 90 degrees. The formula for finding the edge
direction is just:

Theta = arctan(Gy / Gx)
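The division-by-zero guard described above can be sketched as a small helper (Python for
illustration; the report's own code is MATLAB):

```python
import numpy as np

def edge_direction(gx, gy):
    """Edge direction in degrees, with the Gx == 0 special case handled."""
    if gx == 0:
        # Division by zero would occur; fall back to 0 or 90 degrees
        # depending on the y-gradient, as described in the text.
        return 0.0 if gy == 0 else 90.0
    return np.degrees(np.arctan(gy / gx))
```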

Step 4

Once the edge direction is known, the next step is to relate the edge direction to a direction
that can be traced in an image. So if the pixels of a 5x5 image are aligned as follows:

x x x x x
x x x x x
x x a x x
x x x x x
x x x x x

Then, looking at pixel "a", it can be seen that there are only four possible directions when
describing the surrounding pixels: 0 degrees (the horizontal direction), 45 degrees (along the
positive diagonal), 90 degrees (the vertical direction), or 135 degrees (along the negative
diagonal). So the edge orientation has to be resolved into one of these four directions
depending on which it is closest to (e.g. if the orientation angle is found to be 3 degrees,
make it zero degrees). Think of this as taking a semicircle and dividing it into five regions.

Therefore, any edge direction falling within the yellow range (0 to 22.5 & 157.5 to 180 degrees)
is set to 0 degrees. Any edge direction falling in the green range (22.5 to 67.5 degrees) is set to
45 degrees. Any edge direction falling in the blue range (67.5 to 112.5 degrees) is set to 90
degrees. And finally, any edge direction falling within the red range (112.5 to 157.5 degrees) is
set to 135 degrees.
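These four ranges map directly onto a small quantisation function (a Python sketch of the rule
just stated; the report's own code is MATLAB):

```python
def quantise_direction(theta):
    """Map an edge direction in degrees to 0, 45, 90 or 135."""
    theta = theta % 180.0
    if theta < 22.5 or theta >= 157.5:   # the 0-22.5 and 157.5-180 ranges
        return 0
    if theta < 67.5:                     # the 22.5-67.5 range
        return 45
    if theta < 112.5:                    # the 67.5-112.5 range
        return 90
    return 135                           # the 112.5-157.5 range
```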

Step 5

After the edge directions are known, non-maximum suppression now has to be applied. Non-maximum
suppression is used to trace along the edge in the edge direction and suppress any pixel value
(setting it equal to 0) that is not considered to be an edge. This gives a thin line in the
output image.
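A minimal sketch of non-maximum suppression, assuming the gradient directions have already been
quantised to 0, 45, 90 or 135 degrees as in the previous step (Python/NumPy for illustration;
the report's own code is MATLAB):

```python
import numpy as np

# Offsets to the two neighbours along each quantised gradient direction.
NEIGHBOURS = {0: ((0, -1), (0, 1)),      # gradient horizontal
              45: ((-1, 1), (1, -1)),    # gradient along one diagonal
              90: ((-1, 0), (1, 0)),     # gradient vertical
              135: ((-1, -1), (1, 1))}   # gradient along the other diagonal

def non_max_suppress(mag, direction):
    """Keep a pixel only if it is maximal along its gradient direction."""
    out = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            (di1, dj1), (di2, dj2) = NEIGHBOURS[direction[i, j]]
            if (mag[i, j] >= mag[i + di1, j + dj1]
                    and mag[i, j] >= mag[i + di2, j + dj2]):
                out[i, j] = mag[i, j]    # local maximum: keep it
    return out
```

On a thick horizontal ridge of gradient magnitudes with a vertical gradient direction, only the
single strongest row survives, which is how the thin edge lines are produced.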

Step 6

Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking up of
an edge contour caused by the operator output fluctuating above and below the threshold. If a
single threshold T1 is applied to an image, and an edge has an average strength equal to T1,
then due to noise there will be instances where the edge dips below the threshold. Equally, it
will also extend above the threshold, making the edge look like a dashed line. To avoid this,
hysteresis uses two thresholds, a high (T1) and a low (T2). Any pixel in the image that has a
value greater than T1 is presumed to be an edge pixel, and is marked as such immediately. Then,
any pixels that are connected to this edge pixel and that have a value greater than T2 are also
selected as edge pixels. If you think of following an edge, you need a gradient above T1 to
start, but you don't stop till you hit a gradient below T2.
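Using the same naming as above (T1 the high threshold, T2 the low one), hysteresis can be
sketched as a flood fill from the strong pixels (Python/NumPy for illustration; the report's own
code is MATLAB):

```python
from collections import deque

import numpy as np

def hysteresis(mag, t_high, t_low):
    """Strong pixels (> t_high, i.e. T1) seed edges; weak pixels (> t_low,
    i.e. T2) are kept only if connected to a strong pixel through other
    weak pixels, which removes isolated noise responses."""
    strong = mag > t_high
    weak = mag > t_low
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))   # start from the strong seeds
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):                 # visit the 8-neighbourhood
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < mag.shape[0] and 0 <= nj < mag.shape[1]
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True      # weak pixel joined to an edge
                    queue.append((ni, nj))
    return edges
```

A weak pixel with no connection to any strong pixel is discarded, which is exactly how streaks
and isolated noise responses are suppressed.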

Visual Comparison of various edge detection Algorithms

Figure: Image used for edge detection analysis (wheel.gif)

Edge detection of all four types was performed on the figure above. Canny yielded the best
results. This was expected, as Canny edge detection accounts for regions in an image, yields
thin lines for its edges by using non-maximal suppression, and also utilizes hysteresis when
thresholding.

Figure: Results of edge detection on the image above. Canny had the best results.

Motion blur was then applied to the image, and the edge detection methods previously used were
applied again to study their effects in blurry image environments. No method appeared to be
useful for real-world applications; however, Canny produced the best results out of the set.

Figure: Results of edge detection using different operators

Comparison of Edge detection Algorithm


Figure: Comparison of edge detection algorithms (original image; Sobel; Prewitt; Roberts;
Laplacian; Laplacian of Gaussian)

Performance of Edge Detection Algorithms

Gradient-based algorithms such as the Prewitt filter have a major drawback of being very
sensitive to noise. The size of the kernel filter and its coefficients are fixed and cannot be
adapted to a given image. An adaptive edge-detection algorithm is necessary to provide a robust
solution that is adaptable to the varying noise levels of these images and to help distinguish
valid image content from visual artifacts introduced by noise.

The performance of the Canny algorithm depends heavily on the adjustable parameters: σ, the
standard deviation of the Gaussian filter, and the threshold values T1 and T2. σ also controls
the size of the Gaussian filter: the bigger the value of σ, the larger the Gaussian filter
becomes. This implies more blurring, necessary for noisy images, as well as detection of larger
edges. As expected, however, the larger the scale of the Gaussian, the less accurate the
localization of the edge. Smaller values of σ imply a smaller Gaussian filter, which limits the
amount of blurring and maintains finer edges in the image. The user can tailor the algorithm by
adjusting these parameters to adapt to different environments.

Canny's edge detection algorithm is computationally more expensive than the Sobel, Prewitt and
Roberts operators. However, Canny's algorithm performs better than all of these operators under
almost all scenarios.

Chapter 5

Coding
clear all;
close all;
clc;
% Reading the image
Img=imread('002L_12.png');
%% Pre-processing and normalisation
figure;imshow(Img);title('INPUT EYE IMAGE');
%% Step 1: Converting from RGB to grayscale
Gray_imag=rgb2gray(Img);
figure;imshow(Gray_imag);title('IMAGE after gray conversion');
% Deleting the extra portion
t2=Gray_imag(:,65:708);
t3=t2(18:563,:);
figure;imshow(t3);title('IMAGE after deleting extra portion');
%% Step 2: Resizing the image (546x644) to 512x512
t4=imresize(t3,[512,512],'bilinear');
figure;imshow(t4);title('IMAGE after resize');
%% Step 3: Histogram equalisation
Hist_eq_img = histeq(t4,512);
figure;imshow(Hist_eq_img);title('IMAGE after histogram equalisation');
%% Step 4: Gaussian filtering
G = fspecial('gaussian',[512 512],20);
% Filter the image
Hist_eq_img=double(Hist_eq_img);
Ig = imfilter(Hist_eq_img,G,'same');
%% Step 5: Canny edge detection
BW2 = edge(Ig,'canny',0.53,1);
figure;imshow(BW2);title('IMAGE after canny edge detection');

Result and discussion

Thus, by using the Canny edge detection algorithm, we found the edges of an input eye image,
which is a prerequisite for human eye recognition. We took an input eye image and ran it through
the operations indicated above: smoothing, finding the gradient, non-maximum suppression and
finally edge tracking by hysteresis. As this project concerns human eye recognition, we took an
input eye image and obtained the edges as shown below:

Figure 1: Input image of the human eye

Figure 2: Image after gray conversion

Figure 3: Image after deleting the extra portion of the image

Figure 4: Image after resizing

Figure 5: Image after histogram equalisation

Figure 6: Edge-detected image after the Canny edge detection technique

CONCLUSION

We have presented a fast, efficient algorithm for detecting the edges of an input eye image. The
algorithm is based on six fundamental steps: filtering the noise, finding the gradient, finding
the edge direction, relating the direction to one traceable in the image, non-maximum
suppression, and finally hysteresis. Although the algorithm does make small errors in finding
the exact edge points of the input image, it was designed to minimize the number of gross errors
in the analysis. The algorithm has been found to be sufficiently reliable and accurate that it
is currently being used in on-line experiments on edge detection.

REFERENCES

A. Anandhi (Research Scholar, Department of Computer Applications, St. Peter's University,
Chennai), Dr. M. S. Josephine (Professor, Department of Computer Applications, Dr. MGR
University, Chennai), Dr. V. Jeyabalaraja (Professor, Department of Computer Applications,
Velammal Engineering College, Chennai) and S. Satthiyaraj (Teaching Fellow, Department of EEE,
University College of Engineering, Panruti), "Iris Segmentation and Detection System for Human
Recognition Using Canny Detection Algorithm".

B. Johnson R. G. (1991) "Can Iris Patterns be used to Identify People?", Chemical and Laser
Sciences Division, LA-12331-PR, LANL, Calif.

C. Daugman J. (2004) "How iris recognition works", IEEE Transactions on Circuits and Systems for
Video Technology, Vol. 14, No. 1, pp. 21-30.

D. Proenca H. and Alexandre L. (2004) "UBIRIS: Iris Image Database", Available:
http://iris.di.ubi.pt.

E. Zhang D. (2003) "Detecting eyelash and reflection for accurate iris segmentation",
International Journal of Pattern Recognition and Artificial Intelligence, Vol. 1, No. 6,
pp. 1025-1034.

APPENDIX

MATLAB CODE:
% path for writing diagnostic images
global DIAGPATH
DIAGPATH = 'diagnostics';

%normalisation parameters
radial_res = 20;
angular_res = 240;
% with these settings a 9600 bit iris template is
% created

%feature encoding parameters
nscales=1;
minWaveLength=18;
mult=1; % not applicable if using nscales = 1
sigmaOnf=0.5;

eyeimage = imread(eyeimage_filename);

savefile = [eyeimage_filename,'-houghpara.mat'];
[stat,mess]=fileattrib(savefile);

if stat == 1
% if this file has been processed before
% then load the circle parameters and
% noise information for that file.
load(savefile);

else

% if this file has not been processed before


% then perform automatic segmentation and
% save the results to a file

[circleiris circlepupil imagewithnoise] = segmentiris(eyeimage);


save(savefile,'circleiris','circlepupil','imagewithnoise');

end

% WRITE NOISE IMAGE

imagewithnoise2 = uint8(imagewithnoise);
imagewithcircles = uint8(eyeimage);

%get pixel coords for circle around iris

[x,y] = circlecoords([circleiris(2),circleiris(1)],circleiris(3),size(eyeimage));
ind2 = sub2ind(size(eyeimage),double(y),double(x));

%get pixel coords for circle around pupil


[xp,yp] = circlecoords([circlepupil(2),circlepupil(1)],circlepupil(3),size(eyeimage));
ind1 = sub2ind(size(eyeimage),double(yp),double(xp));

% Write noise regions


imagewithnoise2(ind2) = 255;
imagewithnoise2(ind1) = 255;
% Write circles overlayed
imagewithcircles(ind2) = 255;
imagewithcircles(ind1) = 255;
w = cd;
cd(DIAGPATH);
imwrite(imagewithnoise2,[eyeimage_filename,'-noise.jpg'],'jpg');
imwrite(imagewithcircles,[eyeimage_filename,'-segmented.jpg'],'jpg');
cd(w);

% perform normalisation

[polar_array noise_array] = normaliseiris(imagewithnoise, circleiris(2),...
    circleiris(1), circleiris(3), circlepupil(2), circlepupil(1),...
    circlepupil(3), eyeimage_filename, radial_res, angular_res);

% WRITE NORMALISED PATTERN, AND NOISE PATTERN


w = cd;
cd(DIAGPATH);
imwrite(polar_array,[eyeimage_filename,'-polar.jpg'],'jpg');
imwrite(noise_array,[eyeimage_filename,'-polarnoise.jpg'],'jpg');
cd(w);

% perform feature encoding


[template mask] = encode(polar_array, noise_array, nscales, minWaveLength,...
    mult, sigmaOnf);

function [polar_array, polar_noise] = normaliseiris(image, x_iris, y_iris,...
    r_iris, x_pupil, y_pupil, r_pupil, eyeimage_filename, radpixels, angulardiv)

global DIAGPATH

radiuspixels = radpixels + 2;
angledivisions = angulardiv-1;

r = 0:(radiuspixels-1);

theta = 0:2*pi/angledivisions:2*pi;

x_iris = double(x_iris);
y_iris = double(y_iris);
r_iris = double(r_iris);

x_pupil = double(x_pupil);
y_pupil = double(y_pupil);
r_pupil = double(r_pupil);

% calculate displacement of pupil center from the iris center


ox = x_pupil - x_iris;
oy = y_pupil - y_iris;

if ox <= 0
sgn = -1;
elseif ox > 0
sgn = 1;
end

if ox==0 && oy > 0

sgn = 1;

end

r = double(r);
theta = double(theta);

a = ones(1,angledivisions+1)* (ox^2 + oy^2);

% need to do something for ox = 0


if ox == 0
phi = pi/2;
else
phi = atan(oy/ox);
end

b = sgn.*cos(pi - phi - theta);

% calculate radius around the iris as a function of the angle


r = (sqrt(a).*b) + ( sqrt( a.*(b.^2) - (a - (r_iris^2))));

r = r - r_pupil;

rmat = ones(1,radiuspixels)'*r;

rmat = rmat.* (ones(angledivisions+1,1)*[0:1/(radiuspixels-1):1])';


rmat = rmat + r_pupil;

% exclude values at the boundary of the pupil-iris border and the iris-sclera
% border, as these may not correspond to areas in the iris region and will
% introduce noise.
%

% ie don't take the outside rings as iris data.
rmat = rmat(2:(radiuspixels-1), :);

% calculate cartesian location of each data point around the circular iris
% region
xcosmat = ones(radiuspixels-2,1)*cos(theta);
xsinmat = ones(radiuspixels-2,1)*sin(theta);

xo = rmat.*xcosmat;
yo = rmat.*xsinmat;

xo = x_pupil+xo;
yo = y_pupil-yo;

% extract intensity values into the normalised polar representation through
% interpolation
[x,y] = meshgrid(1:size(image,2),1:size(image,1));
polar_array = interp2(x,y,image,xo,yo);

% create noise array with location of NaNs in polar_array


polar_noise = zeros(size(polar_array));
coords = find(isnan(polar_array));
polar_noise(coords) = 1;

polar_array = double(polar_array)./255;

% start diagnostics, writing out eye image with rings overlayed

% get rid of outlying points in order to write out the circular pattern
coords = find(xo > size(image,2));
xo(coords) = size(image,2);
coords = find(xo < 1);
xo(coords) = 1;

coords = find(yo > size(image,1));


yo(coords) = size(image,1);
coords = find(yo<1);
yo(coords) = 1;

xo = round(xo);
yo = round(yo);

xo = int32(xo);
yo = int32(yo);

ind1 = sub2ind(size(image),double(yo),double(xo));

image = uint8(image);

image(ind1) = 255;
%get pixel coords for circle around iris
[x,y] = circlecoords([x_iris,y_iris],r_iris,size(image));
ind2 = sub2ind(size(image),double(y),double(x));

%get pixel coords for circle around pupil
[xp,yp] = circlecoords([x_pupil,y_pupil],r_pupil,size(image));
ind1 = sub2ind(size(image),double(yp),double(xp));

image(ind2) = 255;
image(ind1) = 255;

% write out rings overlaying original iris image


w = cd;
cd(DIAGPATH);

imwrite(image,[eyeimage_filename,'-normal.jpg'],'jpg');

cd(w);

% end diagnostics

%replace NaNs before performing feature encoding


coords = find(isnan(polar_array));
polar_array2 = polar_array;
polar_array2(coords) = 0.5;
avg = sum(sum(polar_array2)) / (size(polar_array,1)*size(polar_array,2));
polar_array(coords) = avg;
function [template, mask] = encode(polar_array, noise_array, nscales,...
    minWaveLength, mult, sigmaOnf)

% convolve normalised region with Gabor filters


[E0 filtersum] = gaborconvolve(polar_array, nscales, minWaveLength, mult,...
    sigmaOnf);

length = size(polar_array,2)*2*nscales;

template = zeros(size(polar_array,1), length);

length2 = size(polar_array,2);
h = 1:size(polar_array,1);

%create the iris template

mask = zeros(size(template));

for k=1:nscales

E1 = E0{k};

% Phase quantisation
H1 = real(E1) > 0;
H2 = imag(E1) > 0;

% if amplitude is close to zero then


% phase data is not useful, so mark off
% in the noise mask

H3 = abs(E1) < 0.0001;

for i=0:(length2-1)

ja = double(2*nscales*(i));
% construct the biometric template
template(h,ja+(2*k)-1) = H1(h, i+1);
template(h,ja+(2*k)) = H2(h,i+1);

% create noise mask


mask(h,ja+(2*k)-1) = noise_array(h, i+1) | H3(h, i+1);
mask(h,ja+(2*k)) = noise_array(h, i+1) | H3(h, i+1);

end

end
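The phase quantisation performed inside encode() above can be summarised in a short sketch. The following is an illustrative NumPy rendering, not the project's MATLAB code: each complex Gabor response is reduced to two bits (the signs of its real and imaginary parts, i.e. one quadrant of phase), and pixels whose amplitude falls below a small floor are flagged in the noise mask because their phase is unreliable. The stacked output layout here is an assumption for clarity; the MATLAB template instead interleaves the two bits at columns ja+2k-1 and ja+2k.

```python
import numpy as np

def phase_quantise(E1, noise, amp_floor=1e-4):
    """Two-bit phase quantisation of a complex Gabor response E1."""
    H1 = E1.real > 0             # bit 1: sign of the real part
    H2 = E1.imag > 0             # bit 2: sign of the imaginary part
    H3 = np.abs(E1) < amp_floor  # amplitude near zero -> phase not useful
    template = np.stack([H1, H2], axis=-1)          # (rows, cols, 2) bit array
    mask = np.stack([noise | H3, noise | H3], axis=-1)  # mask covers both bits
    return template, mask
```

Matching two such templates then reduces to a masked Hamming distance between bit arrays, which is why this encoding is fast to compare.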
