
CHAPTER 1

INTRODUCTION
1.1 INTRODUCTION

The contrast enhancement problem in digital images can be approached through various methodologies, among which is mathematical morphology (MM). Morphological contrast operators select, for each point of the analyzed image, a new grey level between two patterns (primitives) according to some proximity criterion. Even though morphological contrast has been studied extensively, there are no methodologies, from the point of view of MM, capable of simultaneously normalizing and enhancing the contrast in images with poor lighting. On the other hand, one of the most common techniques in image processing for enhancing dark regions is the use of nonlinear functions, such as the logarithm or power functions; alternatively, the homomorphic filter works in the frequency domain. In addition, there are techniques based on statistical analysis of the data, such as global and local histogram equalization. During histogram equalization, grey-level intensities are reordered within the image to obtain a uniformly distributed histogram. However, the main disadvantage of histogram equalization is that the global properties of the image cannot be properly applied in a local context, frequently resulting in poor detail preservation. In one reported method, contrast enhancement is formulated as an optimization problem that maximizes the average local contrast of an image.
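To make the histogram equalization step concrete, here is a minimal pure-Python sketch for an 8-bit greyscale image given as a list of rows. The thesis itself works in MATLAB (where `histeq` performs this operation); this toy version and its function name are illustrative assumptions, not the implementation used in this work.

```python
def equalize(image, levels=256):
    """Remap grey levels so the cumulative histogram becomes ~linear."""
    flat = [p for row in image for p in row]
    n = len(flat)
    # Count how many pixels fall in each grey level.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Build the cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization mapping: stretch the CDF over [0, levels-1].
    lut = [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1))
           for g in range(levels)]
    return [[lut[p] for p in row] for row in image]
```

Note how a dark, low-contrast image such as `[[10, 10], [10, 20]]` is stretched to the full range, which is exactly the behavior whose lack of local adaptation the paragraph above criticizes.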

This project deals with the detection of the background in images with poor contrast. Some morphological transformations are used to detect the background in images characterized by poor lighting. Contrast enhancement is then carried out by applying two operators based on the notion of Weber's law. The first operator employs information from block analysis, while the second utilizes the opening by reconstruction, which is employed to define the multi-background notion. The objective of the contrast operators is to normalize the grey levels of the input image so as to avoid abrupt changes in intensity among the different regions. Finally, the performance of the proposed operators is illustrated through the processing of images with different backgrounds, most of them captured under poor lighting conditions. The complete image processing is carried out in MATLAB.
The optimization formulation includes a perceptual constraint derived directly from the human suprathreshold contrast sensitivity function, and its authors apply the proposed operators to some poorly lit images with good results. On the other hand, a methodology has been presented to enhance contrast based on color statistics from a training set of visually appealing images. Here, the basic idea is to select a set of training images that look good perceptually; next, a Gaussian mixture model for the color distribution in the face region is built, and for any given input image a color tone mapping is performed so that the color statistics in the face region match the training examples. Thus, even though the reported algorithms for compensating changes in lighting are varied, some are more adequate than others. In this work, two methodologies to compute the image background are proposed, and some operators to enhance and normalize the contrast in grey-level images with poor lighting are introduced. The contrast operators are based on the logarithm function, in a manner similar to Weber's law; the use of the logarithm avoids abrupt changes in lighting. Two approximations to compute the background of the processed images are also proposed: the first consists of an analysis by blocks, whereas the second uses the opening by reconstruction, given the following properties: a) it passes through regional minima, and b) it merges components of the image without considerably modifying other structures.
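As an illustration of the two ideas just described, block analysis and a Weber's-law-style logarithmic operator, the following pure-Python sketch estimates a per-pixel background from non-overlapping blocks and then lifts dark pixels logarithmically so that the background level maps to itself. The max-of-block background estimate and the scaling constant are simplifying assumptions, not the exact operators proposed in this work.

```python
import math

def block_background(image, bsize):
    """Crude background estimate: the maximum grey level of the
    enclosing bsize x bsize block, assigned to every pixel in it."""
    h, w = len(image), len(image[0])
    bg = [[0] * w for _ in range(h)]
    for i0 in range(0, h, bsize):
        for j0 in range(0, w, bsize):
            rows = range(i0, min(i0 + bsize, h))
            cols = range(j0, min(j0 + bsize, w))
            m = max(image[i][j] for i in rows for j in cols)
            for i in rows:
                for j in cols:
                    bg[i][j] = m
    return bg

def weber_enhance(image, bg):
    """Logarithmic lift scaled so the local background maps to itself:
    darker pixels gain proportionally more, avoiding abrupt jumps."""
    out = []
    for row_p, row_b in zip(image, bg):
        new = []
        for p, b in zip(row_p, row_b):
            k = b / math.log(1 + b) if b > 0 else 1.0
            new.append(min(255, round(k * math.log(1 + p))))
        out.append(new)
    return new and out or out
```

With this choice of `k`, a pixel already at the background level is left unchanged, while a pixel at grey level 10 under a background of 100 is lifted to roughly 52, which captures the intended normalizing behavior.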

Finally, this work is organized as follows. The section on morphological transformations and Weber's law presents a brief background on Weber's law and some morphological transformations. The image-background section approximates the background by means of block analysis, in conjunction with transformations that enhance images with poor lighting, and introduces the multi-background notion by means of the opening by reconstruction. A comparison among several techniques to improve contrast in images is then shown. Finally, conclusions are presented.

1.2 Edge detection


Edge detection is a fundamental tool in image processing, machine vision and computer
vision, particularly in the areas of feature detection and feature extraction, which aim at
identifying points in a digital image at which the image brightness changes sharply or, more
formally, has discontinuities. The same problem of finding discontinuities in a one-dimensional signal is known as step detection.

The edges extracted from a two-dimensional image of a three-dimensional scene can be classified as either viewpoint dependent or viewpoint independent. A viewpoint-independent edge typically reflects inherent properties of the three-dimensional objects, such as surface markings and surface shape. A viewpoint-dependent edge may change as the viewpoint changes, and typically reflects the geometry of the scene, such as objects occluding one another. A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line (as extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background; for a line, there is therefore usually one edge on each side.
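The notion of an edge as a sharp brightness change can be made concrete with a minimal gradient-magnitude detector. The pure-Python sketch below uses a central-difference gradient as a simplified stand-in for operators such as Sobel or Canny; the threshold value is an illustrative assumption.

```python
def gradient_edges(image, threshold):
    """Mark pixels where the brightness gradient magnitude exceeds
    threshold. Border pixels are left unmarked for simplicity."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Central differences in the horizontal and vertical directions.
            gx = image[i][j + 1] - image[i][j - 1]
            gy = image[i + 1][j] - image[i - 1][j]
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[i][j] = 1
    return edges
```

Applied to a vertical step from grey level 0 to 255, this marks the two columns straddling the boundary, matching the observation above that a step edge is a single transition while a thin line would produce an edge on each side.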

1.3 Biometric Technology

A biometric system provides automatic recognition of an individual based on some unique feature or characteristic possessed by that individual. As technology and services have developed in the modern world, human activities and transactions in which rapid and reliable personal identification is required have proliferated. Examples include passport control, computer login control, bank automatic teller machines and other transaction authorization, premises access control, and security systems generally. All such identification efforts share the common goals of speed, reliability and automation.

The developments in science and technology have made it possible to use biometrics in applications where it is required to establish or confirm the identity of individuals.

Applications such as passenger control in airports, access control in restricted areas, border control, database access and financial services are some of the examples where biometric technology has been applied for more reliable identification and verification. In recent years, biometric identity cards and passports based on iris, fingerprint and face recognition technologies have been issued in some countries to improve the border-control process and simplify passenger travel at airports. In the UK and Australia, biometric passports based on face recognition are being issued.

The technology is designed to automatically take a picture of the passenger and match it to the digitized image stored in the biometric passport. Recently, the US government has also been conducting a Registered Traveler Program, which uses a combination of fingerprint and iris recognition technology to speed up the security check process at some airports. In the field of financial services, biometric technology has shown great potential in offering more comfort to customers while increasing their security. As an example, banking services and payments based on biometrics promise to be much safer, faster and easier than the existing methods based on credit and debit cards. Proposed forms of payment, such as pay-and-touch schemes based on fingerprints, or smart cards with stored iris information, are examples of such applications. Although there are still some concerns about using biometrics in mass consumer applications due to information-protection issues, it is believed that the technology will find its way into many different applications. Moreover, access-control applications such as database access and computer login also benefit from the newly offered technologies. Compared to passwords, biometric technologies offer more secure and comfortable accessibility and avoid problems such as forgotten or hacked passwords. Overall, the future of biometric technology is believed to be open to more investment, based on the new services it has to offer to society.

Biometric identifiers such as signatures, photographs, fingerprints, voiceprints, DNA and retinal blood-vessel patterns all have significant drawbacks. Face recognition: changes with age, expression, viewing angle and illumination. Fingerprint recognition: fingerprints or handprints require physical contact, and they can also be counterfeited or marred by artefacts. Speech recognition: electronically recorded voiceprints are susceptible to changes in a person's voice, and they can be counterfeited. Signature recognition: signatures and photographs are cheap and easy to obtain and store, but they are impossible to identify automatically with assurance and are easily forged.

1.4 Motivation
The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. It can be shown that, under rather general assumptions for an image-formation model, discontinuities in image brightness are likely to correspond to:

 discontinuities in depth,
 discontinuities in surface orientation,
 changes in material properties, and
 variations in scene illumination.

Edges extracted from non-trivial images are often hampered by fragmentation, meaning that the edge curves are not connected, by missing edge segments, and by false edges that do not correspond to interesting phenomena in the image, thus complicating the subsequent task of interpreting the image data. An inherent observation about noise in normalized iris images involves two factors: first, the pupil and eyelash regions have lower intensity values, while the reflection and eyelid regions have higher intensity values. If this information could be well infused into a model to detect noise, iris segmentation would be successful. Motivated by this idea, this work proposes a new noise-removing segmentation method for the recognition system.
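The intensity observation above can be sketched as a simple mask: very dark pixels (pupil, eyelashes) and very bright pixels (reflections, eyelid glare) are flagged as noise. The threshold values below are illustrative assumptions, not parameters from this work, which infuses edge and region information rather than thresholding alone.

```python
def noise_mask(image, low=30, high=220):
    """Return 1 where a pixel is likely noise (too dark or too bright),
    0 where it is plausible iris texture."""
    return [[1 if (p < low or p > high) else 0 for p in row]
            for row in image]
```

A real segmentation stage would refine such a mask with region and edge cues, since mid-intensity eyelid skin would pass this crude test.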

Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision. In recent years, however, substantial (and successful) research has also been carried out on computer vision methods that do not explicitly rely on edge detection as a pre-processing step.

Iris segmentation is very important for an iris recognition system. If the iris regions are not correctly segmented, four kinds of noise may remain in the segmented regions, namely eyelashes, eyelids, reflections and the pupil, which result in poor recognition performance. In the ideal case, applying an edge detector to an image leads to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, and curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may filter out information regarded as less relevant, while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may therefore be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity.

1.5 Objective

The objective of this work is to develop a new iris recognition system based on edge detection. The development tool used will be MATLAB, and the emphasis will be on the software for performing recognition, not on hardware for capturing an iris image. The specific goals are: to study the existing iris recognition systems along with their merits and demerits; to present an improved noise-removal approach based on the infusion of edge and region information; to determine the uniqueness of iris patterns in terms of the Hamming distance distribution by comparing templates generated from different eyes; and to find the recognition performance rate and computational complexity.
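The uniqueness study mentioned above compares binary templates by their normalized Hamming distance, i.e. the fraction of bits that disagree. A minimal sketch follows; the list-of-bits template format is an assumption for illustration.

```python
def hamming_distance(t1, t2):
    """Fraction of differing bits between two equal-length bit lists.
    Templates from different eyes cluster near 0.5 (random agreement),
    while same-eye templates score much lower."""
    assert len(t1) == len(t2), "templates must be the same length"
    return sum(a != b for a, b in zip(t1, t2)) / len(t1)
```

In a full system, a decision threshold on this distance separates genuine matches from impostors; its value must be estimated from the measured distance distributions.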
1.6 ORGANIZATION OF DOCUMENTATION

The rest of the thesis is organized as follows. The basic concepts of iris recognition and a survey of some well-known iris recognition methods are presented in Chapter 2.

As the first stage, iris segmentation is very important for an iris recognition system. The proposed segmentation, normalization, and infusion of edge and region information are discussed in Chapter 3.

Result analysis is presented in Chapter 4.


The conclusions and future directions of the present work are discussed in Chapter 5.
