
CHAPTER 1

INTRODUCTION TO DIGITAL IMAGE

1.1 Representation of a digital image


The information contained in images can be represented in different ways. Among them
the most important is the spatial representation. A monochrome image is a 2-dimensional light
intensity function, f(m, n), where m and n are spatial coordinates and the value of f at (m, n) is
proportional to the brightness of the image at that point.
A digital image is an image that has been discretized both in spatial coordinates and in
brightness. It is represented by a 2-dimensional integer array. The digitized brightness value is
called the grey level value. Thus, a digital image can be represented by the following matrix:

    f(0,0)      f(0,1)     ...   f(0,N-1)
    f(1,0)      f(1,1)     ...   f(1,N-1)
      ...         ...      ...     ...
    f(M-1,0)    f(M-1,1)   ...   f(M-1,N-1)

Each element of the array is called a pixel or a pel derived from the term "picture element". Each
pixel represents not just a point in the image but rather a rectangular region, the elementary cell
of the grid. The position of the pixel is given in the common notation for matrices. The first
index, m, denotes the position of the row, the second, n, the position of the column. If the digital
image contains M × N pixels, i.e., is represented by an M × N matrix, the index n runs from 0 to
N-1 and the index m from 0 to M-1. M gives the number of rows, N the number of columns.
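As a small illustrative sketch (using Python with NumPy; the pixel values here are invented for the example), such an M × N image is simply a 2-D integer array indexed by (m, n):

```python
import numpy as np

# A hypothetical 3x4 image: M = 3 rows, N = 4 columns, 8-bit grey levels.
img = np.array([[ 12,  50, 200, 255],
                [  0, 128,  64,  32],
                [255, 255,   0,  90]], dtype=np.uint8)

M, N = img.shape   # M rows, N columns
print(M, N)        # 3 4
print(img[0, 3])   # f(m=0, n=3): the pixel in row 0, column 3 -> 255
```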

Figure 1.1: Representation of digital images by arrays of discrete points on a rectangular grid

1.2 Some Basic Concept of Digital Image

1.2.1 Pixels and Bitmaps


Digital images are composed of pixels (short for picture elements). Each pixel represents the
color (or gray level for black and white photos) at a single point in the image, so a pixel is like a
tiny dot of a particular color. By measuring the color of an image at a large number of points, we
can create a digital approximation of the image from which a copy of the original can be
reconstructed. Pixels are a little like grain particles in a conventional photographic image, but
they are arranged in a regular pattern of rows and columns and store information somewhat differently.
A digital image is a rectangular array of pixels sometimes called a bitmap.

How many pixels are sufficient? There is no general answer to this question. For visual
observation of a digital image, the pixel size should be smaller than the spatial resolution of the
visual system from a nominal observer distance. For a given task the pixel size should be smaller
than the finest scales of the objects that we want to study.

Figure 1.2: The figure shows the same image with (a) 3 × 4, (b) 12 × 16, (c) 48 × 64, and (d) 192 × 256
pixels. If the image contains sufficient pixels, it appears to be continuous.

1.2.2 Resolution
Image resolution is the number of pixels packed into a unit of measure (e.g., an inch), and it
determines the quality of the image. Image resolution most commonly refers to the number
of pixels per inch, called “dots per inch,” or dpi. In most cases, higher resolution (higher
dpi) results in better image quality. Remember, however, that final image quality is limited by
the quality of your image source. While image resolution can always be reduced, increasing
resolution will not improve image quality.

1.2.3 Bit-depth
Bit-depth refers to the number of bits assigned to a single pixel and determines the number of
colors from which a particular pixel value can be selected. A bit is the lowest level of
electronic value in a digital image; in combination with other bits it defines a pixel’s color value.
Each bit can have one of two values: 1 or 0. For example, a one-bit image can assign only one of
two values to a single pixel: 1 or 0 (black or white). An 8-bit grayscale image can assign one of
256 (2^8) colors to a single pixel. A 24-bit RGB image (8 bits each for the red, green and blue color
channels) can assign one of 16.8 million (2^24) colors to a single pixel.
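The relationship between bit-depth and the number of representable values is just a power of two, as a quick sketch shows:

```python
# Number of distinct values representable with a given bit-depth.
def num_values(bit_depth):
    return 2 ** bit_depth

print(num_values(1))    # 2 -> binary image: black or white
print(num_values(8))    # 256 -> 8-bit grayscale
print(num_values(24))   # 16777216 -> 24-bit RGB (~16.8 million colors)
```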

1.3 Types of Digital Image


There are three basic types of digital image:
1) Binary
2) Grayscale
3) True color or RGB.

1) Binary: Each pixel is just black or white. Since there are only two possible values for each
pixel, we only need one bit per pixel. Such images can therefore be very efficient in terms of
storage. Images for which a binary representation may be suitable include text (printed or
handwritten), fingerprints, or architectural plans.

Figure 1.3: A binary image

2) Grayscale: Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range
means that each pixel can be represented by eight bits, or exactly one byte. This is a very natural
range for image file handling. Other grayscale ranges are used, but generally they are a power of
2. Such images arise in medicine (X-rays) and images of printed works, and indeed 256 different
grey levels are sufficient for the recognition of most natural objects.

Figure 1.4: A grayscale image

3) True color or RGB: Here each pixel has a particular color, described by the
amount of red, green and blue in it. If each of these components has a range 0-255, this gives a
total of 256^3 = 16,777,216 different possible colors in the image. This is enough colors for any
image. Since the total number of bits required for each pixel is 24, such images are also called
24-bit color images. Such an image may be considered as consisting of a “stack” of three
matrices, representing the red, green and blue values for each pixel. This means that for every
pixel there correspond three values.
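The “stack of three matrices” view can be sketched directly (NumPy, with invented pixel values): an RGB image is an M × N × 3 array, and each channel is one M × N matrix.

```python
import numpy as np

# A hypothetical 2x2 true-color image: one 8-bit value per pixel for
# each of the red, green and blue channels -> shape (M, N, 3).
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

red   = rgb[..., 0]   # the red matrix
green = rgb[..., 1]   # the green matrix
blue  = rgb[..., 2]   # the blue matrix

print(rgb.shape)      # (2, 2, 3): three values per pixel
print(rgb[1, 1])      # [255 255 255] -> a white pixel
```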

Figure 1.5: A true color image

1.4 Image File Type and Size

1.4.1 Image File Type


There are lots of file types that are used to encode digital images such as JPG, GIF, TIFF, PNG,
BMP. One reason for so many file types is the need for compression. Image files can be quite
large, and larger files mean more disk usage and slower download. Compression is a term used
to describe ways of cutting the size of the file.
There are two types of compression: "lossy" and "lossless". A lossless compressed file discards
no information. It looks for more efficient ways to represent an image, while making no
compromises in accuracy. In contrast, lossy compressed files accept some degradation in the
image in order to achieve smaller file size. Another reason for the many file types is that images
differ in the number of colors they contain. If an image has few colors, a file type can be
designed to exploit this as a way of reducing file size.

The 5 most common digital image file types are as follows:

1) TIFF (Tagged Image File Format): It is a very flexible format that can be lossless or lossy.
The details of the image storage algorithm are included as part of the file. In practice, TIFF is
used almost exclusively as a lossless image storage format that uses no compression at all. Most
graphics programs that use TIFF do not compress. Consequently, file sizes are quite big.

2) PNG (Portable Network Graphics): It is also a lossless storage format. However, in contrast
with common TIFF usage, it looks for patterns in the image that it can use to compress file size.
The compression is exactly reversible, so the image is recovered exactly.

3) GIF (Graphics Interchange Format): It creates a table of up to 256 colors from a pool of 16
million. If the image has fewer than 256 colors, GIF can render the image exactly. When the
image contains many colors, software that creates the GIF uses any of several algorithms to
approximate the colors in the image with the limited palette of 256 colors available. Better
algorithms search the image to find an optimum set of 256 colors.

4) JPG (Joint Photographic Experts Group): It is optimized for photographs and similar
continuous-tone images that contain many colors. It can achieve astounding compression ratios
while maintaining very high image quality. GIF compression is unkind to such images. JPG
works by analyzing images and discarding kinds of information that the eye is least likely to
notice. It stores information as 24-bit color.

5) Camera RAW: It is a lossless compressed file format that is proprietary for each digital
camera manufacturer and model. A camera RAW file contains the 'raw' data from the camera's
imaging sensor. Some image editing programs have their own version of RAW too. However,
camera RAW is the most common type of RAW file. The advantage of camera RAW is that it
contains the full range of color information from the sensor. This means the RAW file contains
12 to 14 bits of color information for each pixel.

1.4.2 Image File Size


File size refers to the amount of memory needed to store a given image document. File size is
directly proportional to the number of pixels in an image; the more pixels, the greater the file
size. Since resolution is measured per linear inch, the number of pixels in a given area grows
with the square of the resolution, and so does file size. For instance, the file size of a 300 dpi
image is 9 times that of a 100 dpi image. File size also depends on the kind of pixels that
comprise the image; e.g., since a full-color pixel needs more memory than a black & white
pixel, a 100 dpi color image will consume more memory than a 100 dpi grayscale image. A
good rule of thumb is that color images are approximately three times larger than grayscale
images. The file format of an image document can also affect its file size.
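These proportionalities can be checked with a small sketch (the page size and dpi values are arbitrary, and file headers and compression are ignored):

```python
# Rough uncompressed size estimate: pixel count x bytes per pixel.
def file_size_bytes(width_in, height_in, dpi, bytes_per_pixel):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bytes_per_pixel

gray_100 = file_size_bytes(4, 6, 100, 1)   # 4x6 inch grayscale at 100 dpi
gray_300 = file_size_bytes(4, 6, 300, 1)   # same page at 300 dpi
rgb_100  = file_size_bytes(4, 6, 100, 3)   # 100 dpi full color

print(gray_300 // gray_100)   # 9 -- size grows with the square of dpi
print(rgb_100 // gray_100)    # 3 -- color is about three times grayscale
```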

CHAPTER 2

IMAGE ENHANCEMENT

2.1 Objective of Image Enhancement


Millions of pictures ranging from biomedical images to the images of natural surroundings and
activities around us enrich our daily visual experience. All these images create elegant perception
in our sensory organs. They also contain a lot of important information and convey specific
meanings in diverse domains of application. When such pictures are converted from one form to
another by processes such as imaging, scanning, or transmitting, the quality of the output image
may be inferior to that of the original input picture. There is thus a need to improve the quality of
such images, so that the output image is visually more pleasing to human observers from a
subjective point of view. To perform this task, it is important to increase the dynamic range of
the chosen features in the image, which is essentially the process of image enhancement.
Enhancement has another purpose as well, that is to undo the degradation effects which might
have been caused by the imaging system or the channel. The growing need to develop automated
systems for image interpretation necessitates that the picture to be interpreted be free from
noise and other aberrations. Thus it is important to perform preprocessing
operations on the image so that the resultant preprocessed image is better suited for machine
interpretation. Image enhancement thus has both a subjective and an objective role and may be
viewed as a set of techniques for improving the subjective quality of an image and also for
enhancing the accuracy rate in automated object detection and picture interpretation.
Enhancement refers to accentuation or sharpening of image features, such as contrast,
boundaries, edges, etc. The process of image enhancement, however, in no way increases the
information content of the image data. It increases the dynamic range of the chosen features with
the final aim of improving the image quality. Modeling of the degradation process, in general, is
not required for enhancement. However, knowledge of the degradation process may help in the
choice of the enhancement technique. The realm of image enhancement covers contrast and edge
enhancement, noise filtering, feature sharpening, and so on. These methods find applications in
visual information display, feature extraction, object recognition, and so on. These algorithms are
generally interactive, application dependent, and employ linear or nonlinear local or global
filters. Image enhancement techniques, such as contrast stretching, map each gray level into
another gray level using a predetermined transformation function. One example of it is histogram
equalization method, where the input gray levels are mapped so that the output gray level
distribution is uniform. This is a powerful method for the enhancement of low-contrast images.
Other enhancement techniques may perform local neighborhood operations as in convolution;
transform operations as in the discrete Fourier transform; and other operations as in pseudo-coloring
where a gray level image is mapped into a color image by assigning different colors to different
features.
An important issue in image enhancement is quantifying the criterion for enhancement. Many
image enhancement techniques are empirical and require interactive procedures to obtain
satisfactory results.

2.2 Mathematical Model for Image Enhancement


Image enhancement means transforming an image f into an image g using a transformation T.
The values of pixels in images f and g are denoted by r and s, respectively. The pixel values
r and s are related by the expression

s = T(r)    (2.1)

where T is a transformation that maps a pixel value r into a pixel value s. The results of this
transformation are mapped into the grey scale range, as we are dealing here only with grey scale
digital images.
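As one concrete, purely illustrative choice of T, the negative transformation s = (L-1) - r maps each pixel value independently:

```python
import numpy as np

L = 256  # number of grey levels

def T(r):
    """Negative transformation: s = (L-1) - r."""
    return (L - 1) - r

f = np.array([[0, 64], [128, 255]], dtype=np.uint8)   # input image f
g = T(f.astype(int)).astype(np.uint8)                 # output image g = T(f)
print(g)   # [[255 191]
           #  [127   0]]
```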

2.3 Classification of Image Enhancement Techniques


There exist many techniques that can enhance a digital image without spoiling it. The
enhancement methods can broadly be divided into the following two categories:

1. Spatial Domain Methods


2. Frequency Domain Methods

In spatial domain techniques, we deal directly with the image pixels. The pixel values are
manipulated to achieve the desired enhancement. In frequency domain methods, the image is first
transformed into the frequency domain; that is, the Fourier transform of the image is
computed first. All the enhancement operations are performed on the Fourier transform of the
image, and then the inverse Fourier transform is performed to get the resultant image. These
enhancement operations are performed in order to modify the image brightness, contrast or the
distribution of the grey levels. As a consequence, the pixel values (intensities) of the output image
will be modified according to the transformation function applied to the input values.
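The frequency-domain pipeline (transform, modify, inverse transform) can be sketched with NumPy's FFT; the crude ideal low-pass filter here is only a placeholder for a real enhancement operation, and the random image is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in image

F = np.fft.fftshift(np.fft.fft2(img))        # Fourier transform, centered
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
dist = np.hypot(y - rows / 2, x - cols / 2)  # distance from spectrum center
F[dist > 16] = 0                             # keep only low frequencies
result = np.fft.ifft2(np.fft.ifftshift(F)).real  # back to the spatial domain

print(result.shape)   # (64, 64)
```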

CHAPTER 3

HISTOGRAM EQUALIZATION

3.1 Image Histogram


An image histogram is a graphical representation of the brightness distribution in a digital image.
It plots the number of pixels for each grey level value. By looking at the histogram for a specific image a
viewer will be able to judge the entire brightness distribution at a glance.

Figure 3.1: A grayscale image and its histogram

The histogram gives primarily the global description of the image. For example, if the image
histogram is narrow, then it means that the image is poorly visible because the difference in gray
levels present in the image is generally low. Similarly a widely distributed histogram means that
almost all the gray levels are present in the image and thus the overall contrast and visibility
increases. The shape of the histogram of an image reveals important contrast information, which
can be used for image enhancement.

3.1.1 Histogram Function


The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function

h(r_k) = n_k,    k = 0, 1, …, L-1    (3.1.1)

where n_k is the number of pixels with grey level r_k and L is the total number of possible gray levels
in the image.

3.1.2 Normalised Histogram Function


The normalised histogram function is the histogram function divided by the total number of
pixels of the image:

p(r_k) = n_k / MN,    k = 0, 1, …, L-1    (3.1.2)

where M and N are the row and column dimensions of the image; consequently, n = MN is the total
number of pixels in the image.

The function p(r_k) represents the fraction of the total number of pixels with gray value r_k. It
gives a measure of how likely it is for a pixel to have a certain gray value; that is, it gives the
probability of occurrence of the intensity r_k. The sum of the normalised histogram function over the
range of all gray values is 1.
If we consider the gray values in the image as realizations of a random variable R, with some
probability density, the normalised histogram provides an approximation to this probability density.
In other words,

Pr(R = r_k) ≈ p(r_k),    k = 0, 1, …, L-1    (3.1.3)
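Both functions are easy to compute; a minimal sketch with NumPy (a random image stands in for real data):

```python
import numpy as np

L = 256
img = np.random.default_rng(1).integers(0, L, (100, 100))

h = np.bincount(img.ravel(), minlength=L)   # h(r_k) = n_k
p = h / img.size                            # p(r_k) = n_k / MN

print(h.sum())                       # 10000 -- the total number of pixels MN
print(abs(p.sum() - 1.0) < 1e-9)     # True -- normalised histogram sums to 1
```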


3.2 Different Types of Image Histogram
In a dark image, the components of the histogram are concentrated on the low (dark) side of the
gray scale. Similarly, the components of the histogram of the bright image are biased toward the
high side of the gray scale. An image with low contrast has a histogram that will be narrow and
will be centered toward the middle of the gray scale. For a monochrome image this implies a
dull, washed-out gray look. Finally, we see that the components of the histogram in the high-
contrast image cover a broad range of the gray scale and, further, that the distribution of pixels is
not too far from uniform, with very few vertical lines being much higher than the others.
Intuitively, it is reasonable to conclude that an image, whose pixels tend to occupy the entire
range of possible gray levels and, in addition, tend to be distributed uniformly, will have an
appearance of high contrast and will exhibit a large variety of gray tones. The net effect will be
an image that shows a great deal of gray-level detail and has high dynamic range.

Four basic image types: dark, light, low contrast, high contrast, and their corresponding
histograms are given below.

Figure 3.2: A dark image and its histogram

Figure 3.3: A bright image and its histogram

Figure 3.4: A low contrast image and its histogram


Figure 3.5: A high contrast image and its histogram

3.3 Histogram Equalization


Histogram equalization is a technique which consists of adjusting the gray scale of the image so
that the gray level histogram of the input image is mapped onto a uniform histogram. The basic
assumption used here is that the information conveyed by an image is related to the probability
of occurrence of gray levels in the form of histogram in the image. By uniformly distributing the
probability of occurrence of gray levels in the image, it becomes easier to perceive the
information content of the image.

3.3.1 Development of the Method


Consider for a moment continuous intensity values and let the variable r denote the intensities of
an image to be processed. As usual, we assume that r is in the range [0, L-1] with r = 0
representing black and r = L-1 representing white.
For any r satisfying these conditions, we focus attention on transformations (intensity mappings)
of the form

s = T(r),    0 ≤ r ≤ L-1    (3.3.1)

that produce a level s for every pixel value r in the original image. For reasons that will become
obvious shortly, we assume that the transformation function T(r) satisfies the following
conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ L-1; and

(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.

The requirement in condition (a) that T(r) be monotonically increasing guarantees that the order
of intensities is preserved in the output, thus preventing artifacts created by reversals of
intensity. Condition (b) guarantees that the range of output intensities is the same as that of the
input. The following figure gives an example of a transformation function that satisfies
these two conditions.

Figure 3.6: A function that satisfies condition (a) and (b).

The inverse transformation from s back to r is denoted by

r = T^{-1}(s),    0 ≤ s ≤ L-1    (3.3.2)

It can be shown that even if T(r) satisfies conditions (a) and (b), it is possible that the
corresponding inverse may fail to be single valued.

The gray levels in an image may be viewed as random variables in the interval [0, L-1]. One of the
most fundamental descriptors of a random variable is its probability density function (PDF). Let
p_r(r) and p_s(s) denote the probability density functions of the random variables r and s,
respectively, where the subscripts on p are used to indicate that p_r and p_s are different functions.
A basic result from elementary probability theory is that if p_r(r) and T(r) are known and T(r)
satisfies condition (a), then the probability density function p_s(s) of the transformed variable s
can be obtained using a rather simple formula:

p_s(s) = p_r(r) |dr/ds|    (3.3.3)
Thus, the probability density function of the transformed variable, s, is determined by the
gray-level PDF of the input image and by the chosen transformation function.

A transformation function of particular importance in image processing has the form

s = T(r) = (L-1) ∫_0^r p_r(w) dw    (3.3.4)

where w is a dummy variable of integration. The right side of Eq. (3.3.4) is recognized as the
cumulative distribution function (CDF) of the random variable r. Since probability density functions
are always positive, and recalling that the integral of a function is the area under the function, it
follows that this transformation function is single valued and monotonically increasing and,
therefore, satisfies condition (a). When the upper limit in this equation is r = L-1, the integral
evaluates to 1 (the area under a PDF curve is always 1), so the maximum value of s is L-1 and
condition (b) is also satisfied.

To find the p_s(s) corresponding to the transformation just discussed, we use Eq. (3.3.3).
We know from basic calculus (Leibniz’s rule) that the derivative of a definite integral with
respect to its upper limit is simply the integrand evaluated at that limit.
In other words,

ds/dr = dT(r)/dr
      = (L-1) d/dr [ ∫_0^r p_r(w) dw ]
      = (L-1) p_r(r)    (3.3.5)
Substituting this result for dr/ds into Eq. (3.3.3), and keeping in mind that all probability values
are positive, yields

p_s(s) = p_r(r) |dr/ds|
       = p_r(r) | 1 / ((L-1) p_r(r)) |
       = 1/(L-1),    0 ≤ s ≤ L-1    (3.3.6)

We recognize the form of p_s(s) given in Eq. (3.3.6) as a uniform probability density function.
Simply stated, we have demonstrated that performing the transformation given in Eq.
(3.3.4) yields a random variable s characterized by a uniform probability density function. It is
important to note from Eq. (3.3.4) that T(r) depends on p_r(r) but, as indicated by Eq. (3.3.6), the
resulting p_s(s) is always uniform, independent of the form of p_r(r).

For discrete values we deal with probabilities and summations instead of probability density
functions and integrals. The probability of occurrence of gray level r_k in an image is
approximated by

p_r(r_k) = n_k / MN,    k = 0, 1, …, L-1    (3.3.7)

where, as noted at the beginning of this chapter, MN is the total number of pixels in the image, n_k is
the number of pixels that have gray level r_k, and L is the total number of possible gray levels in
the image. The discrete version of the transformation function given in Eq. (3.3.4) is

s_k = T(r_k) = (L-1) Σ_{j=0}^{k} p_r(r_j)
             = (L-1)/MN Σ_{j=0}^{k} n_j,    k = 0, 1, …, L-1    (3.3.8)

Thus, a processed (output) image is obtained by mapping each pixel with level r_k in the input
image into a corresponding pixel with level s_k in the output image via Eq. (3.3.8). As indicated
earlier, a plot of p_r(r_k) versus r_k is called a histogram. The transformation (mapping) given in
Eq. (3.3.8) is called histogram equalization.
Unlike its continuous counterpart, it cannot be proved in general that this discrete transformation
will produce the discrete equivalent of a uniform probability density function, which would be a
uniform histogram. However, as will be seen shortly, use of Eq. (3.3.8) does have the general
tendency of spreading the histogram of the input image so that the levels of the histogram
equalized image will span a fuller range of the gray scale.
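Equation (3.3.8) translates almost directly into code. A minimal sketch (NumPy, grayscale input assumed; the tiny test image is invented):

```python
import numpy as np

def histogram_equalize(img, L=256):
    """Map each level r_k to s_k = round((L-1) * cumulative p_r), Eq. (3.3.8)."""
    n_k = np.bincount(img.ravel(), minlength=L)   # histogram counts n_k
    p_r = n_k / img.size                          # p_r(r_k) = n_k / MN
    s = np.round((L - 1) * np.cumsum(p_r)).astype(np.uint8)
    return s[img]                                 # apply the mapping per pixel

img = np.array([[0, 0, 1], [1, 3, 3]], dtype=np.uint8)
out = histogram_equalize(img, L=4)
print(out)   # [[1 1 2]
             #  [2 3 3]]
```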

3.3.2 A Numerical Example of Histogram Equalization.

Figure 3.7: A histogram indicating poor contrast

Suppose a 4-bit grayscale image has the histogram shown in the figure, associated with the
following table of the numbers n_i of grey values:

Gray level i   0   1  2  3  4  5  6  7  8   9   10  11  12  13  14  15
n_i           15   0  0  0  0  0  0  0  0  70  110  45  80  40   0   0

With n = MN = 360 and L = 16, we would expect this image to be uniformly bright, with a few dark
dots on it. To equalize this histogram, we form running totals of the n_i and multiply each by
15/360 = 1/24.

Gray level i    n_i    Σ n_j    (1/24) Σ n_j    Rounded value
0                15      15         0.63              1
1                 0      15         0.63              1
2                 0      15         0.63              1
3                 0      15         0.63              1
4                 0      15         0.63              1
5                 0      15         0.63              1
6                 0      15         0.63              1
7                 0      15         0.63              1
8                 0      15         0.63              1
9                70      85         3.54              4
10              110     195         8.13              8
11               45     240        10                10
12               80     320        13.33             13
13               40     360        15                15
14                0     360        15                15
15                0     360        15                15

We now have the following transformation of grey values, obtained by reading off the first and last
columns in the above table:

Original gray level i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15


Final gray level j 1 1 1 1 1 1 1 1 1 4 8 10 13 15 15 15

Figure 3.8: The Histogram of figure 3.7 after equalization

The histogram of the values is shown in figure 3.8. This is far more spread out than the original
histogram, and so the resulting image should exhibit greater contrast.
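The worked example can be verified mechanically (running totals of the n_i, scaled by 15/360 and rounded):

```python
import numpy as np

# Counts n_i for the 16 grey levels of the 4-bit example (n = MN = 360).
n = np.array([15, 0, 0, 0, 0, 0, 0, 0, 0, 70, 110, 45, 80, 40, 0, 0])

running = np.cumsum(n)                 # running totals of the n_i
scaled = running * 15 / 360            # multiply by (L-1)/n = 15/360 = 1/24
mapped = np.round(scaled).astype(int)  # final (rounded) grey levels
print(mapped.tolist())
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 8, 10, 13, 15, 15, 15]
```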

3.3.3 Results of Histogram Equalization
We have the following dark image.

Figure 3.9: A dark image and its histogram

The result of histogram equalization on the above dark image is

Figure 3.10: Histogram equalized image and its histogram

CHAPTER 4

HISTOGRAM MATCHING
As indicated in the preceding chapter, histogram equalization automatically determines a
transformation function that seeks to produce an output image that has a uniform histogram.
When automatic enhancement is desired, this is a good approach because the results from this
technique are predictable and the method is simple to implement. We show in this chapter that
there are applications in which attempting to base enhancement on a uniform histogram is not the
best approach. In particular, it is useful sometimes to be able to specify the shape of the
histogram that we wish the processed image to have. This can be accomplished by specifying a
particular histogram shape, or by calculating the histogram of a target image. The method used to
generate a processed image that has a specified histogram is called histogram matching or
histogram specification.

4.1 Development of the Method


Let us return for a moment to continuous gray levels r and z (considered continuous random
variables), and let p_r(r) and p_z(z) denote their corresponding continuous probability density
functions. In this notation, r and z denote the gray levels of the input and output (processed)
images, respectively. We can estimate p_r(r) from the given input image, while p_z(z) is the
specified probability density function that we wish the output image to have.

Let s be a random variable with the property

s = T(r) = (L-1) ∫_0^r p_r(w) dw    (4.1.1)
-

where w is a dummy variable of integration. We recognize this expression as the continuous
version of histogram equalization given in Eq. (3.3.4). Suppose next that we define a random
variable z with the property

s = G(z) = (L-1) ∫_0^z p_z(t) dt    (4.1.2)
-

where t is a dummy variable of integration. It then follows from these two equations that
G(z) = T(r) and, therefore, that z must satisfy the condition

z = G^{-1}[T(r)] = G^{-1}(s)    (4.1.3)

The transformation T(r) can be obtained from Eq. (4.1.1) once p_r(r) has been estimated from the
input image. Similarly, the transformation function G(z) can be obtained using Eq. (4.1.2)
because p_z(z) is given.

Equations (4.1.1) and (4.1.2) show that an image whose intensity levels have a specified
probability density function can be obtained from a given image by using the following
procedure.

1) Obtain p_r(r) from the input image and use Eq. (4.1.1) to obtain the values of s.

2) Use the specified probability density function in Eq. (4.1.2) to obtain the
transformation function G(z).

3) Obtain the inverse transformation z = G^{-1}(s); because z is obtained from s, this process is
a mapping from s to z, the latter being the desired values.

4) Obtain the output image by first equalizing the input image using Eq. (4.1.1); the pixel
values in this image are the s values. For each pixel with value s in the equalized image,
perform the inverse mapping z = G^{-1}(s) to obtain the corresponding pixel value in the
output image. When all pixels have been thus processed, the probability density function
of the output image will be equal to the specified probability density function.

The following diagram shows the process of histogram matching

Figure 4.1: Process of histogram matching.

The discrete formulation of Eq. (4.1.1) is

s_k = T(r_k) = (L-1) Σ_{j=0}^{k} p_r(r_j)
             = (L-1)/MN Σ_{j=0}^{k} n_j,    k = 0, 1, …, L-1    (4.1.4)

where, as before, MN is the total number of pixels in the image, n_j is the number of pixels that
have intensity value r_j, and L is the total number of possible intensity levels in the image.
Similarly, given a specific value of s_k, the discrete formulation of Eq. (4.1.2) involves
computing the transformation function

G(z_q) = (L-1) Σ_{i=0}^{q} p_z(z_i)
       = (L-1)/MN Σ_{i=0}^{q} n_i,    q = 0, 1, …, L-1    (4.1.5)

for a value of q so that

G(z_q) = s_k    (4.1.6)

where p_z(z_i) is the ith value of the specified histogram and n_i the corresponding pixel count.
We find the desired value z_q by obtaining the inverse transformation:

z_q = G^{-1}(s_k)    (4.1.7)

In other words, this operation gives a value of z for each value of s; thus it performs a mapping
from s to z.

4.2 Algorithm for histogram specification

1) Compute the histogram p_r(r) of the given image and use it to find the histogram
equalization transformation in Eq. (4.1.4). Round the resulting values, s_k, to the integer
range [0, L-1].

2) Compute all values of the transformation function G using Eq. (4.1.5) for
q = 0, 1, …, L-1, where p_z(z_i) are the values of the specified histogram. Round the values of G
to integers in the range [0, L-1]. Store the values of G in a table.

3) For every value of s_k, k = 0, 1, …, L-1, use the stored values of G from step 2 to find the
corresponding value of z_q so that G(z_q) is closest to s_k, and store these mappings from s
to z. When more than one value of z_q satisfies the given s_k (i.e., the mapping is not
unique), choose the smallest value by convention.

4) Form the histogram-specified image by first histogram-equalizing the input image and
then mapping every equalized pixel value, s_k, of this image to the corresponding value
z_q in the histogram-specified image using the mappings found in step 3.
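The four steps above translate into a short sketch (NumPy; `target_hist` is the specified histogram given as pixel counts, and steps 1–4 are composed into one lookup from r to z):

```python
import numpy as np

def histogram_match(img, target_hist, L=256):
    # Step 1: histogram-equalization transform s_k of the input (Eq. 4.1.4).
    p_r = np.bincount(img.ravel(), minlength=L) / img.size
    s = np.round((L - 1) * np.cumsum(p_r)).astype(int)
    # Step 2: transform G(z_q) from the specified histogram (Eq. 4.1.5).
    p_z = np.asarray(target_hist, dtype=float) / np.sum(target_hist)
    G = np.round((L - 1) * np.cumsum(p_z)).astype(int)
    # Step 3: for each s_k, the smallest z_q with G(z_q) closest to s_k.
    # (np.argmin returns the first, i.e. smallest, index on ties.)
    z = np.array([np.argmin(np.abs(G - sk)) for sk in s])
    # Step 4: map every input level r_k straight to its z_q.
    return z[img]
```

Run on the 3-bit example of section 4.3, this reproduces the level mapping (3, 4, 5, 6, 6, 7, 7, 7) derived there.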

4.3 Numerical Example of Histogram matching


Consider a 3-bit image (L = 8) of size 64 × 64 pixels (MN = 4096) with the intensity distribution
shown in the following table, where the intensity levels are integers in the range [0, L-1] = [0, 7].

Gray level r_k    0     1    2    3    4    5    6    7
n_k             790  1023  850  656  329  245  122   81

It is desired to transform this histogram so that it will have the values specified in the following table:

Gray level z_q    0  1  2    3    4     5    6    7
n_q               0  0  0  614  819  1230  819  614

First we compute the scaled histogram-equalized values for the input image as follows:

Gray level r_k    n_k    Σ n_j    (7/4096) Σ n_j    Equalized gray level s_k
0                 790      790         1.35                  1
1                1023     1813         3.09                  3
2                 850     2663         4.55                  5
3                 656     3319         5.67                  6
4                 329     3648         6.23                  6
5                 245     3893         6.65                  7
6                 122     4015         6.86                  7
7                  81     4096         7                     7

Next we compute the values of the transformation function, G, using Eq. (4.1.5):

Gray level z_q    n_q    Σ n_i    (7/4096) Σ n_i    G(z_q)
0                   0        0         0                0
1                   0        0         0                0
2                   0        0         0                0
3                 614      614         1.05             1
4                 819     1433         2.45             2
5                1230     2663         4.55             5
6                 819     3482         5.95             6
7                 614     4096         7                7

In the third step of the procedure we find the smallest value of z_q so that the value
G(z_q) is closest to s_k. We do this for every value of s_k to create the required mapping from s to
z. For example, s_0 = 1, and we see that G(z_3) = 1, which is a perfect match in this case, so we
have the correspondence s_0 → z_3. That is, every pixel whose value is 1 in the histogram-
equalized image would map to a pixel valued 3 (in the corresponding location) in the histogram-
specified image. Continuing in this manner, we get the following mapping:

s_k    1  3  5  6  7
z_q    3  4  5  6  7

In the final step of the procedure we use the above mapping to map every pixel in the histogram-
equalized image into a corresponding pixel in the newly created histogram-specified image.

r_k    0  1  2  3  4  5  6  7
s_k    1  3  5  6  6  7  7  7
z_k    3  4  5  6  6  7  7  7

After that we get the following histogram for the output image:

r_k    0  1  2    3     4    5    6    7
n_k    0  0  0  790  1023  850  985  448

We see that the actual histogram of the output image does not exactly, but only approximately,
matches the specified histogram. This is because we are dealing with discrete histograms.

4.4 Result of Histogram Matching

A low contrast image and its histogram are given below

Figure 4.2 : Input image and its histogram.

Histogram equalized image of input image and its histogram are given below. We see histogram
equalized image is not very satisfactory for visual perception.

Figure 4.3 : Histogram equalized image and its histogram

Now we try to match the histogram of input image with following specified histogram

Figure 4.4 : Specified histogram for matching

Histogram specified image of input image and its histogram are given below. We see histogram
specified image is better than histogram equalized image.

Figure 4.5 : Histogram specified image and its histogram

CHAPTER 5

DISCUSSION AND CONCLUSION

Histogram equalization is useful in images with backgrounds and foregrounds that are both
bright or both dark. In particular, the method can lead to better views of bone structure in x-ray
images, and to better detail in photographs that are over or under-exposed. A key advantage of
the method is that it is a fairly straightforward technique and an invertible operator. So in theory,
if the histogram equalization function is known, then the original histogram can be recovered.
The calculation is not computationally intensive. A disadvantage of the method is that it is
indiscriminate. It may increase the contrast of background noise, while decreasing the usable
signal.

Histogram equalization often produces unrealistic effects in photographs; however, it is very
useful for scientific images like thermal, satellite or x-ray images, often the same class of images
that a user would apply false color to. Also, histogram equalization can produce undesirable effects
(like a visible image gradient) when applied to images with low color depth. For example, if
applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color
depth (number of unique shades of gray) of the image. Histogram equalization works best when
applied to images with much higher color depth than the palette size, like continuous data or 16-bit
gray-scale images.

Histogram matching is similar in concept to histogram equalization. The difference is that in
histogram matching we don't use an even distribution, but a distribution that is supplied to us
from another image or perhaps from a subset of the same image. In other words, we want to
manipulate the pixel distribution of one image to mirror the pixel distribution of another image.
The pixel distribution of an image is, of course, its histogram.
Histogram matching is a trial-and-error process. In general, there are no rules for
specifying histograms, and one must resort to analysis on a case-by-case basis for any given
enhancement task.
