
13-08-2019

Digital Image Processing and


GPU Programming
MCA V Sem
CBCS 51
July 2019

Chapter 1: Introduction

Digital Image Processing, 4th ed.


Gonzalez & Woods
www.ImageProcessingPlace.com


Books
• Rafael C. Gonzalez, Richard E. Woods, and Steven L. Eddins, "Digital Image
Processing Using MATLAB," 3rd ed., Gatesmark Publishing, 2015.
• Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing," 4th
ed., Pearson/Prentice Hall, 2018
(http://www.imageprocessingplace.com/).
• Kenneth R. Castleman, "Digital Image Processing," Pearson Education, 2010.
• James H. McClellan, Ronald W. Schafer, and Mark A. Yoder, "DSP FIRST: A
Multimedia Approach," Prentice Hall, 1998.
• Al Bovik, editor, "Handbook of Image and Video Processing," 2nd ed.,
Elsevier, Academic Press, 2005.
• Dan E. Dudgeon and Russell M. Mersereau, "Multidimensional Digital
Signal Processing," Prentice Hall, 1984.
• Jae S. Lim, "Two-Dimensional Signal and Image Processing," Prentice Hall,
1990.


Digital Image
A picture is worth more than ten thousand words.

• An image may be defined as a 2D function f(x, y), where x and y are spatial (plane)
coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the
intensity or gray level of the image at that point.

• When x, y, and the intensity values of f are all finite, discrete quantities, we call the
image a digital image.

• A digital image is composed of a finite number of elements, each of which has a
particular location and value; these elements are called picture elements, pels, or
pixels.

• The field of digital image processing refers to processing digital images by means of
a digital computer.
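The definition above can be made concrete with a tiny sketch (illustrative Python/NumPy, not part of the slides): a digital image is just a finite 2D grid of discrete intensity values.

```python
import numpy as np

# A 3x3 digital image: a finite grid of discrete gray levels (uint8, 0-255).
f = np.array([
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
], dtype=np.uint8)

x, y = 1, 2                # a pair of spatial coordinates
print(f[x, y])             # intensity (gray level) of the image at (x, y)
```

Each entry of `f` is one pixel: a particular location (x, y) with a particular value.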

Electromagnetic Spectrum (EM)

• Humans are limited to the visual band of the electromagnetic spectrum, but imaging
machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves.

• Thus, digital image processing encompasses a wide and varied field of applications.


Tasks of Image Processing


• There is no clear-cut boundary between fields such as image analysis, image
processing, and computer vision.

• Three levels of image processing operations are distinguished: low-, mid-, and high-level processes.

• A low-level process is characterized by the fact that both its inputs and its outputs are
images.
• Low-level tasks include noise reduction, contrast enhancement, image sharpening, etc.

• A mid-level process is characterized by the fact that its inputs are generally images but
its outputs are attributes extracted from those images.
• E.g., edges, contours, and the identity of individual objects; partitioning an image into regions or
objects (segmentation); classification/recognition of individual objects; and description of those
objects to reduce them to a form suitable for computer processing.

• High-level processing involves "making sense" of an ensemble of recognized objects,
as in image analysis, and performing the cognitive functions normally associated with
vision.

Origins of Image Processing


• The Bartlane cable picture transmission system, introduced in the early 1920s, reduced
the time required to transport a picture across the Atlantic from more than a week to
less than three hours. It was used to send images between London and New York for a
newspaper agency.
• Specialized printing equipment coded pictures for cable transmission and then
reconstructed them at the receiving end.
• In 1921, a new technique based on photographic reproduction from tapes perforated
at the telegraph receiving terminal replaced the specialized printing equipment.


Origins of Image Processing


• The early Bartlane systems were capable of coding images in 5 distinct gray levels;
this was increased to 15 gray levels in 1929 (e.g., Figure 1.3).

• During this period, the introduction of a system for developing a film plate via light
beams that were modulated by the coded picture tape improved the reproduction
process considerably.

• The examples just cited involve digital images, but they are not considered digital image
processing results, as computers were not involved in their creation.

Origins of Digital Image Processing


• The birth of digital image processing can be traced to the availability of digital computers
powerful enough to carry out meaningful image processing tasks in the early 1960s, and to the
onset of the space program during that period.

• In 1964, at the Jet Propulsion Laboratory, digital image processing work started on images of the
Moon transmitted by a US spacecraft.

• During the early 1970s, DIP began to be used in medical imaging, remote Earth resource
observations, and astronomy.


Electromagnetic Energy Spectrum


• The areas of application of DIP are so varied that some form of categorization is
needed to capture the breadth of this field.
• One criterion is to categorize according to the source of the images.
• The principal energy source for images in use today is the electromagnetic energy spectrum.
• Other sources of energy include acoustic, ultrasonic, and electron (in the form of electron
beams used in electron microscopy).
• The electromagnetic (EM) spectrum is the range of all types of EM radiation.
• Radiation is energy that travels and spreads out as it goes; the visible light that
comes from a lamp in your house and the radio waves that come from a radio
station are two types of electromagnetic radiation.
• The other types of EM radiation that make up the electromagnetic spectrum are
microwaves, infrared light, ultraviolet light, X-rays, and gamma rays.

EM Spectrum
• The electromagnetic spectrum runs from lowest energy/longest wavelength
to highest energy/shortest wavelength. (Credit: NASA's Imagine the Universe)
• Electromagnetic radiation can be expressed in terms of energy,
wavelength, or frequency.
• Frequency is measured in cycles per second, or hertz.
• Wavelength is measured in meters.
• Energy is measured in electron volts.
• Each of these three quantities for describing EM radiation is related to the
others in a precise mathematical way.


EM Spectrum
• EM waves can be conceptualized as a stream of massless particles, each travelling in a
wavelike pattern and moving at the speed of light.
• Each massless particle contains a certain amount (bundle) of energy.
• Each bundle of energy is called a photon.
• If spectral bands are grouped according to energy per photon, we obtain the spectrum
shown in Fig. 1.5, ranging from gamma rays (highest energy) at one end to radio waves
(lowest energy) at the other, with smooth transitions between bands.

Light and the Electromagnetic Spectrum: Revisited


• In 1666, Sir Isaac Newton discovered that when a beam of sunlight is passed through
a glass prism, the emerging beam of light is not white but consists instead of a
continuous spectrum of colors ranging from violet at one end to red at the other.

• As fig 2.10 shows, range of colors we perceive in visible light represents a very small
portion of electromagnetic spectrum.

• On one end of spectrum are radio waves with wavelengths billions of times longer
than those of visible light and on the other end are gamma rays with wavelengths
millions of times smaller than those of visible light.
• The EM spectrum can be expressed in terms of wavelength (λ) measured in meters,
frequency (ν) measured in hertz, or energy, with wavelength and frequency related by:

λ = c / ν

• Here c is the speed of light, approximately 2.998 × 10⁸ m/s.
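As a quick numerical check of λ = c/ν (an illustrative sketch, not from the slides):

```python
c = 2.998e8                  # speed of light, m/s

def wavelength(nu_hz):
    """Wavelength in meters for a frequency in hertz, via lambda = c / nu."""
    return c / nu_hz

# Green light at ~5.5e14 Hz has a wavelength of roughly 545 nm,
# squarely inside the visible band.
lam = wavelength(5.5e14)
print(lam)                   # ~5.45e-7 m, i.e. ~545 nm
```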


EM Spectrum

Green objects reflect light with λ in the range 500–570 nm while absorbing most of the
energy at other wavelengths.

Light and the Electromagnetic Spectrum


• Light is a particular type of electromagnetic radiation that can be sensed by the human eye.
• The energy of the various components of the EM spectrum is given by the expression
E = hν (in electron volts), where h is Planck's constant.
• The wavelength (λ) is measured in meters, with common units
• microns (μm = 10⁻⁶ m) and
• nanometers (nm = 10⁻⁹ m);
• frequency (ν) is measured in hertz, with 1 hertz = 1 cycle of a sinusoidal wave per second.
• The colors that humans perceive in an object are determined by the nature of the light
reflected from the object.
• An object that reflects light relatively balanced in all visible wavelengths appears white to
the observer.
• An object that favors reflectance in a limited range of visible wavelengths exhibits some
shade of color.
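A quick sketch of E = hν for the visible band (illustrative; the constant values are standard physical constants, not from the slides):

```python
h = 6.626e-34        # Planck's constant, J*s
eV = 1.602e-19       # joules per electron volt

def photon_energy_ev(nu_hz):
    """Photon energy in electron volts for a frequency in hertz (E = h*nu)."""
    return h * nu_hz / eV

# Green light (~5.5e14 Hz) carries about 2.3 eV per photon.
E = photon_energy_ev(5.5e14)
print(E)
```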


Achromatic and chromatic light


• There are two kinds of images: color and gray-scale.
• Light that is void of color is called achromatic or monochromatic light.
• The term gray scale is commonly used to denote monochromatic intensity, and the two
terms are used interchangeably.
• The intensity of monochromatic light is perceived to vary from black through grays and
finally to white.
• Monochromatic images are frequently referred to as gray-scale images.
• Chromatic (color) light spans the EM spectrum from approximately 0.43 to 0.79 μm.
• Three basic quantities used to describe the quality of a chromatic light source are radiance,
luminance, and brightness.
• Radiance is the total amount of energy that flows from the light source, measured in watts (W).
• Luminance, measured in lumens (lm), gives the amount of energy an observer perceives from a
light source. E.g., light emitted from a source operating in the far-infrared region of the spectrum
could have significant energy (radiance), but an observer would hardly perceive it; its luminance
would be almost zero.
• Brightness is a subjective descriptor of light perception that is practically impossible to measure.
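The relationship between color and gray-scale images can be sketched as follows (illustrative Python; the channel weights are the standard ITU-R BT.601 luma coefficients, an assumption not stated in the slides):

```python
import numpy as np

# A 1x2 RGB image: one pure-red pixel and one pure-green pixel.
rgb = np.array([[[255, 0, 0],
                 [0, 255, 0]]], dtype=np.float64)

# A weighted sum of the color channels gives a monochromatic (gray-scale)
# intensity; green contributes most because the eye is most sensitive to it.
weights = np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 luma weights
gray = rgb @ weights
print(gray)         # the green pixel maps to a brighter gray than the red one
```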

Application areas of Digital Image Processing


• Medicine: screening of X-rays, blood samples, etc.
• Space programs: pollution patterns, rain patterns, environmental assessment, and
weather prediction from aerial and satellite imagery
• Astronomy: solar observatories
• Biology: gene expression data, NGS data, etc.
• Nuclear medicine
• Law enforcement
• Biometrics: fingerprints, retina scans, etc.
• Industrial machine vision: automated product assembly and inspection
• Defense: military reconnaissance, etc.


Gamma-Rays Imaging
• Major uses of imaging based
on gamma rays include nuclear
medicine and astronomical
observation.
• In nuclear medicine, the
approach is to inject a patient
with a radioactive isotope
that emits gamma rays as it
decays.
• Images are produced from the
emissions collected by gamma
ray detectors.

X-Ray Imaging
• Different uses of X-rays for imaging include:
• Medical diagnosis
• Industrial imaging
• Angiography
• Computerized Axial Tomography (CAT)


Imaging in Ultraviolet Band

• Different uses of UV rays for imaging include:
• Lithography
• Industrial inspection
• Microscopy
• Lasers
• Biological imaging
• Astronomical observations

Imaging in Visible and Infrared Bands


• Imaging in the visual band outweighs by far all
others in terms of breadth of application.
• The infrared band is often used in conjunction
with visual imaging.
• Objects radiate heat in the infrared band, which
makes it useful in imaging applications that rely
on heat signatures.


Imaging in Visible and Infrared Bands


• Another major area of imaging in the visual and infrared bands is remote sensing,
which usually includes several bands, as shown in Table 1.1.
• The primary function of LANDSAT is to obtain and transmit images of the Earth from
space for purposes of monitoring environmental conditions on the planet.

Imaging in Visible and Infrared Bands


Synthetic Aperture Radar – Systems and Signal Processing

Synthetic Aperture Radar (SAR): Diverse Applications
• Cartography – DEM, DTM
• Geology – Geological Mapping
• Seismology – Co-seismic displacement field
• Volcanology – Prediction of volcano eruption
• Forestry – Forest classification, deforestation monitoring
• Soil Science – Soil moisture
• Glaciology – Glacier motion
• Oceanography – Ocean wave, wind, circulation, bathymetry
• Agriculture – Crop monitoring
• Hydrology – Wetland assessment
• Environment – Oil spill, hazard monitoring
• Archaeology – Sub-surface mapping

Steps in DIP
• Image acquisition may be as simple as being given an image that is already in digital
form. It may also involve pre-processing such as scaling.
• Image enhancement is the process of manipulating an image so that the result is
more suitable than the original for a specific application.
• Enhancement techniques are problem-oriented, and there is no general theory
of enhancement.
• E.g., a method that is quite useful for enhancing X-ray images might not be the
best approach for enhancing satellite images taken in the infrared band of the EM spectrum.
• Image restoration also deals with improving the appearance of an image.
• Unlike enhancement, which is subjective, image restoration is objective, as
restoration techniques are based on mathematical or probabilistic models of
image degradation.
• Color image processing has gained significant importance because of the Internet.
• Wavelets are the foundation for representing images at various degrees of resolution.
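A minimal sketch of one enhancement technique, contrast stretching, which linearly rescales intensities to the full [0, 255] range (illustrative Python, not from the slides):

```python
import numpy as np

def contrast_stretch(img):
    """Linearly map the [min, max] intensities of img to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# A low-contrast image occupying only the [50, 200] intensity range:
img = np.array([[ 50, 100],
                [150, 200]], dtype=np.float64)
print(contrast_stretch(img))   # [[  0  85] [170 255]]
```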


Steps in DIP
• Image compression deals with techniques for reducing the storage required to save an
image or the bandwidth required to transmit it.
• E.g., images stored in file formats such as JPEG (Joint Photographic Experts Group), which is an
image compression standard.
• Morphological processing deals with tools for extracting image components that
are useful in the representation and description of shape.
• With this step begins a transition from processes that output images to processes that output
image attributes.
• Segmentation procedures partition an image into its constituent parts or objects.
• Autonomous segmentation is one of the most difficult tasks in DIP.
• A rugged segmentation procedure brings the process a long way toward the successful solution
of imaging problems that require objects to be identified individually.
• Weak or erratic segmentation algorithms almost always guarantee eventual failure; i.e., the
more accurate the segmentation, the more likely recognition is to succeed.
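The simplest segmentation procedure, global thresholding, illustrates the idea of partitioning an image into object and background (illustrative Python, not from the slides):

```python
import numpy as np

def threshold_segment(img, t):
    """Label each pixel 1 (object) if its intensity exceeds t, else 0 (background)."""
    return (img > t).astype(np.uint8)

# Bright objects on a dark background:
img = np.array([[ 10, 200],
                [ 30, 220]])
mask = threshold_segment(img, 128)
print(mask)        # [[0 1] [0 1]]
```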

Steps in DIP
• Representation and description follows the output of a segmentation stage, which
usually is raw pixel data constituting either the boundary of a region (i.e., the set of
pixels separating one image region from another) or all the points in the region itself.
• In either case, converting the data to a form suitable for computer processing is
necessary.
• Description, also called feature selection, deals with extracting attributes that result
in some quantitative information of interest or are basic for differentiating one
class of objects from another.
• The following decisions must be made for representation:
• Whether the data should be represented as a boundary or as a complete region.
• Boundary representation is appropriate when the focus is on external shape characteristics, such
as corners and inflections.
• Regional representation is appropriate when the focus is on internal properties, such as texture
or skeletal shape.
• In some applications, both may be needed.


Steps in DIP
• Recognition is the process that assigns a label to an object based on its descriptors.
• Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge base.
• This knowledge base may contain, for example:
• Details of the regions of an image where the information of interest is known to be located.
• An interrelated list of all major possible defects in a materials inspection problem.
• An image database containing high-resolution satellite images of a region, in connection with a
change-detection application.
• It guides the operation of each processing module.
• It controls the interaction between modules.

Semantic Segmentation


Fundamental steps in DIP

Basic
Components


Components of a DIP system


• Large-scale specialized image processing systems are still being sold for massive imaging
applications, such as the processing of satellite images.
• But the trend continues toward miniaturization and the blending of general-purpose small
computers with specialized image processing hardware.
• Fig. 1.24 shows the basic components of a typical general-purpose system used for DIP.
• Two elements are required to acquire digital images:
• A physical device that is sensitive to the energy radiated by the object we wish to image.
• A digitizer, a device for converting the output of the physical sensing device into digital form.
• E.g., in a digital video camera, sensors produce electrical outputs proportional to light intensity,
and a digitizer converts these outputs to digital data.
• Specialized image processing h/w usually consists of the digitizer plus hardware that performs
other primitive operations, such as an ALU operating in parallel on entire images.
• S/w for DIP consists of specialized modules that perform specific tasks.
• Image displays in use today are mainly color flat-screen monitors.

Graphics Processing Units (GPUs)


• Many desktop computers and laptops now come with a fairly powerful GPU.

• Initially, GPUs were mostly used to power computations for graphics applications, but it
soon became clear that they are just as useful for any kind of numerical computing.

• GPUs are made of a large number of processing units which by themselves aren't very
powerful, but become powerful when used in parallel.

• For any kind of processing that is parallelizable, the GPU is a great fit.

• A lot of image processing algorithms are data-parallel, meaning
the same task/computation needs to be performed on many elements of the data.
• Many image processing algorithms either operate on pixels independently or rely only
on a neighborhood around each pixel (as in image filtering).

• To interact with the GPU from MATLAB, you need the Parallel Computing
Toolbox.
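Data parallelism in image processing can be sketched as follows (illustrative Python/NumPy; on a GPU, each pixel's computation would typically be handled by a separate thread):

```python
import numpy as np

img = np.array([[  0, 100],
                [200, 255]], dtype=np.uint8)

# A pixel-independent operation: the image negative. The same computation
# is applied to every pixel, and no pixel depends on any other pixel:
# exactly the data-parallel pattern that maps well onto a GPU.
negative = 255 - img
print(negative)    # [[255 155] [ 55   0]]
```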


GPU (Graphics Processing Unit) based DIP system


• We will use an open-source GPU code library for speeding up the MATLAB Image Processing
Toolbox (IPT).
• To perform an image processing operation on a GPU, follow these steps:
• Move the data from the CPU to the GPU, by creating an object of type gpuArray using the
gpuArray function.
• Perform the image processing operation on the GPU.
• Move the data back from the GPU onto the CPU.
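The three-step workflow above can be mimicked in a plain-Python sketch; here NumPy copies stand in for the device transfers (gpuArray/gather in MATLAB), since no real GPU is involved. This is purely illustrative, not the toolbox API:

```python
import numpy as np

def to_device(a):          # stands in for MATLAB's gpuArray()
    return a.copy()

def to_host(a):            # stands in for MATLAB's gather()
    return a.copy()

img = np.array([[0, 128],
                [64, 255]], dtype=np.uint8)

d_img = to_device(img)     # 1. move the data from the CPU to the "GPU"
d_out = 255 - d_img        # 2. perform the operation on the device array
out = to_host(d_out)       # 3. move the result back onto the CPU
print(out)
```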
• The library contains a dozen representative functions from the IPT; based on their inherent
characteristics, the functions are grouped into four categories: data-independent, data-sharing,
algorithm-dependent, and data-dependent.
• To run image processing code on a graphics processing unit (GPU), you must have the Parallel
Computing Toolbox™ software with support for CUDA-enabled NVIDIA GPUs (compute
capability 1.3 or later).
• GPUs show drastic speedups for the functions in the data-independent and data-sharing
categories by leveraging hardware support judiciously, and moderate speedups for those in the
algorithm-dependent category through careful algorithm selection and parallelization.
• For the functions in the last category, fine-grain synchronization and data-dependency
requirements are the main obstacles to an efficient implementation on GPUs.
• Source: http://in.mathworks.com/help/images/image-processing-on-a-gpu.html

