
ECE885 Computer Vision

Prof. Bhupinder Verma


Dean ECE, & Venkata B. Kranthi, HoD (Robotics & Automation). Email: bhupinder.verma@lpu.co.in, venkata

ECE885-Computer Vision
COURSE OBJECTIVES: The course addresses various dimensions of image analysis, image understanding, automated visual inspections of industrial processes, medical imaging, intelligent robotics etc.
After learning this course the student should be able to visualise software & hardware issues related to industrial applications of image processing.

Attendance: 05
CA: 20
MTE: 25
ETE: 50

Five Senses of a Human


vision, hearing, smell, taste, and touch

Of these five, vision is undoubtedly the one that we have come to depend upon above all others, and indeed the one that provides most of the data we receive.
Not only do the input pathways from the eyes provide megabits of information at each glance, but the data rate for continuous viewing probably exceeds 10 megabits per second. However, much of this information is redundant and is compressed by the various layers of the visual cortex, so that the higher centers of the brain have to interpret abstractly only a small fraction of the data.

Human Vision

Image Processing Examples

Types of Images

Types of Images

Digital Image Terminology:


A 7x7 gray-scale image (resolution 7x7):

 0  0  0  0  0  0  0
 0  0  1  0  0  1  0
 0  1 95 92 93 92 94
 0  1 96 93 93 93 95
 1  1 94 93 94 93 95
 0  0 93 92 92 93 96
 0  0 92 92 93 93 95

Annotated in the slide: a pixel (with value 94), its 3x3 neighborhood, and a region of medium intensity.

Types of images: binary image, gray-scale (or gray-tone) image, color image, multi-spectral image, range image, labeled image.
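To make the terminology concrete, the grid can be loaded as a NumPy array and a 3x3 neighborhood sliced out directly. This is only a sketch; which of the 94-valued pixels the slide annotates is an assumption (here row 5, column 3, zero-indexed as (4, 2)):

```python
import numpy as np

# The 7x7 gray-scale image from the slide above.
img = np.array([
    [0, 0,  0,  0,  0,  0,  0],
    [0, 0,  1,  0,  0,  1,  0],
    [0, 1, 95, 92, 93, 92, 94],
    [0, 1, 96, 93, 93, 93, 95],
    [1, 1, 94, 93, 94, 93, 95],
    [0, 0, 93, 92, 92, 93, 96],
    [0, 0, 92, 92, 93, 93, 95],
])

def neighborhood3x3(a, r, c):
    """3x3 neighborhood of the pixel at row r, column c (interior pixels only)."""
    return a[r - 1:r + 2, c - 1:c + 2]

pixel = img[4, 2]                  # a pixel with value 94
nbhd = neighborhood3x3(img, 4, 2)  # its 3x3 neighborhood
```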

An image: a) with low dynamic range; b) result of scaling (×C)

Image using intensity transformation & spatial filtering

Original digital mammogram


Negative image; result of expanding the intensity range [0.5, 0.75]; result of enhancing the image with gamma = 2
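The three transformations in this figure can be sketched on normalized intensities in [0, 1]; the sample values below are illustrative, not taken from the mammogram:

```python
import numpy as np

# Pixel intensities normalized to [0, 1].
r = np.array([0.2, 0.5, 0.6, 0.75, 0.9])

negative = 1.0 - r                       # image negative: s = 1 - r

# Expand the sub-range [0.5, 0.75] to fill [0, 1] (values outside are clipped).
lo, hi = 0.5, 0.75
expanded = np.clip((r - lo) / (hi - lo), 0.0, 1.0)

gamma = 2.0
corrected = r ** gamma                   # power-law (gamma) transform: s = r^gamma
```

With gamma > 1 the power-law transform darkens mid-tones; the negative simply reverses the intensity scale.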

Image using intensity transformation & spatial filtering

Bone Scan image


Image enhanced using a contrast-stretching transformation. Original image courtesy of G.E. Medical Systems

Cardiac Cycle
The visualised flow pattern in the right atrium of a normal subject at different phases

Medical Image Segmentation


Original PET image


Thresholded image at a value of 9000; MRA; Level 1 reconstructed image

Effect of changing the dpi resolution while keeping the number of pixels constant: a) a 450x450 image at 200 dpi (2.25x2.25 in); b) the same 450x450 image at 300 dpi (1.5x1.5 in)
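The relationship the figure illustrates is simple arithmetic: the physical print size is the pixel count divided by the dpi, while the pixel count itself is unchanged.

```python
# dpi changes only the physical print size; the pixel count stays the same.
pixels = 450
for dpi in (200, 300):
    inches = pixels / dpi
    print(f"{pixels}x{pixels} px at {dpi} dpi -> {inches:.2f} x {inches:.2f} in")
# 200 dpi gives 2.25 x 2.25 in; 300 dpi gives 1.50 x 1.50 in, matching the slide.
```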

Results using array indexing

Original image
Image flipped vertically; cropped image; sub-sampled image

A horizontal scan line through the middle of original image
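All four results in this figure follow from plain array indexing. A minimal sketch on a toy array (the 6x6 values are illustrative only):

```python
import numpy as np

img = np.arange(36).reshape(6, 6)        # a toy 6x6 "image"

flipped    = img[::-1, :]                # flipped vertically (rows reversed)
cropped    = img[1:4, 2:5]               # cropped 3x3 region of interest
subsampled = img[::2, ::2]               # every second row and column
scanline   = img[img.shape[0] // 2, :]   # horizontal scan line through the middle
```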

Illustration of histogram equalization

Original image
Its histogram

Histogram-equalized image and its histogram

Original image courtesy Dr. Roger's Research School of Biological Sciences, Australian National University, Canberra

Illustration of histogram equalization

Image of the Mars Moon


Its histogram; histogram-equalized image; its histogram.
Original image courtesy NASA
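Histogram equalization maps each gray level through the normalized cumulative histogram (CDF), spreading a crowded intensity range across the full scale. A minimal sketch on a synthetic dark image (the 64x64 test data is an assumption, not the Mars-moon image):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization via the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size             # fraction of pixels <= each level
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                              # map each pixel through the LUT

# A dark, low-contrast image: all values crowded into [50, 57].
rng = np.random.default_rng(0)
dark = rng.integers(50, 58, size=(64, 64), dtype=np.uint8)
eq = equalize(dark)
```

After equalization the highest occupied level maps to 255, so the output spans a far wider intensity range than the input's span of 7 levels.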

Image Process examples: Apple Grading

Image Processing Examples

Image Processing Examples

Image Processing Examples

Image Processing Examples

Image Processing Examples

Wood Surface Inspection

Examples of quality classes of pine wood lumber outward faces (Nordic Timber 1994). Printed with permission from the publisher. The original is a color picture.

IR images of Vegetables

Wavelengths used for taking images


Sr  Name              Wavelength (um)  Characteristics & Uses
1   Visible blue      0.45-0.52        Max water penetration
2   Visible green     0.53-0.60        Good for measuring plant vigor
3   Visible red       0.63-0.69        Vegetation discrimination
4   Near infrared     0.76-0.90        Biomass & shoreline mapping
5   Middle infrared   1.55-1.75        Moisture content of soil & vegetation
6   Thermal infrared  10.4-12.5        Soil moisture; thermal mapping
7   Middle infrared   2.08-2.35        Mineral mapping

Image Processing Examples

Image Processing Examples

Image Processing Examples

WSU for Industry, Intelligence & Integrity!

A Research Presentation

Dr. Vipin Chaudhry, a lead researcher (Brain Imaging Lab)

Human Vision vs Computer Vision

Vision allows humans to perceive & understand the world surrounding them, while computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image. Vision = Geometry + Measurement + Interpretation

Giving computers the ability to see is not an easy task.


We live in a three-dimensional (3D) world, and when computers try to analyze objects in 3D space, the visual sensors available usually give 2D images; this projection to a lower number of dimensions incurs an enormous loss of information.

A typical Computer Vision System

Sequence of Operations in Comp Vision

Image capture
Early processing
Segmentation
Model fitting
Motion prediction
Qualitative / quantitative conclusions

Image Processing Levels

IMAGE REPRESENTATION & IMAGE ANALYSIS TASKS

Image understanding by a machine can be seen as an attempt to find a relation between input image(s) and previously established models of the observed world. The transition from input images to the model reduces the information contained in the image to the information relevant to the application domain. Image representation can be divided according to data organization into four levels (on the next slide). This hierarchy of image representation and related algorithms is often simplified to three levels: low-level processing, intermediate-level, and high-level image understanding.

Data organisation at four levels.

Low Level Processing;


Low level image processing uses data which resemble the input image; for example, an input image captured by a TV camera is 2D in nature, described by an image function f(x,y) whose value is usually brightness, depending on two parameters x and y, the coordinates of the location in the image.

Low level processing includes


image compression, pre-processing methods for noise filtering, edge extraction, and image sharpening.
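Three of these low-level operations can be sketched with small convolution kernels. The naive convolution routine and the 8x8 step-edge test image below are illustrative assumptions, not part of the course material:

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same'-size 2D convolution with zero padding (illustration only)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    kf = k[::-1, ::-1]                        # flip the kernel for convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kf)
    return out

box = np.ones((3, 3)) / 9.0                             # mean filter: noise smoothing
laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # Laplacian: edge extraction

img = np.zeros((8, 8))
img[:, 4:] = 10.0                    # a vertical step edge

smoothed  = conv2_same(img, box)     # noise filtering
edges     = conv2_same(img, laplace) # responds only near the edge
sharpened = img - edges              # Laplacian sharpening: f - lap(f)
```

The Laplacian output is zero in the flat regions and nonzero only in the two columns adjacent to the step, which is exactly the edge-extraction behavior described above.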

Low-Level

sharpening

blurring

Intermediate Level Processing:

Image segmentation is the key step at this level, in which computer vision tries to separate objects from the image background and from each other.

Two types of segmentation exist: total segmentation and partial segmentation.


Total segmentation is possible for simple tasks, an example being the recognition of dark non-touching objects against a light background. In more complex tasks, low-level image processing techniques handle partial segmentation, in which only the cues that help high-level processing are extracted; finding parts of object boundaries is an example of low-level partial segmentation.
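For the simple case mentioned above, total segmentation reduces to a single global threshold. A minimal sketch on a synthetic image (the sizes and intensity values are assumptions):

```python
import numpy as np

# Dark, non-touching objects on a light background.
img = np.full((10, 10), 200, dtype=np.uint8)   # light background
img[2:4, 2:4] = 30                             # first dark object (2x2)
img[6:9, 5:8] = 40                             # second dark object (3x3)

mask = img < 128                               # True on object pixels
n_object_pixels = int(mask.sum())
```

A connected-component pass over `mask` would then separate the two objects; that step is omitted here for brevity.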

Low-Level

Canny

original image

edge image

Mid-Level

ORT

data structure: edge image → circular arcs and line segments



Mid-level

K-means clustering (followed by connected component analysis)

data structure: original color image → regions of homogeneous color

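The clustering step of this mid-level pipeline can be sketched with a minimal k-means on pixel colors. The deterministic initialization and the synthetic two-color pixel set are simplifying assumptions; real implementations use random restarts and follow up with connected-component analysis:

```python
import numpy as np

def kmeans(pixels, k, iters=10):
    """Minimal k-means on an (N, 3) array of color pixels."""
    # Deterministic init (a simplification): k points spread across the data.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Distance from every pixel to every cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# 50 reddish and 50 bluish pixels stand in for a two-color image.
pixels = np.vstack([np.tile([250.0, 10.0, 10.0], (50, 1)),
                    np.tile([10.0, 10.0, 250.0], (50, 1))])
labels, centers = kmeans(pixels, k=2)
```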

High Level Processing:

High level computer vision tries to imitate human cognition and ability to make decisions according to information contained in the images
High-level vision begins with some form of formal model of the world, and then the 'reality' perceived in the form of digitized images is compared to the model. A match is attempted, and when differences emerge, partial matches (or sub-goals) are sought that overcome the mismatches. The computer switches to low-level image processing to find the information needed to update the model.

This process is then repeated iteratively and 'understanding' an image thereby becomes a cooperation between top-down and bottom-up processes.

Image Processing Levels

Computer Vision/Machine Vision

COMPUTER VISION Vs MACHINE VISION

A computer vision system recovers useful information about a scene from its 2D projections. Computer vision is a field within artificial intelligence; its goal is to develop methods for image understanding: recovering the three-dimensional shape of objects in the scene, or understanding the environment for some purpose, such as autonomous navigation.

Medical images may be processed by a computer vision system to assist in diagnosis


Machine vision is the automatic acquisition of images by non-contact means and their automatic analysis to extract needed data for controlling a process or activity

HOW COMPUTER VISION IS RELATED TO OTHER FIELDS

Pattern recognition(PR):
Pattern Recognition classifies numerical and symbolic data. Many statistical and syntactical techniques have been developed for classification of patterns.

Techniques from pattern recognition play an important role in machine vision for recognising objects.

HOW COMPUTER VISION IS RELATED TO OTHER FIELDS Artificial Intelligence (AI):

AI is used to analyse scenes by computing a symbolic representation of the scene contents after the images have been processed to obtain features.

AI may be viewed as having three stages: perception, cognition, and action.


Perception translates signals from the world into symbols. Cognition manipulates symbols, and action translates symbols into signals that effect changes in the world. Many techniques from AI play important roles in all aspects of computer vision; in fact, CV is often considered a sub-field of AI. Artificial Neural Networks (ANN) have recently played an active role in computer vision.

HOW COMPUTER VISION IS RELATED TO OTHER FIELDS

Psychophysics:
Psychophysics, along with cognitive science, has studied human vision for a long time.

Many researchers in Computer Vision are more interested in preparing computational models of human vision than in designing machine vision systems.
Machine vision produces measurements or abstractions from geometrical properties. It may be useful to remember the equation: Vision = Geometry + Measurement + Interpretation

HOW COMPUTER VISION IS RELATED TO OTHER FIELDS

Image Processing (IP):


Image processing techniques usually transform images into other images; the task of information recovery is left to a human user. IP includes topics such as image enhancement, image compression, and correcting blurred or out-of-focus images.

IP algorithms are used in the early stages of computer vision / machine vision systems.

HOW COMPUTER VISION IS RELATED TO OTHER FIELDS

Computer Graphics (CG):


CG generates images from geometric primitives such as lines, circles, and free-form surfaces.

CG techniques play a significant role in visualization and virtual reality.


Machine Vision is just the reverse, estimating the geometric primitives and other features from the image. Thus, Computer Graphics is the synthesis of images, and Machine Vision is the analysis of images.

WHY COMPUTER VISION (CV) IS DIFFICULT?

Loss of Information in going from 3D to 2D:


This loss occurs in typical image capture devices such as a camera or an eye, because their geometric properties are approximated by a pin-hole model. The main trouble with the pin-hole model is that the imaging geometry does not distinguish the size of objects: a human needs a 'yard-stick' to guess the actual size of an object, which the computer does not have.

The pinhole model of imaging geometry does not distinguish size of objects.
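The size ambiguity follows directly from the projection equations: a 3D point (X, Y, Z) maps to (f·X/Z, f·Y/Z), so scaling an object and its distance by the same factor leaves the image unchanged. A minimal sketch (focal length and coordinates are arbitrary illustrative values):

```python
# Pin-hole projection: a small near object and a large far object can
# produce identical image coordinates.
def project(X, Y, Z, f=1.0):
    """Project a 3D point onto the image plane of a pin-hole camera."""
    return (f * X / Z, f * Y / Z)

near_small = project(1.0, 1.0, 2.0)    # 1-unit object at distance 2
far_large  = project(5.0, 5.0, 10.0)   # 5-unit object at distance 10
```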

WHY COMPUTER VISION (CV) IS DIFFICULT?

Interpretation:
Interpretation of image(s) constitutes the principal tool of computer vision to approach problems which humans solve unwittingly. When a human tries to understand an image, previous knowledge and experience are brought to the current observation. Artificial intelligence has invested several decades in attempts to endow computers with the capability to understand observations. Interpretation of images can be seen as a mapping: interpretation: image data → model. There may be several interpretations of the same image(s).

WHY COMPUTER VISION (CV) IS DIFFICULT?

Noise:
Noise is inherently present in each measurement in the real world; coping with it requires mathematical tools, e.g. probability theory.

Too much data:


An A4 sheet of paper scanned monochromatically at 300 dots per inch (dpi) at 8 bits per pixel corresponds to about 8.5 MB. Processing such data in real time (25-30 fps) is really challenging.
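The figure quoted above can be checked with a quick back-of-the-envelope computation (A4 dimensions in inches are the standard values, the frame rate is the 25 fps lower bound from the text):

```python
# A4 is about 8.27 x 11.69 inches; at 300 dpi with 1 byte (8 bits) per pixel:
width_px = round(8.27 * 300)                 # pixels across the short side
height_px = round(11.69 * 300)               # pixels along the long side
bytes_per_frame = width_px * height_px       # 1 byte per pixel
mb_per_frame = bytes_per_frame / 1e6         # close to the ~8.5 MB quoted
mb_per_second = mb_per_frame * 25            # data rate at 25 fps
```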

WHY COMPUTER VISION (CV) IS DIFFICULT?

Brightness: The brightness measured in the image is given by complicated image-formation physics. The radiance (~brightness, image intensity) depends on the irradiance (light source type, intensity and position), the observer's position, the local geometry of the surface, and the surface reflectance properties. This is why inverse tasks are ill-posed, e.g. reconstructing local surface orientation from intensity variations. Often, a direct link between the appearance of objects in scenes and their interpretation is sought.

Local window vs. need for a global view:

Most often, image analysis algorithms analyze a particular storage bin in operational memory (e.g. a pixel in the image) and its local neighborhood; the computer sees the image through a keyhole. Seeing the world through a keyhole makes it very difficult to understand the more global context.