Lecture notes
Digital Image Processing
Jorma Kekalainen

Image Formation
Lecture weeks 3 and 4

The history of image formation


The idea of a camera (= an imaging device) is
linked to how a human perceives the world
with her eyes.
But in the early days humans had only vague
or incorrect ideas about
What light is
How the eye maps objects in the 3D world to the
image that we perceive

Prior to the camera: the artist/painter

Camera obscura
Since ancient times it has been known that a
brightly illuminated scene can be projected to
an image
In a dark room (Latin: camera obscura)
Through a small hole (aperture)
The image is rotated 180°


Plenoptic function
At a point x = (x1, x2, x3) in space we can
measure how much light energy travels in
the direction n = (n1, n2, n3), ||n|| = 1
The plenoptic function is the corresponding
radiance intensity function p(x,n)
A camera is a device that samples the
plenoptic function
Different types of cameras sample it in
different ways

Pinhole camera model


Lenses vs. infinitesimal aperture


The pinhole camera model doesn't work in
practice since
If we make the aperture small, too little light
enters the camera
If we make the aperture larger, the image
becomes blurred

Solution: we replace the aperture with a lens or a system of lenses.

Thin lenses
The simplest model of a lens
Focuses all points in an object plane onto the
image plane


Object plane
The object plane consists of all points that appear
sharp when projected through the lens onto the
image plane.
The object plane is an ideal model of where the
sharp points are located
In practice the object plane may be non-planar: e.g.
described by the surface of a sphere
The shape of the object plane depends on the quality
of the lens (system)

For thin lenses the object plane can often be approximated as a plane.

Focal length
The thin lens is characterized by a single parameter:
the focal length fL

The thin-lens equation relates the object distance a and the image distance b: 1/a + 1/b = 1/fL.
To change a (the distance to the object plane), we need to change b, since fL is constant.
a = ∞ for b = fL!
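To make the relation concrete, here is a minimal numeric sketch (illustrative Python; the example values are assumptions, not from the notes):

    import math

    # Thin-lens equation: 1/a + 1/b = 1/fL  =>  b = 1 / (1/fL - 1/a)
    def image_distance(a, f_L):
        """Image distance b for object distance a and focal length f_L (same units)."""
        if math.isinf(a):
            return f_L                      # object at infinity focuses at b = fL
        return 1.0 / (1.0 / f_L - 1.0 / a)

    print(image_distance(2.0, 0.05))            # a = 2 m, fL = 50 mm -> b ~ 51.3 mm
    print(image_distance(float("inf"), 0.05))   # -> 0.05, i.e. b = fL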


Diffraction pattern
Due to the wave nature of light, even when
various lens effects are eliminated, light from
a single 3D point cannot be focused to an
arbitrarily small point if it has passed an
aperture.
For coherent light:
Huygens's principle: treat the incoming light as a
set of point light sources
Gives a diffraction pattern at the image plane.



Diffraction limited systems


Each point along the aperture, at vertical position x, acts as a wave source.
In the image plane, at position x′, each point source contributes a wave with a phase difference Δφ = 2π·(x·sin θ)/λ relative to the wave from the centre of the aperture.
θ is the angle from the image point x′ to the aperture, and assuming x′ ≪ f it follows that sin θ ≈ x′/f.
We get: Δφ ≈ 2π·x·x′/(λ·f)
Jorma Kekalainen

Digital Image Processing

113

Superposition
The principle of superposition means that the resulting wave function at the image plane is a sum/integral of the contributions from the different light sources:

U(x′) ∝ ∫ exp(i·2π·x·x′/(λ·f)) dx, integrated over the aperture (−D/2 ≤ x ≤ D/2)
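A small numeric sketch of this superposition (illustrative Python; the wavelength, focal length, and aperture width are made-up values): summing one phasor per aperture point reproduces the sinc-shaped diffraction amplitude.

    import numpy as np

    lam, f, D = 550e-9, 50e-3, 5e-3        # wavelength, focal length, aperture (m)
    x = np.linspace(-D / 2, D / 2, 2000)   # wave sources across the aperture
    xp = np.linspace(-20e-6, 20e-6, 1001)  # positions x' in the image plane

    # Superpose one phasor per aperture point: phase = 2*pi*x*x'/(lam*f)
    U = np.array([np.exp(1j * 2 * np.pi * x * p / (lam * f)).sum() for p in xp])
    I = np.abs(U) ** 2                     # intensity: a sinc^2 pattern in x'

    zero = lam * f / D                     # first zero of the slit pattern (~5.5 um)
    print(I[np.argmin(np.abs(xp - zero))] / I.max())   # ~0: intensity vanishes there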


Point spread function from a single 3D point

Airy disk
The smallest resolvable distance in the image plane, Δx, is given by the Rayleigh criterion for a circular aperture: Δx ≈ 1.22·λ·fL/D.
fL/D is the F-number of the lens or lens system.
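A worked example (assuming the Rayleigh form above; the values are illustrative): green light through an f/2.8 lens.

    lam = 550e-9          # wavelength of green light (m)
    N = 2.8               # F-number fL/D
    dx = 1.22 * lam * N   # smallest resolvable distance in the image plane
    print(dx)             # ~1.9e-6 m, i.e. about 2 um: comparable to small pixel pitches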



Point spread function (Airy disk)


Conclusions:
The image cannot have a better resolution than Δx!
No need to measure the image with a higher resolution than Δx!
In cameras with high pixel resolution and strong diffraction, image resolution is not defined by the number of pixels!


Point spread function


The point spread function or Airy disk is also called blur disk or circle of confusion; it is sometimes loosely identified with the modulation transfer function (MTF), although the MTF is strictly the magnitude of the Fourier transform of the point spread function
In general the point spread function can be
related to several effects that make the image of
a point appear blurred
Diffraction
Lens imperfections
Imperfections in the position of the image plane

Often modeled as constant over the image
Can be variable for poor optical systems

Lens distortion
A lens or a lens system can never map straight
lines in the 3D scene exactly to straight lines in
the image plane
Depending on the lens type, a square pattern
will typically appear like a barrel or a
pincushion


Cos4 law
In general, there is an attenuation of the
image towards the edges of the image,
approximately according to cos⁴θ, where θ is the angle off the optical axis
This effect can be compensated in a digital
camera

Note: The flux density decreases with the square of the distance to the light source: cos²θ. The effective area of the detector relative to the aperture varies as cos θ, and the effective area of the aperture relative to the detector varies as cos θ. Together these factors give the cos⁴θ fall-off.
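A minimal compensation sketch (illustrative Python; assumes the focal length in pixel units is known so the off-axis angle of each pixel can be computed):

    import numpy as np

    def cos4_compensate(img, f_pix):
        """Undo cos^4 vignetting; f_pix is the focal length in pixel units."""
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        r2 = (x - w / 2.0) ** 2 + (y - h / 2.0) ** 2   # squared distance to centre
        cos_theta = f_pix / np.sqrt(f_pix ** 2 + r2)   # cos of off-axis angle
        return img / cos_theta ** 4                    # brighten towards the edges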


Chromatic aberration
The refractive index of matter (lenses) is
wavelength dependent
E.g., a prism can decompose the light into its
spectrum
A ray of white light is decomposed into rays of different
colors that intersect the image plane at different points


Image formation

[Figure: projection through a lens produces the image of an object]


Capturing image
For natural images we need a light source (λ: wavelength of the source).
E(x; y; z; λ): incident light on a point (x, y, z: world coordinates of the point)
Each point in the scene has a reflectivity function.
r(x; y; z; λ): reflectivity function
Light reflects from a point and the reflected light is captured by an imaging device (= a camera).
c(x; y; z; λ) = E(x; y; z; λ)·r(x; y; z; λ): reflected light

Image formation

Camera(c(x; y; z; λ)) = image
c = reflected light


Inside the Camera - Projection

Projection (P) from world coordinates (x; y; z) to camera or image coordinates (x; y):
cp(x; y; λ) = P(c(x; y; z; λ))


Projection
There are two types of projections (P) of interest to us:
1. Perspective projection
Objects closer to the capture device appear bigger.
Most image formation situations fall under this category, including images taken by a camera and by the human eye.

2. Orthographic projection
This is unnatural.
Objects appear the same size regardless of their distance to the
capture device.

Both types of projections can be represented via mathematical formulas, as in the sketch below.
Orthographic projection is easier and is sometimes used as a mathematical convenience.
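A minimal sketch of the two projections (illustrative Python; the focal length f and camera-centred coordinates are assumptions):

    def perspective(x, y, z, f=1.0):
        """Pinhole/perspective projection: image coordinates scale with 1/z."""
        return (f * x / z, f * y / z)

    def orthographic(x, y, z):
        """Orthographic projection: the depth z is simply dropped."""
        return (x, y)

    # The same object twice as far away appears half as big under perspective:
    print(perspective(1.0, 1.0, 2.0))    # (0.5, 0.5)
    print(perspective(1.0, 1.0, 4.0))    # (0.25, 0.25)
    print(orthographic(1.0, 1.0, 4.0))   # (1.0, 1.0): unchanged by distance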

Perspective projection
[Figure: two objects of equal size at distances l1 < l2; under perspective projection the farther object subtends a smaller image]


Inside the camera - Sensitivity


Once we have the projection of the reflected light cp(x; y; λ), the characteristics of the capture device take over.
V(λ) is the sensitivity function of a capture device.
Each capture device has such a function which determines how sensitive it is in capturing the range of wavelengths (λ) present in cp(x; y; λ).
The result is an image function which determines the amount of reflected light that is captured at the camera coordinates (x; y):
f(x; y) = ∫ cp(x; y; λ)·V(λ) dλ
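A discretized version of this integral (illustrative Python; the wavelength grid and the Gaussian-shaped sensitivity curve are assumptions):

    import numpy as np

    lam = np.linspace(400e-9, 700e-9, 301)        # visible-band wavelength samples
    V = np.exp(-((lam - 550e-9) / 40e-9) ** 2)    # sensitivity peaked near 550 nm

    def image_value(c_p):
        """f(x, y) = integral of c_p(lambda) * V(lambda) d(lambda) for one pixel."""
        return np.sum(c_p * V) * (lam[1] - lam[0])

    c_p = np.ones_like(lam)                       # a flat reflected-light spectrum
    print(image_value(c_p))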


Example: Sensitivity functions


Let us determine the image functions for three sensitivity functions V1(λ), V2(λ), V3(λ) imaging the same scene:
1. This is the most realistic of the three. Sensitivity is concentrated in a band around λ0:
f1(x; y) = ∫ cp(x; y; λ)·V1(λ) dλ
2. This is an unrealistic capture device which has sensitivity only to a single wavelength λ0, as determined by the delta function. However, there are devices that get close to such selective behavior:
f2(x; y) = ∫ cp(x; y; λ)·V2(λ) dλ = ∫ cp(x; y; λ)·δ(λ − λ0) dλ = cp(x; y; λ0)

Example: Sensitivity functions


3. This is what happens if we take a picture without taking the cap off the lens of our camera:
f3(x; y) = ∫ cp(x; y; λ)·V3(λ) dλ = ∫ cp(x; y; λ)·0 dλ = 0


Image capturing

[Figure: a digital camera projects the scene onto a discrete sensor array]

[Figure: the sensors register the average color, giving a sampled image]


[Figure: continuous colors at discrete locations, giving a discrete real-valued image]

Sampling and quantization

[Figure: the original real image shown sampled, quantized, and both sampled and quantized]


Quantization

Continuous colors are mapped to a finite, discrete set of colors.
[Figure: staircase mapping from continuous color input to discrete color output]
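A minimal uniform quantizer sketch (illustrative Python): continuous values in [0, 1] are mapped to 2^b discrete levels.

    import numpy as np

    def quantize(img, bits=8):
        """Uniformly quantize values in [0, 1] to 2**bits discrete levels."""
        levels = 2 ** bits
        q = np.floor(np.clip(img, 0.0, 1.0) * levels)   # bin index per pixel
        q = np.minimum(q, levels - 1)                   # keep 1.0 inside the top bin
        return q / (levels - 1)                         # back to [0, 1] for display

    img = np.random.rand(4, 4)        # stand-in for a sampled image
    print(quantize(img, bits=2))      # only 4 distinct output values remain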

History of photography

1839: Daguerre develops the first practical method for photography
1839: Herschel invents glass negatives
1861: Maxwell demonstrates color photographs
1878: Muybridge demonstrates moving images
1887: Celluloid film is introduced
1888: Kodak markets its first easy-to-use camera
1891: Edison patents his kinetoscopic camera
1895: Lumière Bros. invent the cinématographe
1925: Leica introduces the 35mm film format for still images
1936: Kodachrome color film
1948: Land invents the Polaroid camera
1957: First digitized image
1959: AGFA introduces the first automatic camera


History of photography
1969: Boyle and Smith invent the first CCD chip for image
capture (based on the bubble memory)
1973: Fairchild Semiconductor markets the first CCD chip
(100 × 100 pixels)
1975: Bayer at Kodak: first single chip color CCD camera
1981: Sony markets the Mavica, the 1st consumer digital
camera. Stores images on a floppy disc
1986: Kodak presents the first megapixel CCD camera
2006: Dalsa Corporation presents a 111 Mpixel CCD
camera
2009: Kodak announces that it will discontinue production
of Kodachrome film
Digital Image Processing

Image Acquisition and Sensing
Lecture weeks 3 and 4

Main effects

Main effects in how the image is measured to produce a digital image:
The image is spatially sampled and truncated
Photons are converted to electric charge
The charges are converted to voltage
The voltage is quantized


Photo-sensing chain


Light interacts with matter


The main or wanted interaction in a
photodetector is absorption
a photon is converted to an electron/hole pair

Electrons are bound to atoms with a certain energy Qg
Photon absorption occurs with a certain probability when the photon energy E = hν ≥ Qg
When a photon is absorbed, the electron is released from the atom and becomes free.
It leaves a hole, a missing electron, in the atom.
The hole is free to move around.

Basic layers of an image sensor


Transparent coating and conductors, a few µm
Space-charge region, a few µm. Contains an electric field which effectively removes electrons. Is localized: only deals with electrons in a specific area
Semiconductor bulk: this is where the photon-matter interaction is designed to happen, 2-100 µm

Losses
When a photon enters the semiconductor
material, it may not interact as intended
1+2: reflection before
entering the active material
3+4: absorption before
entering the active material
5: absorption too deep in the material
6: the photon doesn't interact with the material and exits at the back

Quantum efficiency
All these effects are wavelength dependent
The mean number of electron/hole pairs
created per photon is the quantum efficiency

[Figure: quantum efficiency of a particular photo device as a function of wavelength, with the regions typically dominated by effects 1+6 and by effect 6 marked]


Light → electricity

Pure semiconductors can produce an electric current Iphoto through the material if
It is embedded in an electric field
The material absorbs a photon with an energy E = hν which is larger than Qg, the gap between the material's valence and conduction bands
Required: hν ≥ Qg


Intrinsic absorption
This is called intrinsic absorption
No doping of the semiconductor is needed

Can be made in large arrays


Can be silicon based
Have high quantum efficiency
Basic effect in CCD-arrays
Different types of materials can be used for λ < 15 µm (IR and shorter) which are sensitive to

Near IR
Visible light
UV
X-ray


Photovoltaic detectors
Absorption of light can also be based on the
photovoltaic effect
When photons of sufficiently high energy are
absorbed by a material, electrons are released and
produce a voltage

This can be used in a photodiode to produce a voltage
Solar cells
Image sensors in the range near IR - UV

Thermal excitation
Because of heat in the material, electrons are always excited (moved from the valence band to the conduction band) by the thermal energy in the material.
This induces an electric current Ithermo


Thermal noise
Ithermo is not a constant current; it is rather a noise current whose mean is given by a temperature-dependent expression.
We have to treat it as a random signal added onto the wanted signal Iphoto.
Ithermo is a type of noise.

Two main types of photo detectors

Most image sensors are based on either of two distinct types of photo detectors:
The photodiode (the photovoltaic effect)
The MOS capacitance (intrinsic absorption)
Both can be manufactured using standard semiconductor processes.
Both can be seen as a capacitor which is discharged by means of Iphoto and Ithermo.

Photodiode

Electrons move from the n+ region to fill holes in the p-region.
Holes from the p-region move to the n+ region to be filled with free electrons.
The net result is that:
A space-charge region develops, depleted of free electrons or holes.
The space-charge region is an electric insulator.
An electric field is established in the space-charge region (from n+ to p).


Photodiode

Apply a bias voltage of the same polarity as the internal field:
The space-charge region increases.
Since the space-charge region is an insulator, no current runs through the junction.
The diode acts as an electric capacitor: it becomes electrically charged.

Photodiode

Remove the voltage: the charge remains.
It acts as a charged electric capacitor.


Photodiode

The light creates a free electron/hole pair (lightly p-doped: mainly intrinsic absorption).
Due to the electric field:
The electron sweeps to the n+ region and cancels a hole.
The hole sweeps to the p-region and cancels an electron.
This builds up a negative voltage across the junction.
The electric field is reduced linearly with the number of absorbed photons.
This voltage difference can be measured.


Photodiode

The voltage difference generated by the photons occurs even if the diode had not been pre-charged.
Caused by the photovoltaic effect.
The diode can in principle be used as a solar cell.
The pre-charging makes the photovoltaic effect stronger since it increases the space-charge region.


Basic mode of photodiode operation

1. Pre-charge to a specific voltage (a few volts)
2. Let the photovoltaic effect discharge the diode for a specific time period (the exposure time)
3. The corresponding voltage difference is proportional to the flux density incident on the sensor area
4. Measure the voltage difference
5. Go to 1.
We need to measure voltage.
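A toy model of steps 2-3 (illustrative Python; the capacitance and the one-electron-per-photon conversion are made-up assumptions), showing that the voltage drop is proportional to the incident flux:

    def exposure_voltage_drop(flux, t_exp, C=10e-15):
        """Voltage drop after one exposure: dV = I_photo * t_exp / C.
        flux: absorbed photons per second; one electron charge per photon assumed."""
        q_e = 1.602e-19                 # elementary charge (C)
        i_photo = q_e * flux            # photocurrent (ignoring Ithermo here)
        return i_photo * t_exp / C      # discharge of the pre-charged capacitor

    print(exposure_voltage_drop(1e6, 0.01))   # doubling the flux doubles the drop:
    print(exposure_voltage_drop(2e6, 0.01))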

MOS capacitor

The oxide layer is a perfect insulator; no current passes through this layer.
[Figure: metal-oxide-semiconductor layer stack]

MOS capacitor
Apply a voltage across
the capacitor.
Holes in the region under
the oxide will move into
the substrate and create
a space-charge region.
An electric field is
created across the oxide
barrier and through the
space-charge region.


MOS capacitor
An absorbed photon
creates an electron/hole
pair
The hole is swept into the
substrate.
The electron is drawn by
the electric field toward the
oxide barrier.
Due to the oxide
insulation, the electrons
accumulate below the
oxide barrier, no current
flows through the capacitor.
The amount of
accumulated electrons is
proportional to the number
of absorbed photons.

Basic mode of MOS capacitor operation

1. Apply a voltage across the capacitor
2. Photon absorption creates an electron deposit under the oxide layer
3. Allow this deposit to accumulate over a certain time period (the exposure time)
4. The corresponding electron charge is proportional to the incident flux density at the sensor
5. Move the charge deposit to somewhere where it can be measured (typically CCD transport)
6. Go to 1.
We need to measure electric charge.

Blooming
Both the photodiode and the MOS capacitor
collect electric charge in a small region
corresponding to the conductor region
When this region becomes saturated, the
charge spills over to neighboring elements
This is called blooming
Barriers between the detectors can reduce
this effect, but not eliminate it entirely

Fill factor
In practice, the light sensitive area of an image
sensor cannot fill the entire detector area.
Electronic components and wiring reduce the
light sensitive area
The fill factor is the percentage of the total area which is light sensitive:
fill factor = (light-sensitive area) / (total pixel area)

Micro-lenses

To overcome low fill factors, an array of micro-lenses in front of the sensor array can be used.
[Figure: each micro-lens focuses incoming light onto a spot in its detector area; at large incident angles, this spot may miss the detector area]

Pros and cons of micro-lenses


Micro-lenses enhance the fill factor
But
Due to the manufacturing process, the detector
area can often have an inhomogeneous sensitivity
When light is focused onto a smaller spot in the
sensor, the inhomogeneities become more
noticeable as measurement noise
At large incident angles, this spot may miss the
detector area

Transport problem
Light has caused a change in electric voltage or charge in a light
detector element (photodiode or MOS capacitor), and this change
needs to be measured to produce an image
Traditionally not measured per detector element
Would require many components per detector
Would give too small fill factor for 2D arrays

The transport problem:
The voltage/charge has to be transported out of the array and measured outside
Often with a single measurement unit per sensor array
Two principles for solving the transport problem:
The CCD array (MOS capacitor only)
Switches to a common signal/video line (photodiode or MOS capacitor)
Regardless of whether the photo charge has been transported by a
CCD or a signal line, it needs to be converted to a voltage signal

CCD (Charge Coupled Device) array

A chain of MOS capacitors where the voltages change in a stepping pattern can move the charge.
This transport can take place along an entire row/column of a detector array.
One pixel = 3 capacitors.
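A toy simulation of the transport (illustrative Python; real CCDs clock three voltage phases, here the hand-over is reduced to shifting an array):

    import numpy as np

    row = np.zeros(9)           # 3 pixels x 3 capacitors per pixel
    row[0] = 1000.0             # a charge packet collected in the first pixel

    for phase in range(3):      # three phase shifts move the packet one full pixel
        row = np.roll(row, 1)   # each clock phase hands the charge to the neighbour

    print(row)                  # the packet now sits one pixel (3 capacitors) along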


Limiting factor of readout


A CCD array can have different ways of implementing
the readout of an entire image
Frame-transfer CCD
Interline-transfer CCD
Field-interline-transfer CCD

Limiting factors:
there is a maximal readout frequency from the entire array
this limits the readout speed from the individual pixel

the MOS capacitors are sensitive to light exposure during the transport
Charges should be moved to light-insensitive areas as quickly as possible

Example: Frame-transfer CCD

[Figure: the active sensor area (A), and a storage area (B) covered with an opaque metal shield]

Example: Frame-transfer CCD


Advantages
The whole of area A is light sensitive, fill factor can be
close to 1
Simple to manufacture

Disadvantages
It takes some time to shift the entire image from A to B; during this time area A is still sensitive to light: after-exposure
Mechanical shutters can be used to remove after-exposure

CMOS camera (APS)

Developments from the mid-1990s and onward have led to an improved CMOS sensor called the Active Pixel Sensor (APS)
Basic idea:
Move the charge-to-voltage transistor of the amplifier stage into the pixel (one per pixel):
Voltage readout instead of charge transport
The readout line becomes less sensitive to noise
With modern technology:
the extra transistors per pixel can be very small compared to the rest of the pixel area devoted to light sensing
reasonable fill factor

4T APS
Add a fourth transistor to each pixel (4T)
This transistor acts as a memory during readout
All photo-charge is moved globally to the memory
transistor after exposure
The other three transistors operate as a standard 3T
APS
One for recharging the diode
One for transforming charge to voltage
One for connecting the voltage to the readout row

Can implement a global shutter read-out



Noise sources

Reset noise
The measured voltage depends on the fixed bias voltage over the photodiode or MOS capacitor
This voltage always has some amount of variation = noise
Flicker or 1/f noise
Inhomogeneities and impurities in the materials produce low-frequency noise due to statistical fluctuations in various parameters which control the photon-to-voltage conversion
These two factors may vary both across the array (spatially) and over time

Noise sources
The space-charge region is not a perfect insulator, so there is a small leakage current
Called dark current since it discharges the capacitor even when no photons are absorbed
Thermal noise
Can be reduced by cooling
Design noise effects: blooming, after-effects


Shot noise

Even if a constant number of photons hits the photo detector, the absorption process is probabilistic:
Each time we observe/measure the voltage/charge difference at the detector, there will be a small variation in the result
The relative variation is larger the shorter the exposure time is, and vice versa
This noise has approximately a Poisson distribution
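A quick simulation (illustrative Python): Poisson photon counts give a relative variation that shrinks as the mean count, i.e. the exposure, grows.

    import numpy as np

    rng = np.random.default_rng(0)
    for mean_photons in (10, 100, 10000):            # longer exposure: more photons
        counts = rng.poisson(mean_photons, 100000)   # repeated measurements
        print(mean_photons, counts.std() / counts.mean())   # variation ~ 1/sqrt(N)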

SNR and dynamic range

ΔV = the overall noise voltage measured at the output
V = the actual output voltage

SNR = 20·log10(V/ΔV)

It means that darker images have a lower SNR than brighter images (assuming constant average noise)
The dynamic range is the SNR of the largest detectable signal Vmax:

DR = 20·log10(Vmax/ΔV)

Typical values for CMOS and CCD: DR ≈ 40-60 dB
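A direct transcription of the two formulas (illustrative Python; the example voltages are made up):

    import math

    def snr_db(v, dv):
        """SNR = 20*log10(V / dV): output voltage V against noise voltage dV."""
        return 20 * math.log10(v / dv)

    print(snr_db(0.1, 0.001))   # darker image: 40 dB
    print(snr_db(1.0, 0.001))   # brightest detectable signal: 60 dB = the DR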

Saturation and noise

Saturation is the highest value beyond which all intensity levels are clipped.
Noise often appears as a grainy texture.


Digitalization

The analogue voltage signal is normally transformed to a digital representation by means of an analog-to-digital converter (ADC)
Two common principles:
Flash ADC (up to 8 bits)
Successive approximation (>8 bits)
Quantization noise is independent of the method
If b bits are used to represent voltages up to Vmax, then

b = DR / (20·log10(2)) ≈ DR / 6.02

gives a quantization noise of the same magnitude as the image noise
Often, we want a few more bits than this to accurately represent the image signal
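A worked example of the bit-depth rule (illustrative Python): a sensor with DR = 60 dB needs about 10 bits before the quantization noise falls to the level of the image noise.

    import math

    def bits_for_dr(dr_db):
        """b = DR / (20*log10(2)): bits whose quantization noise matches the DR."""
        return dr_db / (20 * math.log10(2))

    print(bits_for_dr(60))   # ~9.97 -> 10 bits (plus a few extra in practice)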

Flash ADC
Also called the parallel A/D converter

It is formed of a series of
comparators, each one comparing
the input signal to a unique
reference voltage.
The comparator outputs connect to
the inputs of a priority encoder
circuit, which then produces a
binary output.
Vref is a stable reference voltage.
As the analog input voltage exceeds
the reference voltage at each
comparator, the comparator
outputs will sequentially saturate
to a high state.
The priority encoder generates a
binary number based on the
highest-order active input, ignoring
all other active inputs.

[Figure: 3-bit flash ADC circuit]
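A behavioural sketch of a 3-bit flash converter (illustrative Python; an equally spaced reference ladder is assumed): all comparators act in parallel and a priority encoder keeps the highest active one.

    import numpy as np

    def flash_adc(v_in, v_ref=1.0, bits=3):
        """2**bits - 1 comparators against an equally spaced reference ladder."""
        n = 2 ** bits - 1
        thresholds = v_ref * np.arange(1, n + 1) / (n + 1)   # reference ladder
        comparators = v_in > thresholds                      # compare in parallel
        return int(comparators.sum())     # priority encoding: highest active input

    print([flash_adc(v) for v in (0.05, 0.3, 0.6, 0.95)])    # -> [0, 2, 4, 7]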


Successive approximation ADC

The values of the resulting digital output are determined successively, from MSB to LSB.
Instead of counting up in binary sequence, the successive-approximation register counts by trying all values of bits, starting with the MSB and finishing at the LSB.
Throughout the count process, the register monitors the comparator's output to see if the binary count is less than or greater than the analog signal input, adjusting the bit values accordingly.
The advantage of this counting strategy is much faster results: the DAC output converges on the analog signal input in much larger steps than with the 0-to-full count sequence of a regular counter.
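A behavioural sketch of the successive-approximation loop (illustrative Python): each bit, MSB first, is kept only if the trial DAC output does not exceed the input.

    def sar_adc(v_in, v_ref=1.0, bits=8):
        """Binary-search conversion: one comparator decision per bit, MSB first."""
        code = 0
        for bit in range(bits - 1, -1, -1):
            trial = code | (1 << bit)            # tentatively set this bit
            dac = v_ref * trial / (1 << bits)    # DAC output for the trial code
            if dac <= v_in:                      # comparator keeps or drops the bit
                code = trial
        return code

    print(sar_adc(0.7))   # -> 179, since 0.7 * 256 = 179.2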


Example: Image acquisition using a CCD camera

A CCD camera is one of the means for getting a picture into a computer.
A CCD camera has, in place of the usual film, an array of photosensors, whose output is proportional to the intensity of light falling on them.
For a camera attached to a computer, information from the photosensors is then output to a suitable storage medium.
Generally this is done in hardware, as being much faster and more efficient than software, using a frame-grabbing card.
This allows a large number of images to be captured in a very short time.


Example: Image acquisition using a flatbed scanner

A flatbed scanner works on a principle similar to the CCD camera.
Instead of the entire image being captured at once on a large array, a single row of photosensors is moved across the image, capturing it row-by-row as it moves.
Since this is a much slower process than taking a picture with a camera, it is quite reasonable to allow all capture and storage to be processed by suitable software.

Image acquisition
Single imaging sensor

Line imaging sensor

Array imaging sensor


Image acquisition
Combining a single sensor with motion to
generate a 2-D image.


Color vision
The human eye has cones which are sensitive to
different wavelength bands


Three color channels

Grassmann's law:
It is (in principle) sufficient to measure the light spectrum in three distinct wavelength bands to represent any perceivable color
We do not have a synthetic sensor with sensitivity curves identical to those of the human eye at our disposal
The three color channels are called red, green, blue, even though they don't correspond to the eye's curves

3 chip color cameras

[Figure: three identical standard chips (sensor arrays for red, green, and blue light), fed via two semi-transparent mirrors that separate different wavelength bands]


Three chip color cameras


Based on standard black-and-white sensor
chips (three identical sensor chips)
The three sensor arrays need to be aligned
with tolerances smaller than the inter-pixel
distance
Gives good performance
is expensive
used in professional cameras

One chip color cameras


To reduce cost:
use one sensor array
place a color filter on top of each detector
element
each detector area is now sensitive to only a
specific wavelength range
reduces the fill factor for each range
the colors are not measured at the same places


One chip color cameras

Standard RGB filters
Each color channel is rather narrow
blocks more photons: less effective
Cyan-Yellow-Magenta (white) filters
Each color channel is wider
blocks fewer photons: more effective
Post-processing needed to convert to RGB
The eye is more sensitive to green light and less to blue light
It makes sense to have more green detectors and fewer blue detectors

Stripe filters: Examples

[Figure: example stripe filter layouts with extra green cells; the darker area is a cell that represents one pixel]


Color post-processing
We can see the image detected by the sensor as a monochrome signal (the raw image)
An RGB signal (3 components per pixel) is then
produced by interpolation from the raw image,
using different and space varying filters for each
of the three components (demosaicking)
Note: two types of filtering
An optical filter on the light before the sensor
An electronic filter on the image signal to produce
RGB signal
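A minimal demosaicking sketch (illustrative Python; assumes an RGGB Bayer layout and plain bilinear/box interpolation, the simplest instance of the space-varying filters mentioned above):

    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(raw):
        """raw: 2-D Bayer mosaic, RGGB layout assumed. Returns an (H, W, 3) image."""
        h, w = raw.shape
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True                          # R at even rows/cols
        masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # G on the other sites
        masks[1::2, 1::2, 2] = True                          # B at odd rows/cols
        kernel = np.ones((3, 3))
        rgb = np.zeros((h, w, 3))
        for c in range(3):
            sparse = np.where(masks[..., c], raw, 0.0)       # measured samples only
            total = convolve2d(sparse, kernel, mode="same")  # sum over 3x3 window
            count = convolve2d(masks[..., c].astype(float), kernel, mode="same")
            rgb[..., c] = total / count    # average of the available neighbours
        return rgb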

One chip color camera

Most consumer cameras output only the interpolated image, which is typically compressed using JPEG
In more advanced cameras, the raw uninterpolated image can be read out from the camera and processed externally by the user
Note: JPEG (Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.


Example

[Photo: "Ville The Cat", with the compression rate decreasing, and hence quality increasing, from left to right]

Color processing
The perception of color is complex
Humans tend to perceive color independent of
illumination
A color camera makes a measurement of physical
quantities: very dependent on illumination

White balancing
Transforms the color measurement so that what we perceive as white gives equal RGB values
Automatic or manual

The color information may also be converted to some other color space than RGB (e.g. HSI or XYZ)
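A gray-world white-balancing sketch (illustrative Python; gray-world is one common automatic strategy, not necessarily the one meant here): each channel is scaled so the scene average comes out neutral.

    import numpy as np

    def gray_world_white_balance(rgb):
        """rgb: float array (H, W, 3). Scale R, G, B so their means become equal."""
        means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel scene average
        gains = means.mean() / means              # equalize toward the gray mean
        return np.clip(rgb * gains, 0.0, 1.0)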

Video camera

Basic idea: take one image after another in sequence (temporal sampling)
Legacy television standards (PAL, NTSC, ...) require interlaced video
Take one half-image with all odd rows and then another half-image with all even rows, odd, even, etc.
Odd and even rows are exposed at different times
Motivation: better bandwidth usage in broadcast TV
Today, progressive scan (or non-interlaced) video is becoming more and more common
Used in many modern video standards

Interlaced vs. progressive scan

Interlaced scan: e.g., one half-image at 50 Hz, i.e. one full image at 25 Hz
Progressive scan: e.g., one full image at 25 Hz

Interlaced vs. progressive scan

Sometimes interlaced video (top) is represented as a sequence of complete images, but where the even and odd lines are taken at different time points (bottom)
De-interlacing can be done by interpolation both spatially and over time
loss of spatial resolution (see the sketch below)
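A minimal spatial de-interlacing sketch (illustrative Python): a full frame is rebuilt from one field by averaging the missing rows from their vertical neighbours, which is where the loss of spatial resolution comes from.

    import numpy as np

    def deinterlace_field(field):
        """field: (H/2, W) array holding the even rows of a frame.
        Returns an (H, W) frame; each odd row is the mean of its neighbours."""
        h2, w = field.shape
        frame = np.empty((2 * h2, w))
        frame[0::2] = field                           # measured (even) rows
        below = np.vstack([field[1:], field[-1:]])    # even row below each odd row
        frame[1::2] = 0.5 * (field + below)           # purely spatial interpolation
        return frame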


Modern consumer cameras

The effects described here relate to any type of light-measuring digital camera
Modern cameras (e.g., in mobile phones), however, include increasingly sophisticated processing of the image and control of the camera:
Automatic exposure time control
Automatic focus
Red-eye removal
Color balancing
Motion compensation


Rapid development
The technology related to image sensors is in
rapid development
The components are constantly becoming smaller (Moore's law)
New solutions to various problems appear at high
pace
More and more functionality is being integrated with
the image sensor
Image sensors are being integrated with other
functionalities (all kinds of supervision, control, and
surveillance anywhere and everywhere)