
Image Compression

"But life is short and information endless... Abbreviation is a necessary evil and the abbreviator's business is to make the best of a job which, although intrinsically bad, is still better than nothing."
Aldous Huxley

M. Carli - 2005

What's Image Compression?

Reduction of the amount of data required to represent an image, through the removal of redundant data.
Mathematically: transforming a 2-D pixel array into a statistically uncorrelated data set.

Why Compression?

Important in data storage and data transmission. Examples:
- Progressive transmission of images/videos (Internet)
- Video coding (HDTV, teleconferencing)
- Digital libraries and image databases
- Remote sensing
- Medical imaging

Digital image

A digital image is a two-dimensional sequence of samples.

Discrete image intensities

Unsigned B bits: x[n1, n2] ∈ {0, 1, ..., 2^B - 1}
Signed B bits: x[n1, n2] ∈ {-2^(B-1), -2^(B-1)+1, ..., -1, 0, 1, ..., 2^(B-1) - 1}
Usually B = 8.

Color

Color images: three values per sample location (red, green, and blue):
xr[n1, n2], xg[n1, n2], xb[n1, n2]

Example: JPEG Standard

A 500 x 500 color image represented with 8 bits/channel/pixel requires
500 x 500 x 8 x 3 = 6,000,000 bits (6 Mbits).
With a compression standard like JPEG, it is possible to represent it with a much smaller number of bits!

Lossless vs. Lossy Compression

Compression techniques divide into two families: lossless and lossy.
Lossless (or information-preserving) compression: images can be compressed and restored without any loss of information (e.g., medical imaging, satellite imaging).
Lossy compression: perfect recovery is not possible, but much larger data compression is achieved (e.g., TV signals, teleconferencing).

Data and Information

Data are the means by which information is conveyed. Various amounts of data may be used to represent the same amount of information.
Data redundancy: if n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy of the first data set with respect to the second is

R_D = 1 - 1/C_R,  where C_R = n1/n2 is the compression ratio.

Examining R_D = 1 - 1/C_R

Case 1: n2 = n1, so C_R = 1 and R_D = 0: the first data set contains no redundant information.
Case 2: n2 << n1, so C_R → ∞ and R_D → 1: highly redundant data, significant compression.
Case 3: n2 >> n1, so C_R → 0 and R_D → -∞: (undesirable) data expansion.
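
A minimal MATLAB sketch of these two quantities, comparing the file sizes of an uncompressed image and a compressed version of it (the file names are illustrative):

    info1 = imfinfo('lena.bmp');   % uncompressed data set (hypothetical file)
    info2 = imfinfo('lena.jpg');   % compressed data set (hypothetical file)
    n1 = info1.FileSize;           % information-carrying units (bytes) in set 1
    n2 = info2.FileSize;           % information-carrying units (bytes) in set 2
    CR = n1 / n2;                  % compression ratio
    RD = 1 - 1/CR;                 % relative data redundancy
    fprintf('CR = %.2f:1, RD = %.2f\n', CR, RD);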

Redundancy

C_R ∈ (0, ∞) and R_D ∈ (-∞, 1).
In digital image compression there exist three basic data redundancies:
1. Coding redundancy
2. Interpixel redundancy
3. Psychovisual redundancy

Example

Suppose that C_R = 10 (or 10:1). This means that the first data set needs 10 information-carrying units (e.g., bits) for every single unit needed by the second.
The corresponding redundancy is R_D = 0.9, which says that 90% of the data in the first set is redundant.

Coding Redundancy

Let r_k ∈ [0, 1] be a discrete random variable representing the L gray levels of an image. Its probability is
p_r(r_k) = n_k / n,  k = 0, 1, ..., L - 1
Let l(r_k) be the number of bits used to represent each value of r_k. The average number of bits required to represent each pixel is
L_avg = Σ_{k=0}^{L-1} l(r_k) p_r(r_k)

Coding

The average length of the code words assigned to the various gray-level values is the sum, over all gray levels, of the product of the number of bits used to represent each gray level and the probability that the gray level occurs.
The total number of bits needed to code an MxN image is M·N·L_avg.
Usually l(r_k) = m bits (a constant). Then L_avg = Σ_k m p_r(r_k) = m.

Coding

It makes sense to assign fewer bits to those r_k for which p_r(r_k) is large, in order to reduce the sum
L_avg = Σ_{k=0}^{L-1} l(r_k) p_r(r_k)
This achieves data compression and results in a variable-length code: more probable gray levels get fewer bits.

Example: Variable-Length Coding

For a given gray-level distribution, compare a natural binary code (Code 1) with a variable-length code (Code 2):
Code 1: L_avg = 3 bits
Code 2: L_avg = 2(0.19) + 2(0.25) + ... = 2.7 bits
C_R = 3 / 2.7 = 1.11
R_D = 1 - 1/1.11 = 0.099
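
A sketch of the computation in MATLAB. Only the first two probabilities appear above; the remaining values and the Code 2 lengths are assumed from the classic textbook table:

    p  = [0.19 0.25 0.21 0.16 0.08 0.06 0.03 0.02];  % gray-level probabilities (assumed)
    l1 = 3 * ones(1, 8);                             % Code 1: natural 3-bit code
    l2 = [2 2 2 3 4 5 6 6];                          % Code 2 lengths (assumed)
    Lavg1 = sum(l1 .* p)                             % = 3 bits
    Lavg2 = sum(l2 .* p)                             % = 2.7 bits
    CR = Lavg1 / Lavg2                               % = 1.11
    RD = 1 - 1/CR                                    % = 0.099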

Example: Rationale Behind Variable-Length Coding

Assign the shortest codewords to the most probable gray levels and the longest codewords to the least probable ones.
(plot: codeword length vs. probability; length decreases as probability increases)

Interpixel Redundancy

Two images can have the same (trimodal) histogram and yet very different structure. Compare the normalized autocorrelation coefficients along one line of each image:
γ(Δn) = A(Δn) / A(0)
where
A(Δn) = (1/(N - Δn)) Σ_{y=0}^{N-1-Δn} f(x, y) f(x, y + Δn)
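
A minimal MATLAB sketch of this measurement, assuming a grayscale input image (file name and row index are illustrative):

    f = im2double(imread('image.tif'));    % hypothetical grayscale image
    x = 100;                               % row along which to measure (arbitrary)
    N = size(f, 2);
    gamma = zeros(1, N-1);
    for dn = 0:N-2
        A = sum(f(x, 1:N-dn) .* f(x, 1+dn:N)) / (N - dn);
        if dn == 0, A0 = A; end
        gamma(dn+1) = A / A0;              % normalized autocorrelation
    end
    plot(0:N-2, gamma), xlabel('\Deltan'), ylabel('\gamma(\Deltan)')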

Interpixel Redundancy

The second image shows high correlation between pixels 45 and 90 samples apart. Adjacent pixels of both images are highly correlated.
Interpixel (or spatial) redundancy: the value of any given pixel can be reasonably predicted from the values of its neighbors; as a consequence, any single pixel carries a small amount of information.
Interpixel redundancy can be reduced through mappings (e.g., differences between adjacent pixels).

Example: Run-Length Coding

A 1024 x 343 x 1 bit binary image is represented by 12,166 runs, and 11 bits are necessary to represent each run-length pair:
C_R = (1024 x 343 x 1) / (12,166 x 11) = 2.63
R_D = 1 - 1/2.63 = 0.62
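
A sketch of run-length coding for one line of a binary image (toy data):

    line   = [1 1 1 0 0 1 1 1 1 0];           % one line of a binary image
    d      = find(diff(line) ~= 0);           % positions where the value changes
    starts = [1, d + 1];                      % start index of each run
    lens   = diff([starts, numel(line) + 1]); % run lengths
    vals   = line(starts);                    % run values
    runs   = [vals; lens]                     % one (value, length) pair per run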

Psychovisual Redundancy

The eye does not respond with equal sensitivity to all visual information.
Certain information has less relative importance than other information in normal visual processing; it is psychovisually redundant.
It can be eliminated without significantly impairing the quality of image perception.

Irrelevance in Color Imagery

The human visual system has much lower acuity for color hue and saturation than for brightness.
Use a color transform to facilitate exploiting that property.
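
A sketch of the idea with MATLAB's luminance/chrominance transform: the chroma channels can be represented coarsely (here, 2x subsampled) with little visible effect:

    rgb = imread('peppers.png');                  % demo image shipped with MATLAB
    ycc = rgb2ycbcr(rgb);                         % Y = luma; Cb, Cr = chroma
    Y  = ycc(:,:,1);
    Cb = imresize(imresize(ycc(:,:,2), 0.5), 2);  % crude 2:1 chroma subsampling
    Cr = imresize(imresize(ycc(:,:,3), 0.5), 2);
    back = ycbcr2rgb(cat(3, Y, Cb, Cr));          % hardly distinguishable from rgb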

Compression by Quantization

(figure: the same image at 8 bits; at 4 bits with uniform quantization (2:1); and at 4 bits with Improved Gray Scale (IGS) quantization (2:1))

Objective Fidelity Criteria

f(x, y) is the input image; f̂(x, y) is the estimate or approximation of f(x, y) resulting from compression and decompression.
Error between the two images:
e(x, y) = f̂(x, y) - f(x, y)
Root-mean-square error:
e_rms = [ (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} ( f̂(x, y) - f(x, y) )² ]^(1/2)

Signal-to-Noise Ratio (SNR)

Mean-square SNR of the output image:
SNR_ms = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f̂(x, y)² / Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [ f̂(x, y) - f(x, y) ]²
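
Both criteria in MATLAB, assuming the original and decompressed images are available as files (names illustrative):

    f    = double(imread('original.tif'));      % hypothetical input image
    fhat = double(imread('decompressed.tif'));  % hypothetical approximation
    e    = fhat - f;                            % error image
    erms  = sqrt(mean(e(:).^2))                 % root-mean-square error
    SNRms = sum(fhat(:).^2) / sum(e(:).^2)      % mean-square SNR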

Subjective Fidelity Criteria

(table: a subjective quality rating scale used by human observers)

Image Compression Models

Source encoder: removes input redundancies.
Channel encoder: increases the noise immunity of the source encoder's output.

Typical structured compression system

(block diagram of a typical structured compression system)

Source Encoder and Source Decoder Model

(block diagram of the source encoder and source decoder)

Source Encoder

Mapper: designed to reduce interpixel redundancy, e.g.:
- Run-length encoding
- Transform encoding (e.g., the DCT in the JPEG standard)
Quantizer: reduces psychovisual redundancies (it cannot be used in lossless compression).
Symbol encoder: creates a fixed- or variable-length code; it reduces coding redundancies.

Quantizer

Goal: reduce the number of possible amplitude values for coding.
(figure: a simple scalar quantizer with four output indices)
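
A sketch of such a four-level uniform scalar quantizer (the [-1, 1] input range is an assumption):

    x    = linspace(-1, 1, 9);                    % sample amplitudes in [-1, 1]
    nlev = 4;                                     % four output indices (2 bits)
    step = 2 / nlev;                              % width of each decision interval
    idx  = min(floor((x + 1) / step), nlev - 1);  % output index, 0..3
    xhat = -1 + (idx + 0.5) * step;               % reconstruction levels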

The Channel Encoder and Decoder

The channel encoder adds controlled redundancy to the data to protect it from channel noise.
Hamming encoding is based on appending enough bits to the data to ensure that some minimum number of bits must change between valid code words, thereby providing resiliency against transmission errors.
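
A sketch of Hamming (7,4) encoding, one common construction (parity bits in positions 1, 2, and 4; minimum distance 3):

    G = [1 1 0 1;    % p1 = d1 + d2 + d4
         1 0 1 1;    % p2 = d1 + d3 + d4
         1 0 0 0;    % d1
         0 1 1 1;    % p3 = d2 + d3 + d4
         0 1 0 0;    % d2
         0 0 1 0;    % d3
         0 0 0 1];   % d4
    d = [1 0 1 1]';         % 4 data bits
    c = mod(G * d, 2)'      % 7-bit code word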

Measuring Information

Information can be modeled as a probabilistic process that can be measured in a manner which agrees with intuition.
A random event E that occurs with probability P(E) is said to contain
I(E) = log(1/P(E)) = -log P(E)
units of information. I(E) is often called self-information.

Self-Information

If P(E) = 1, then I(E) = 0 (a certain event carries no information).
If the base of the logarithm is 2, the unit of information is called a bit.
If P(E) = 1/2, then I(E) = -log2(1/2) = 1 bit.
Example: flipping a fair coin and communicating the result conveys exactly 1 bit of information.

Entropy

An information source emits symbols from the alphabet {a1, a2, ..., aL}.
Let p(al), l = 1, 2, ..., L, be the probability of each symbol. Then the entropy (or uncertainty) of the source is
H = -Σ_{l=1}^{L} p(al) log2 p(al)  bits/symbol

Computing the Entropy of an Image

Model the image as an 8-bit gray-level source emitting statistically independent pixels. Consider the 8-bit image:

21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243

Gray level   Count   Probability
21           12      3/8
95            4      1/8
169           4      1/8
243          12      3/8

H = 1.81 bits/pixel (first-order estimate)
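
The first-order estimate in MATLAB:

    img  = repmat([21 21 21 95 169 243 243 243], 4, 1);
    vals = unique(img(:));
    p = arrayfun(@(v) mean(img(:) == v), vals);   % symbol probabilities
    H = -sum(p .* log2(p))                        % = 1.81 bits/pixel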

Using Mappings to Reduce Entropy

Keep the first column and replace each following column with the arithmetic difference between adjacent columns:

21 0 0 74 74 74 0 0
21 0 0 74 74 74 0 0
21 0 0 74 74 74 0 0
21 0 0 74 74 74 0 0

Gray level or difference   Count   Probability
0                          16      1/2
21                          4      1/8
74                         12      3/8

H = 1.41 bits/pixel (first-order estimate)
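
The same estimate after the difference mapping:

    img  = repmat([21 21 21 95 169 243 243 243], 4, 1);
    dimg = [img(:,1), diff(img, 1, 2)];           % first column + column differences
    vals = unique(dimg(:));
    p = arrayfun(@(v) mean(dimg(:) == v), vals);
    H = -sum(p .* log2(p))                        % = 1.41 bits/pixel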

Error-Free (or Lossless) Compression

In some applications, error-free (lossless) compression is the only acceptable means of data reduction (e.g., medical and business documents, satellite data).
It is applicable to both binary and gray-scale images.
It consists of two operations:
1. An alternative image representation that reduces interpixel redundancies
2. Coding of the representation to eliminate coding redundancies

Huffman (Variable-Length) Coding

Devised by Huffman in 1952 for removing coding redundancy.
Property: if the symbols of an information source are coded individually, Huffman coding yields the smallest possible number of code symbols per source symbol.

Huffman Coding

Method: create a series of source reductions by ordering the probabilities of the symbols under consideration and combining the lowest-probability symbols into a single symbol that replaces them in the next source reduction.
Rationale: assign the shortest possible codewords to the most probable symbols.

Example: Huffman Source Reductions

(table: symbols are ordered according to decreasing probability and combined step by step)

Example: Huffman Code Assignment Procedure

L_avg = 0.4(1) + 0.3(2) + 0.1(3) + 0.1(4) + 0.06(5) + 0.04(5) = 2.2 bits/symbol
H = -Σ p log2(p) = 2.14 bits/symbol
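
A sketch that reproduces these numbers (huffmandict is in the Communications Toolbox; the probabilities are the ones above):

    p = [0.4 0.3 0.1 0.1 0.06 0.04];
    [dict, Lavg] = huffmandict(1:6, p);   % Lavg = 2.2 bits/symbol
    H = -sum(p .* log2(p))                % = 2.14 bits/symbol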

Huffman Coding

Coding/decoding is accomplished with a lookup table.
It is a block code: each source symbol is mapped into a fixed sequence of code symbols.
It is instantaneous: each code word in a string of code symbols can be decoded without looking at succeeding symbols.
It is uniquely decodable: any string of code symbols can be decoded in only one way.
Example: 010100111100 → 01010 | 011 | 1 | 1 | 00 → a3 a1 a2 a2 a6

Arithmetic Coding

Unlike Huffman coding, arithmetic coding generates nonblock codes.
A one-to-one correspondence between source symbols and code words does not exist.
An entire sequence of source symbols (or message) is assigned a single arithmetic code word.

Arithmetic Coding

Property: the code word itself defines an interval of real numbers between 0 and 1. As the number of symbols in the message increases, the interval becomes smaller and the number of bits necessary to represent it becomes larger.
Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence.

Example: Arithmetic Coding

Suppose that a 4-symbol source generates the sequence (or message) a1 a2 a3 a3 a4.
(figure: successive subdivision of the [0, 1) interval; any number in the final subinterval, e.g. 0.068, can be used to represent the message)
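
A sketch of the interval refinement. The symbol probabilities (0.2, 0.2, 0.4, 0.2) do not appear on the slide and are assumed from the classic version of this example:

    p   = [0.2 0.2 0.4 0.2];      % P(a1)..P(a4), assumed
    cum = [0 cumsum(p)];          % subinterval boundaries in [0, 1)
    msg = [1 2 3 3 4];            % the message a1 a2 a3 a3 a4
    lo = 0; hi = 1;
    for s = msg
        w  = hi - lo;             % current interval width
        hi = lo + w * cum(s + 1);
        lo = lo + w * cum(s);
    end
    [lo, hi]                      % final interval: 0.068 lies inside it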

Lempel-Ziv-Welch (LZW) Coding

Uses a dictionary that is adapted to the data.
It assigns fixed-length codewords to variable-length sequences of source symbols.
The decoder builds the matching dictionary based on the codewords received.
Used in the GIF, TIFF, and PDF formats.

Example: LZW Coding

Consider the 4x4, 8-bit image

39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126

(table: step-by-step growth of the dictionary while the image is encoded row by row)
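
A minimal LZW encoder over this image, starting from a dictionary holding the 256 single-byte symbols:

    img  = repmat([39 39 126 126], 4, 1);
    src  = img'; src = src(:)';                % row-by-row pixel sequence
    dict = containers.Map('KeyType','char','ValueType','double');
    for k = 0:255, dict(char(k)) = k; end      % initial dictionary
    next = 256; w = ''; out = [];
    for c = char(src)
        if isKey(dict, [w c])
            w = [w c];                         % extend the current sequence
        else
            out(end+1) = dict(w);              % emit code for longest match
            dict([w c]) = next; next = next + 1;
            w = c;
        end
    end
    out(end+1) = dict(w);
    out                                        % 39 39 126 126 256 258 260 259 257 126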

Lossless Predictive Coding

Encoder: the prediction error
e_n = f_n - f̂_n
is computed and coded with a variable-length code.
Decoder:
f_n = e_n + f̂_n

The Predictor

Generates an estimate f̂ of the value of a given pixel based on the values of some past input pixels (temporal prediction) or of some neighboring pixels (spatial prediction). With m = order of the predictor:
Example (temporal):
f̂_n = round[ Σ_{i=1}^{m} α_i f_{n-i} ]
Example (spatial, along a row):
f̂(x, y) = round[ Σ_{i=1}^{m} α_i f(x, y-i) ]

Example: Predictive Coding

Previous-pixel predictor:
f̂(x, y) = round[ α f(x, y-1) ],  with α = 1
e(x, y) = f(x, y) - f̂(x, y)
(figure: histograms of the original image and of the error image; the prediction error is sharply peaked around zero and well modeled by a Laplacian PDF)
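
The previous-pixel experiment in MATLAB (cameraman.tif ships with the Image Processing Toolbox):

    f = double(imread('cameraman.tif'));
    fpred = [f(:,1), f(:, 1:end-1)];    % f(x, y-1): shift each row right
    e = f(:, 2:end) - fpred(:, 2:end);  % prediction error (first column excluded)
    histogram(e(:))                     % sharply peaked, Laplacian-like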

Lossy Predictive Coding

Encoder: the prediction error is quantized to ė_n, and the reconstruction
ḟ_n = ė_n + f̂_n
is fed back into the predictor; the feedback loop prevents error buildup at the decoder's output.
Decoder:
ḟ_n = ė_n + f̂_n

Example: Delta Modulation

Predictor: f̂_n = α ḟ_{n-1}  (0 < α < 1)
Quantizer: ė_n = +ζ for e_n > 0, -ζ otherwise
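
A sketch of delta modulation on a 1-D signal (α, ζ, and the input are illustrative choices):

    x = [14 15 14 15 13 15 15 14 20 26 27 28 27 27 29 37];  % toy input
    alpha = 1; zeta = 1.5;
    fdot = zeros(size(x)); fdot(1) = x(1);        % first sample sent verbatim
    for n = 2:numel(x)
        fhat = alpha * fdot(n-1);                 % prediction
        edot = zeta * (2 * (x(n) > fhat) - 1);    % 1-bit quantized error
        fdot(n) = edot + fhat;                    % reconstruction
    end
    plot(1:numel(x), x, 'o-', 1:numel(x), fdot, 's-')  % shows slope overload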

Optimal Predictors

Minimize the encoder's mean-square prediction error
E{e_n²} = E{ [f_n - f̂_n]² }
subject to the constraint
ḟ_n = ė_n + f̂_n ≈ e_n + f̂_n = f_n
(the quantization error is assumed to be negligible)
and to
f̂_n = Σ_{i=1}^{m} α_i f_{n-i}
(the prediction is a linear combination of the m previous pixels)

Optimal predictor

The optimization criterion is chosen to minimize the mean-square prediction error.
The quantization error is assumed to be negligible.
The prediction is constrained to a linear combination of m previous pixels.

Differential Pulse Code Modulation (DPCM)

The solution of
min_α E{ [ f_n - Σ_{i=1}^{m} α_i f_{n-i} ]² }
is
α = R⁻¹ r
where α = [α_1, α_2, ..., α_m]ᵀ, R is the m x m autocorrelation matrix with entries E{f_{n-i} f_{n-j}}, and r = [E{f_n f_{n-1}}, ..., E{f_n f_{n-m}}]ᵀ.
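
A sketch of this normal-equation solution for m = 2, using sample autocorrelations from one image row:

    f = double(imread('cameraman.tif'));
    s = f(128, :) - mean(f(128, :));     % one zero-mean line
    N = numel(s);
    rho = @(k) sum(s(1:N-k) .* s(1+k:N)) / (N - k);  % sample autocorrelation
    R = [rho(0) rho(1); rho(1) rho(0)];
    r = [rho(1); rho(2)];
    alpha = R \ r                        % optimal [alpha_1; alpha_2]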

Example: Comparison of Four Linear Predictors

f̂(x, y) = 0.97 f(x, y-1)
f̂(x, y) = 0.5 f(x, y-1) + 0.5 f(x-1, y)
f̂(x, y) = 0.75 f(x, y-1) + 0.75 f(x-1, y) - 0.5 f(x-1, y-1)
f̂(x, y) = 0.97 f(x, y-1) if Δh ≤ Δv, 0.97 f(x-1, y) otherwise
where
Δh = |f(x-1, y) - f(x-1, y-1)|,  Δv = |f(x, y-1) - f(x-1, y-1)|

Example: Comparison of Four Linear Predictors

(figure: the original image and the prediction-error images of the four predictors)

Transform Coding

(block diagrams of the transform coding encoder and decoder)

Transform Selection

Consider an image f(x, y) of size N x N.
The forward discrete transform is defined as
T(u, v) = Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) g(x, y, u, v),  u, v = 0, 1, ..., N-1
The inverse discrete transform is defined as
f(x, y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} T(u, v) h(x, y, u, v),  x, y = 0, 1, ..., N-1

Example: Discrete Fourier Transform

Forward transform kernel:
g(x, y, u, v) = e^(-j2π(ux + vy)/N)
Inverse transform kernel:
h(x, y, u, v) = (1/N²) e^(j2π(ux + vy)/N)

Discrete Cosine Transform (DCT)

g(x, y, u, v) = h(x, y, u, v) = α(u) α(v) cos[ (2x + 1)uπ / 2N ] cos[ (2y + 1)vπ / 2N ]
where
α(u) = √(1/N) for u = 0;  √(2/N) for u = 1, 2, ..., N-1
and similarly for α(v).
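
The kernel above is separable; a sketch that builds the 1-D DCT matrix and checks it against dct2 (Image Processing Toolbox) on a random block:

    N = 8;
    alpha = [sqrt(1/N), sqrt(2/N) * ones(1, N-1)];
    C = zeros(N);            % C(u+1, x+1) = alpha(u) cos((2x+1)u pi / 2N)
    for u = 0:N-1
        for x = 0:N-1
            C(u+1, x+1) = alpha(u+1) * cos((2*x + 1) * u * pi / (2*N));
        end
    end
    f = rand(N);
    T = C * f * C';          % separable 2-D DCT
    max(max(abs(T - dct2(f))))   % ~1e-15: same transform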

DCT Basis Functions For N=4

(figure: the two-dimensional DCT basis functions for N = 4)

Example: DFT vs. DCT

(8x8 subimages; truncation of 50% of the transform coefficients)
DFT: e_rms = 1.28
DCT: e_rms = 0.68

Why DCT?

Blocking artifacts are less pronounced in the DCT than in the DFT.
The DCT is a good approximation of the Karhunen-Loeve Transform (KLT), which is optimal in terms of energy compaction.
However, unlike the KLT, the DCT has image-independent basis functions.
The DCT is used in the JPEG compression standard.

Implicit Periodicity: DFT vs. DCT

(figure: the periodic extension implied by the DFT vs. the mirrored extension implied by the DCT)

Subimage Size Selection

(figure: reconstruction error as a function of subimage size)

Example: Different Subimage Sizes

(figure: approximation of the original image (left) and relative error (right), keeping 25% of the DCT coefficients, with 2x2, 4x4, and 8x8 blocks)

Bit Allocation

In most transform coding systems, the coefficients to retain are selected either on the basis of maximum variance (zonal coding) or on the basis of maximum magnitude (threshold coding).
Bit allocation is the overall process of truncating, quantizing, and coding the coefficients of a transformed subimage.

Threshold Coding

1. For each subimage, arrange the transform coefficients in decreasing order of magnitude.
2. Keep only the top X% of the coefficients and discard the rest.
3. Code the retained coefficients using a variable-length code.

Zonal Coding

1. Compute the variance of each of the transform coefficients (this is done over the ensemble of subimages).
2. Keep the X% of the coefficients which have maximum variance.
3. Code the retained coefficients using a variable-length code, with a number of bits proportional to the variance.
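
A sketch of threshold coding on a single 8x8 block, keeping the top 12.5% of DCT coefficients by magnitude:

    f   = double(imread('cameraman.tif'));
    blk = f(1:8, 1:8) - 128;                 % one block, level shifted
    T   = dct2(blk);
    [~, order] = sort(abs(T(:)), 'descend');
    keep = order(1:round(0.125 * 64));       % the 8 largest coefficients
    Tt = zeros(8); Tt(keep) = T(keep);       % threshold mask applied
    blk_hat = idct2(Tt) + 128;               % approximate reconstruction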

Typical Masks

Zonal mask: 1 := keep, 0 := discard; the companion zonal bit allocation assigns more bits to the coefficients with higher variance.
Threshold coding is adaptive: the locations of the coefficients to keep (the threshold mask) depend on the subimage, and the thresholded coefficients are reordered before coding.

Threshold Coding vs. Zonal Coding

(figure: approximations obtained by using 12.5% of the 8x8 DCT coefficients, with zonal coding vs. threshold coding)

JPEG (Joint Photographic Experts Group)

JPEG is a compression standard for still images. It defines three different coding systems:
1. A lossy baseline coding system based on the DCT (adequate for most compression applications)
2. An extended coding system for greater compression, higher precision, or progressive reconstruction applications
3. A lossless independent coding system for reversible compression


JPEG Baseline Coding System

1. Computing the DCT: the image is divided into 8x8 blocks; each pixel is level-shifted by subtracting 2^(n-1), where 2^n is the maximum number of gray levels in the image. Then the DCT of each block is computed. The precision of input and output data is restricted to 8 bits, whereas the DCT values are represented with 11 bits.
2. Quantization: the DCT coefficients are thresholded and coded using a quantization matrix, then reordered by zig-zag scanning to form a 1-D sequence.
3. Coding: the nonzero AC coefficients are Huffman coded. The DC coefficient of each block is DPCM coded relative to the DC coefficient of the previous block.

Example: Implementing the JPEG Baseline Coding System with MATLAB

An 8x8 block taken from an image with N = 256 gray levels:

f =
   183 160  94 153 194 163 132 165
   183 153 116 176 187 166 130 169
   179 168 171 182 179 170 131 167
   177 177 179 177 179 165 131 167
   178 178 179 176 182 164 130 171
   179 180 180 179 183 164 130 171
   179 179 180 182 183 170 129 173
   180 179 181 179 181 170 130 169

Example: Level Shifting

fs = f - 128 =
    55  32 -34  25  66  35   4  37
    55  25 -12  48  59  38   2  41
    51  40  43  54  51  42   3  39
    49  49  51  49  51  37   3  39
    50  50  51  48  54  36   2  43
    51  52  52  51  55  36   2  43
    51  51  52  54  55  42   1  45
    52  51  53  51  53  42   2  41

Example: Computing the DCT

dcts = round(dct2(fs)) =
   312  56 -27  17  79 -60  26 -26
   -38 -28  13  45  31  -1 -24 -10
   -20 -18  10  33  21  -6 -16  -9
   -11  -7   9  15  10 -11 -13   1
    -6   1   6  -4  -7  -5  -2   2
     3   3   0  -7  -4  -4  -8   4
     3   5   0  -1  -2  -3   .   .
     3   1  -1   .   .   .   .   .

(the entries shown as "." are small in magnitude; every one of them quantizes to 0 in the next step)

Example: The Quantization Matrix

qmat =
    16  11  10  16  24  40  51  61
    12  12  14  19  26  58  60  55
    14  13  16  24  40  57  69  56
    14  17  22  29  51  87  80  62
    18  22  37  56  68 109 103  77
    24  35  55  64  81 104 113  92
    49  64  78  87 103 121 120 101
    72  92  95  98 112 100 103  99

Example: Thresholding

t = round(dcts ./ qmat) =
    20   5  -3   1   3  -2   1   0
    -3  -2   1   2   1   0   0   0
    -1  -1   1   1   1   0   0   0
    -1   0   0   1   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0

Example: Zig-Zag Scanning of the Coefficients

The top-left entry t(1,1) = 20 is the DC coefficient; the 63 AC coefficients are read in zig-zag order:

[20, 5, -3, -1, -2, -3, 1, 1, -1, -1, 0, 0, 1, 2, 3, -2, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, EOB]

EOB is the end-of-block marker: all remaining coefficients are zero.

Example: Coding the Coefficients

The DC coefficient is DPCM coded (difference between the DC coefficient of the current block and that of the previous block).
The AC coefficients are mapped to run-length pairs (zero-run, value):
(0,5), (0,-3), (0,-1), (0,-2), (0,-3), (0,1), (0,1), (0,-1), (0,-1), (2,1), (0,2), (0,3), (0,-2), (0,1), (0,1), (6,1), (0,1), (1,1), EOB
These are then Huffman coded (the codes are specified in the JPEG scheme).

Example: Decoding the Coefficients

ds_hat = t .* qmat =
   320  55 -30  16  72 -80  51   0
   -36 -24  14  38  26   0   0   0
   -14 -13  16  24  40   0   0   0
   -14   0   0  29   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0

Example: Computing the IDCT

fs_hat = round(idct2(ds_hat)) =
    67  12  -9  20  69  43  -8  42
    58  25  15  30  65  40  -4  47
    46  41  44  40  59  38   0  49
    41  52  59  43  57  42   3  42
    44  54  58  40  58  47   3  33
    49  52  53  40  61  47   1  33
    53  50  53  46  63  41   0  45
    55  50  56  53  64  34  -1  57

Example: Shifting Back the Coefficients

f_hat = fs_hat + 128 =
   195 140 119 148 197 171 120 170
   186 153 143 158 193 168 124 175
   174 169 172 168 187 166 128 177
   169 180 187 171 185 170 131 170
   172 182 186 168 186 175 131 161
   177 180 181 168 189 175 129 161
   181 178 181 174 191 169 128 173
   183 178 184 181 192 162 127 185

to be compared to

f =
   183 160  94 153 194 163 132 165
   183 153 116 176 187 166 130 169
   179 168 171 182 179 170 131 167
   177 177 179 177 179 165 131 167
   178 178 179 176 182 164 130 171
   179 180 180 179 183 164 130 171
   179 179 180 182 183 170 129 173
   180 179 181 179 181 170 130 169
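
The whole per-block pipeline can be collected in a few lines (f and qmat as defined above; dct2/idct2 require the Image Processing Toolbox):

    fs     = f - 128;                      % level shifting
    t      = round(dct2(fs) ./ qmat);      % DCT + quantization
    ds_hat = t .* qmat;                    % dequantization
    f_hat  = round(idct2(ds_hat)) + 128;   % reconstruction
    erms   = sqrt(mean((f_hat(:) - f(:)).^2))   % objective fidelity check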

Example: Original Image vs. Decompressed Image

(figure: the original image next to the decompressed one)

Example: JPEG Compression

(figure: the same image compressed at C_R = 34:1 with e_rms = 3.42, and at C_R = 67:1 with e_rms = 6.33)
