
Bayesian Modeling of Uncertainty in Low-Level Vision

Paper by Richard Szeliski, International Journal of Computer Vision (1990)

Presentation by Michael Ross

Bayesian methods for intrinsic images


Dense fields and inverse problems (depth maps, orientation, stereo matching) are traditionally solved by energy minimization or regularization methods. Replacing these methods with Bayesian methods gives explicit modeling of uncertainty and a better understanding of the models.

Intrinsic images
Finding dense information fields from sparse or uncertain data. Interpolation and accounting for uncertainty in measurement.

Energy methods

E(u) = E_d(u, d) + \lambda E_p(u)

E_d(u, d) = \frac{1}{2} \sum_i c_i \left( u(x_i, y_i) - d_i \right)^2

E_p(u) = \frac{1}{2} \iint \left( u_x^2 + u_y^2 \right) dx\, dy

The data term penalizes weighted squared error at the measurement points; the prior term is the membrane smoothness energy (the thin-plate model uses second derivatives instead).

Discrete energy methods


E_d(u, d) = \frac{1}{2} (u - d)^T A_d (u - d)

E_p(u) = \frac{1}{2} u^T A_p u

E(u) = E_d(u, d) + \lambda E_p(u) = \frac{1}{2} u^T A u - u^T b + c = \frac{1}{2} (u - u')^T A (u - u') + k

where A = A_d + \lambda A_p, b = A_d d, and u' = A^{-1} b is the minimum-energy estimate.
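As a concrete illustration, here is a minimal sketch (not from the paper) of the discrete formulation on a 1-D grid: A_p is the membrane (first-difference) prior information matrix, A_d is diagonal with per-point confidences c_i, and the minimum-energy estimate u' solves A u = b. All specific values (grid size, sample locations, lambda) are assumptions.

```python
import numpy as np

# 1-D membrane prior: A_p = D^T D with D the first-difference operator.
n = 100
D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference operator
A_p = D.T @ D                             # membrane prior information matrix

c = np.zeros(n)                           # confidences: three sparse samples
c[[10, 50, 90]] = 1.0
d = np.zeros(n)
d[[10, 50, 90]] = [1.0, -0.5, 2.0]        # assumed measurement values
A_d = np.diag(c)

lam = 0.1                                 # assumed regularization parameter
A = A_d + lam * A_p
b = A_d @ d
u_map = np.linalg.solve(A, b)             # minimum-energy estimate u'
```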

Energy implies probability


Squared error terms imply Gaussian noise models. Smoothness (and discontinuity) can be modeled as a Markov random field. Using probability directly gives us access to information hidden by energy models (uncertainty modeling, model accuracy).

Markov random fields


The two-dimensional generalization of a Markov chain. Most commonly used to compute a MAP estimate: the assignment u that maximizes p(u | d). The Markov property states that each node depends only on its neighbors:

p(u_i \mid \{u_j, j \neq i\}) = p(u_i \mid u_j, j \in N_i)

where N_i is the neighborhood of node i.

Prior models
Describes p(u), our model of the world given no data. In principle, we can calculate it from the MRF terms. Gibbs distributions and sampling make these calculations simple and tractable. (Geman & Geman)
p(u) = \frac{1}{Z_p} \exp\left( -\frac{E_p(u)}{T} \right), \qquad E_p(u) = \sum_{c \in C} E_c(u)

where the sum runs over the graph cliques c.

Sampling from prior models


p(u_i \mid \{u_j, j \neq i\}) = \frac{1}{Z_i} \exp\left( -\frac{E(u_i \mid u)}{T} \right)

[Figures: thin plate on data; a sample from the thin-plate prior; coarse-to-fine sampling.]
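For the quadratic energies above, the local conditionals are Gaussian, so a Gibbs sweep is easy to write down. A minimal sketch, assuming the information-form energy E(u) = 0.5 u^T A u - u^T b from the discrete-energy slide:

```python
import numpy as np

# One Gibbs sweep for a Gaussian MRF: each conditional p(u_i | rest) is
# Gaussian with precision A_ii / T and mean (b_i - sum_{j!=i} A_ij u_j) / A_ii.
def gibbs_sweep(u, A, b, T=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    for i in range(len(u)):
        mean = (b[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i]
        u[i] = rng.normal(mean, np.sqrt(T / A[i, i]))
    return u
```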


Why is this so much better?

[Figures: thin plate on data; coarse-to-fine thin-plate sample; coarse-to-fine sampling.]


Fourier analysis reveals that the surfaces defined by thin-plate models are inherently fractal (self-similar across resolutions). Gibbs sampling at a single resolution is not a practical way to propagate the low-frequency components of the random field. Coarse-to-fine sampling is faster and produces more representative samples, as the sketch below illustrates.
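A hedged sketch of the coarse-to-fine idea on a 1-D grid: sample at a coarse resolution, interpolate up, and refine with Gibbs sweeps at the next resolution. Here make_A_b is an assumed builder returning (A, b) for each grid size, and gibbs_sweep is the routine from the previous sketch.

```python
import numpy as np

# Coarse-to-fine sampling: low frequencies are fixed cheaply at coarse
# levels; fine levels only need to add high-frequency detail.
def coarse_to_fine_sample(levels, make_A_b, sweeps=10):
    u = None
    for n in levels:                       # e.g. levels = [16, 64, 256]
        A, b = make_A_b(n)
        if u is None:
            u = np.zeros(n)
        else:                              # linear upsampling to the new grid
            u = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(u)), u)
        for _ in range(sweeps):
            u = gibbs_sweep(u, A, b)
    return u
```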

Sensor models
Directly incorporated and easy to swap.
p(d \mid u) = \frac{1}{Z_d} \exp\left( -E_d(u, d) \right), \qquad E_d(u, d) = \sum_i E_d^i(u_i, d_i)

Typical Gaussian sensor model (assuming uncorrelated errors).


p(d_i \mid u) = \frac{1}{\sqrt{2\pi}\, \sigma_i} \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_i^2} \right)

Contaminated Gaussians
A mixture of a narrow inlier Gaussian and a broad outlier Gaussian (\sigma_2 \gg \sigma_1) makes the sensor model robust to outliers:

p(d_i \mid u) \propto (1 - \epsilon) \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_1^2} \right) + \epsilon \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_2^2} \right)
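A minimal sketch of this likelihood; the parameter values (eps, sigma1, sigma2) are assumptions for illustration only.

```python
import numpy as np

# Contaminated-Gaussian sensor likelihood: narrow inlier Gaussian mixed
# with a broad outlier Gaussian, each properly normalized.
def contaminated_likelihood(u_i, d_i, eps=0.05, sigma1=0.1, sigma2=1.0):
    r2 = (u_i - d_i) ** 2
    inlier = np.exp(-r2 / (2 * sigma1**2)) / (np.sqrt(2 * np.pi) * sigma1)
    outlier = np.exp(-r2 / (2 * sigma2**2)) / (np.sqrt(2 * np.pi) * sigma2)
    return (1 - eps) * inlier + eps * outlier
```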

Virtual sensors
Use a lower-level vision algorithm (optical flow, for example) as a dense data source. Anandan and Weiss found that optical flow errors depend on the type of local data available (none, line, or corner). This can be used to construct a higher-level model that produces better flow estimates by explicitly modeling these dependencies in the sensor distribution.

Posterior models
The distribution of the output (intrinsic image) given the data, p(u | d), from which the MAP estimate is computed. Described by a Gibbs distribution (MRF) with energy:
E(u) = E_p(u) + E_d(u, d)

where E_p(u) is the prior model and E_d(u, d) is the sensor model.

Loss functions
Generic: choose the estimate u' that minimizes the expected loss

\int L(u, u')\, p(u \mid d)\, du

MAP: the loss L(u, u') = -\delta(u - u') makes the optimal estimate the maximizer of p(u' \mid d).

Loss functions allow for top-down influence.
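As a standard illustration (a textbook result, not specific to the paper), the squared-error loss picks out the posterior mean rather than the posterior mode:

\hat{u} = \arg\min_{u'} \int \|u - u'\|^2\, p(u \mid d)\, du = \int u\, p(u \mid d)\, du = \mathbb{E}[u \mid d]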

Uncertainty estimation
We can compute the covariance of the posterior model. In the Gaussian sensor model case the posterior energy is

E(u) = \frac{1}{2} (u - u')^T A (u - u')

so the posterior is Gaussian with mean u' and \operatorname{cov}(u) = A^{-1}.
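A short sketch of the direct computation, reusing the information matrix A from the discrete-energy sketch above:

```python
import numpy as np

# Posterior covariance in the Gaussian case: invert the information matrix.
# The diagonal gives per-node marginal variances.
cov = np.linalg.inv(A)                    # dense inverse; fine for small n
per_node_std = np.sqrt(np.diag(cov))      # marginal standard deviations
```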

Stochastic variance estimation


Run the Gibbs sampler and estimate the variance with a Monte Carlo algorithm. Time averages converge to the true (ensemble) statistics only over long runs, unless we use multiresolution sampling.

[Figures: variance estimates after 1000 iterations vs. after 100 iterations.]
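A minimal Monte Carlo sketch, reusing gibbs_sweep from the earlier sketch; the sweep and burn-in counts are assumed values:

```python
import numpy as np

# Estimate per-node posterior variance from Gibbs samples, discarding an
# initial burn-in period before accumulating statistics.
def monte_carlo_variance(A, b, sweeps=1000, burn_in=100):
    rng = np.random.default_rng(0)
    u = np.zeros(len(b))
    samples = []
    for t in range(sweeps):
        u = gibbs_sweep(u, A, b, rng=rng)
        if t >= burn_in:
            samples.append(u.copy())
    return np.var(np.array(samples), axis=0)  # per-node variance estimate
```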

Dynamic models
Kalman filtering is a natural part of the MRF framework.
u_0 \sim N(\bar{u}_0, P_0) \qquad \text{(prior model)}

u_k = F_k u_{k-1} + q_k, \quad q_k \sim N(0, Q_k) \qquad \text{(system model)}

d_k = H_k u_k + r_k, \quad r_k \sim N(0, R_k) \qquad \text{(sensor model)}

Dynamic models
This work applies Kalman filtering to dense fields by using sparse information matrices rather than dense covariance matrices.
A_k = A'_k + H_k^T R_k^{-1} H_k

b_k = b'_k + H_k^T R_k^{-1} d_k

u_k = A_k^{-1} b_k

where A'_k and b'_k are the predicted (prior) information matrix and vector for frame k.

Dynamic models
Why are the information matrices sparse? In the prior model we have local smoothness, so each column of the information matrix has non-zero entries only at the locally neighboring points. In the sensor model we usually assume the sensor errors are uncorrelated, so the sensor information matrix is diagonal. A sketch of the resulting update appears below.
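A hedged sketch of the information-form measurement update in the slide's notation: sparse predicted information A'_k, b'_k, a sparse measurement matrix H, and a diagonal sensor covariance R_k passed as a vector. Nothing here forms a dense covariance matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Information-form update: A_k = A'_k + H^T R^-1 H, b_k = b'_k + H^T R^-1 d.
def information_update(A_prior, b_prior, H, R_diag, d):
    R_inv = sp.diags(1.0 / R_diag)         # uncorrelated errors -> diagonal
    A_k = A_prior + H.T @ R_inv @ H        # stays sparse when H is local
    b_k = b_prior + H.T @ R_inv @ d
    u_k = spla.spsolve(sp.csc_matrix(A_k), b_k)  # estimate without A^{-1}
    return A_k, b_k, u_k
```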

Applications
Incremental depth from motion, using the Kalman filtering approach. Motion estimation without correspondence (discover the estimate that maximizes the likelihood that two sets of points are drawn from the same smooth surface). Maximum likelihood estimation for the regularization parameter.

Incremental depth from motion


Compute a displacement estimate using correlation between frames. Transform it to a disparity map using the known camera motion and integrate it with the predicted map (Kalman filtering). Regularize the map. Predict the next map using the known camera motion.

Motion without correspondence

[Figures: surface through point set #1; surface through point set #2; surface through both sets.]

Minimize the distance between point set #2 and the surface through set #1; equivalently, find the motion under which a single smooth surface fits through both sets.

ML parameter estimation
The regularization parameter \lambda can be chosen by maximum likelihood. Marginalizing out u,

p(d) = \int p(d \mid u)\, p(u)\, du

and with a Gaussian prior u \sim N(0, P_0) (information matrix P_0^{-1} = \lambda A_p) and the Gaussian sensor model d = H u + r, r \sim N(0, R), the data are themselves Gaussian:

p(d) = \left| 2\pi \left( H P_0 H^T + R \right) \right|^{-1/2} \exp\left( -\frac{1}{2}\, d^T \left( H P_0 H^T + R \right)^{-1} d \right)

Take the log and maximize with respect to \lambda.
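A sketch of that maximization over a grid of candidate lambdas, reusing A_p, c, and d from the discrete-energy sketch. H here is a selection matrix for the measured nodes, R is an assumed sensor noise covariance, and a small eps regularizer keeps the (singular) membrane prior invertible; all are illustrative assumptions.

```python
import numpy as np

# log p(d) for d ~ N(0, H P0 H^T + R), with P0 = (lam * A_p)^{-1} (regularized).
idx = np.flatnonzero(c)                    # nodes with measurements
H = np.eye(len(c))[idx]                    # selection matrix
R = 0.01 * np.eye(len(idx))                # assumed sensor noise covariance

def log_marginal(lam, eps=1e-6):
    P0 = np.linalg.inv(lam * A_p + eps * np.eye(A_p.shape[0]))
    S = H @ P0 @ H.T + R
    _, logdet = np.linalg.slogdet(2 * np.pi * S)
    return -0.5 * (logdet + d[idx] @ np.linalg.solve(S, d[idx]))

best_lam = max(np.logspace(-3, 1, 20), key=log_marginal)
```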

Conclusion
Probability models encompass energy models. We can draw samples to check their validity. We can model sensor behavior easily. Flexibility in choosing loss functions. Direct calculation of error.
