Bayesian probability
From Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/Bayesian_probability, retrieved 9/10/2005)
Bayesianism is the philosophical tenet that the mathematical theory of probability applies to the degree of plausibility of a
statement, or to the degree of belief a rational agent has in the truth of a statement. When combined with Bayes' theorem,
this updating of degrees of belief becomes Bayesian inference.
This is in contrast to frequentism, which rejects degree-of-belief interpretations of mathematical probability, and assigns
probabilities only to random events according to their relative frequencies of occurrence. The Bayesian interpretation of
probability allows probabilities to be assigned to random events, but also allows the assignment of probabilities to any
other kind of statement.
Whereas a frequentist and a Bayesian might both assign a 1/2 probability to the event of getting a head when a coin is
tossed, only a Bayesian might assign 1/1000 probability to a personal belief in the proposition that there was life on Mars
a billion years ago. This assertion is made without intending to assert anything about relative frequency.
Contents
1 History of Bayesian probability
2 Varieties of Bayesian probability
3 Bayesian and frequentist probability
4 Applications of Bayesian probability
5 Bayesian data analysis
6 See also
7 External links and references
History of Bayesian probability
For instance, Laplace estimated the mass of Saturn, given orbital data that were available
to him from various astronomical observations. He presented the result together with an
indication of its uncertainty, stating it like this: "It is a bet of 11,000 to 1 that the error of
this result is not 1/100th of its value." As Devinderjit Sivia has remarked, "He would have
won the bet, as another 150 years' accumulation of data has changed the estimate by only
0.63%!"
[figure: portrait of Thomas Bayes]
The general outlook of Bayesian probability, promoted by Laplace and several later authors, has been that the laws of
probability apply equally to propositions of all kinds. Several attempts have been made to ground this intuitive notion in
formal demonstrations. One line of argument is based on betting, as expressed by Bruno de Finetti and others. Another
line of argument is based on probability as an extension of ordinary logic to degrees of belief other than 0 and 1. This
argument has been expounded by Harold Jeffreys, Richard T. Cox, Edwin Jaynes and I. J. Good. Other well-known
proponents of Bayesian probability have included L. J. Savage, Frank P. Ramsey, John Maynard Keynes, and B. O.
Koopman.
The frequentist interpretation of probability was preferred by some of the most influential figures in statistics during the
first half of the twentieth century, including R.A. Fisher, Egon Pearson, and Jerzy Neyman. The mathematical foundation
of probability in measure theory via the Lebesgue integral was elucidated by A. N. Kolmogorov in the book Foundations
of the Theory of Probability in 1933. Thus for some decades the Bayesian interpretation fell out of favor. Beginning about
1950 and continuing into the present day, the work of Savage, Koopman, Abraham Wald, and others has led to broader
acceptance. Nevertheless, the rift between the "frequentists" and "Bayesians" continues to this day, with
mathematicians working on probability theory and empirical statisticians for the most part not talking to each other and
not attending each other's conferences.
Varieties of Bayesian probability
Subjective probability is supposed to measure the degree of belief an individual has in an uncertain proposition.
Some Bayesians do not accept this subjectivity. The chief exponents of the objectivist school were Edwin Thompson
Jaynes and Harold Jeffreys. Perhaps the main objectivist Bayesian now living is James Berger of Duke University. José
Bernardo and others accept some degree of subjectivity but believe a need exists for "reference priors" in many practical
situations.
Advocates of logical (or objective epistemic) probability, such as Harold Jeffreys, Richard Threlkeld Cox, and Edwin
Jaynes, hope to codify techniques that would enable any two persons having the same information relevant to the truth of
an uncertain proposition to independently calculate the same probability. Except for simple cases, the methods proposed
are controversial. Critics challenge the suggestion that it is possible or necessary in the absence of information to start
with an objective prior belief which would be acceptable to any two persons who have identical information.
Bayesian and frequentist probability
Bayes' theorem is often used to update the plausibility of a given statement in light of new evidence. For example, Laplace
estimated the mass of Saturn (described above) in this way. According to the frequency definition of probability, however,
the laws of probability are not applicable to this problem: the mass of Saturn is a constant, not a random variable; it
therefore has no frequency distribution, and so frequentist probability statements cannot be made about it.
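As a minimal numerical sketch of such an update (the numbers here are invented for illustration, not Laplace's data), Bayes' theorem converts a prior plausibility P(H) into a posterior P(H|E) given the likelihood of the evidence under each alternative:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior plausibility of H after observing evidence E."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical: a statement starts at 50% plausibility, and the observed
# evidence is four times as likely if the statement is true.
posterior = bayes_update(prior=0.5, likelihood=0.8, likelihood_alt=0.2)
print(posterior)  # 0.8
```

Note that the same mechanism applies whether H describes a repeatable event or a fixed but unknown constant; only the Bayesian interpretation licenses the latter.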
Applications of Bayesian probability
Bayesian inference is proposed as a model of the scientific method, in that updating probabilities via Bayes' theorem
resembles the way science proceeds: one starts with an initial set of beliefs about the relative plausibility of various
hypotheses, collects new information (for example by conducting an experiment), and adjusts the original set of beliefs in
the light of the new information to produce a more refined set of beliefs about the plausibility of the different hypotheses.
Similarly, the use of Bayes factors has been put forward as a justification for Occam's Razor.
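A toy calculation (with invented numbers) suggests how a Bayes factor penalizes vagueness: a model that spreads its predictions over many outcomes assigns each one less probability than a model that commits to one, so observing the committed-to outcome favors the simpler, sharper model.

```python
# Bayes factor K = P(data | model B) / P(data | model A).
# Model A hedges: the outcome is uniform over ten possibilities.
# Model B commits: the outcome is always 5. We then observe a 5.
p_data_given_a = 1 / 10
p_data_given_b = 1.0

bayes_factor = p_data_given_b / p_data_given_a
print(bayes_factor)  # 10.0 -- the evidence favors the sharper model
```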
Bayesian techniques have recently been applied, with good success, to filter out e-mail spam. After a selection of known
spam is submitted to the filter, it uses the word occurrences in those messages to help it discriminate between spam and
legitimate e-mail. See Bayesian inference and Bayesian filtering for more information in this regard.
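A minimal sketch of the word-occurrence idea follows, as a naive Bayes classifier over a tiny invented corpus. Real filters use much larger training sets, better tokenization, and refined probability models; this only shows the core computation (log class prior plus smoothed log word likelihoods).

```python
import math
from collections import Counter

# Tiny invented training corpus, for illustration only.
spam_docs = ["win money now", "free money offer"]
ham_docs = ["meeting agenda attached", "lunch tomorrow today"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts = word_counts(spam_docs)
ham_counts = word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)
total_docs = len(spam_docs) + len(ham_docs)

def log_score(message, counts, class_docs):
    # log P(class) + sum over words of log P(word | class),
    # with add-one (Laplace) smoothing over the vocabulary.
    total_words = sum(counts.values())
    score = math.log(class_docs / total_docs)
    for word in message.split():
        score += math.log((counts[word] + 1) / (total_words + len(vocab)))
    return score

def classify(message):
    spam_score = log_score(message, spam_counts, len(spam_docs))
    ham_score = log_score(message, ham_counts, len(ham_docs))
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money"))     # spam
print(classify("lunch meeting"))  # ham
```

The log-space sum avoids numerical underflow when many word probabilities are multiplied, and the smoothing keeps unseen words from zeroing out a score.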
Bayesian data analysis
Consider three situations:
1. You have a box with white and black balls, but no knowledge as to the quantities
2. You have a box from which you have drawn N balls, half black and the rest white
3. You have a box and you know that there are the same number of white and black balls
If a Bayesian were to assign a probability to the event "the next drawn ball is black", they would choose probability 1/2 in
all three cases. However, frequentists will claim that this single number does not adequately model the above situations.
The confusion lies in the fact that frequentists assign probabilities only to random events, not fixed constants like the
probability a drawn ball will be black. Bayesians can easily assign a probability to a probability (a so-called
metaprobability). The above events would be modeled in the following way by a Bayesian:
1. You have a box with white and black balls, but no knowledge as to the quantities
Letting θ = p represent the statement that the probability that the next ball is black is p, a Bayesian might
assign a uniform Beta prior distribution:
P(θ = p) = Beta(1, 1) = 1, for 0 ≤ p ≤ 1.
Assuming that the ball drawing is modelled as a binomial sampling distribution, the posterior distribution P(θ
| m, n), after drawing m additional black balls and n white balls, is still a Beta distribution, with parameters α
= 1 + m, β = 1 + n. An intuitive interpretation of the parameters of a Beta distribution is that of imagined
counts for the two events. For more information, see Beta distribution.
2. You have a box from which you have drawn N balls, half black and the rest white
Letting θ = p represent the statement that the probability that the next ball is black is p, a Bayesian might
assign a Beta prior distribution, Beta(N/2 + 1, N/2 + 1). The posterior predictive probability that the next
ball is black is then (N/2 + 1)/(N + 2) = 1/2, precisely Laplace's rule of succession.
3. You have a box and you know that there are the same number of white and black balls
Here θ = 1/2 is known with certainty, so a Bayesian would assign a degenerate prior placing all of its
probability mass on p = 1/2; no number of additional draws changes this assignment.
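The conjugate updating described above can be checked numerically; the following is a small sketch (the helper names are my own) of the Beta-binomial bookkeeping for the first two cases:

```python
# Beta-binomial conjugate updating: a Beta(a, b) prior over theta,
# after observing m black and n white draws, becomes Beta(a + m, b + n).

def update(a, b, m, n):
    return a + m, b + n

def predictive_black(a, b):
    # Probability the next ball is black = posterior mean a / (a + b).
    return a / (a + b)

# Case 1: no knowledge -> uniform prior Beta(1, 1); then draw 3 black, 1 white.
a, b = update(1, 1, m=3, n=1)
print(a, b)                    # 4 2
print(predictive_black(a, b))  # 0.666...

# Case 2: N = 10 balls already drawn, half black -> Beta(6, 6);
# the predictive probability (N/2 + 1)/(N + 2) is Laplace's rule of succession.
print(predictive_black(6, 6))  # 0.5
```

The "imagined counts" reading of the Beta parameters is visible here: each observed ball simply increments the corresponding parameter.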
Because frequentist statistics disallows metaprobabilities, frequentists have had to propose new solutions. Cedric Smith
and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's
theory further, and it is now known as Dempster-Shafer theory.
See also
Uncertainty
Inference
Bayesian inference
Doomsday argument for a controversial use of Bayesian inference