
REGRESSION A regression function is a property of the joint distribution of a pair of random variables. Specifically, it is the expected value in the conditional distribution of one given the other. If the variates are X and Y it is E(X | Y = y) as a function of y, or E(Y | X = x) as a function of x.
Let us examine the components of Bayes’ theorem, as expressed in (1.4), using several simple examples. We shall initially restrict ourselves to cases in which the parameter θ is scalar and not vector valued, and we shall consider only situations where each observation is also scalar. This will be rather artificial, since in almost all econometric applications the parameter has several, possibly many, dimensions – even in our consumption-income example the parameter had two dimensions and, as we remarked before, most economic models involve relations between several variables. Moreover, the examples use rather simple functional forms, and these do not do justice to the full flexibility of modern Bayesian methods. But these restrictions have the great expositional advantage that they avoid computational complexity and enable us to show the workings of Bayes’ theorem graphically.
The components of Bayes’ theorem are the objects appearing in (1.4). The object on the left, p(θ|y), is the posterior distribution; the numerator on the right contains the likelihood, p(y|θ), and the prior, p(θ). The denominator on the right, p(y), is called the marginal distribution of the data or, depending on the context, the predictive distribution of the data. It can be seen that it does not involve θ, and so for purposes of inference about θ it can be neglected, and Bayes’ theorem is often written as

p(θ|y) ∝ p(y|θ) p(θ)     (1.5)

where the symbol ∝ means “is proportional to.” This last relation can be translated into words as “the posterior distribution is proportional to the likelihood times the prior.” We shall focus here on the elements of (1.5).
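To make (1.5) concrete, the following is a minimal numerical sketch, not drawn from the text, that evaluates an unnormalized posterior over a grid of values of a scalar θ. It assumes, purely for illustration, a Bernoulli likelihood for n trials with y successes and a uniform prior; dividing by the sum over the grid plays the role of the neglected marginal p(y).

```python
import numpy as np

# Grid of candidate values for the scalar parameter theta
# (here a Bernoulli success probability) -- an illustrative choice.
theta = np.linspace(0.001, 0.999, 999)

# Prior p(theta): uniform over the grid.
prior = np.ones_like(theta)

# Hypothetical data: y successes in n trials.
n, y = 10, 7

# Likelihood p(y | theta) evaluated at each grid point.
likelihood = theta**y * (1 - theta)**(n - y)

# Bayes' theorem as in (1.5): posterior proportional to likelihood times prior.
unnormalized = likelihood * prior

# Normalizing by the sum approximates division by p(y),
# so the posterior weights sum to one over the grid.
posterior = unnormalized / unnormalized.sum()
```

Plotting the prior, likelihood, and posterior against theta on this grid is one way to display the workings of Bayes’ theorem graphically, as the text suggests.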