
Dr. Shalabh
Department of Mathematics and Statistics
Indian Institute of Technology Kanpur

LINEAR REGRESSION ANALYSIS
MODULE III
Lecture - 10
Multiple Linear Regression Analysis
Maximum likelihood estimation

In the model $y = X\beta + \varepsilon$ it is assumed that the errors are normally and independently distributed with constant variance $\sigma^2$, i.e., $\varepsilon \sim N(0, \sigma^2 I)$.

The normal density function for the errors is
$$ f(\varepsilon_i) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2\sigma^2}\,\varepsilon_i^2\right], \qquad i = 1, 2, \ldots, n. $$

The likelihood function is the joint density of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$ given as
$$ L(\beta, \sigma^2) = \prod_{i=1}^{n} f(\varepsilon_i)
 = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\varepsilon_i^2\right]
 = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left[-\frac{1}{2\sigma^2}\,\varepsilon'\varepsilon\right]
 = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left[-\frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta)\right]. $$

Since the log transformation is monotonic, we maximize $\ln L(\beta, \sigma^2)$ instead of $L(\beta, \sigma^2)$:
$$ \ln L(\beta, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta). $$
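For concreteness, the following is a minimal Python/NumPy sketch of this log-likelihood evaluated on hypothetical simulated data (the function name `log_likelihood` and all numerical values are our own illustration, not part of the lecture):

```python
import numpy as np

def log_likelihood(beta, sigma2, y, X):
    """Gaussian log-likelihood ln L(beta, sigma^2) for the model y = X beta + eps."""
    n = len(y)
    resid = y - X @ beta
    return -0.5 * n * np.log(2.0 * np.pi * sigma2) - resid @ resid / (2.0 * sigma2)

# Hypothetical data, for illustration only
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=n)

print(log_likelihood(beta_true, 1.5 ** 2, y, X))
```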
The maximum likelihood estimators (m.l.e.) of $\beta$ and $\sigma^2$ are obtained by equating the first order derivatives of $\ln L(\beta, \sigma^2)$ with respect to $\beta$ and $\sigma^2$ to zero as follows:
$$ \frac{\partial \ln L(\beta, \sigma^2)}{\partial \beta} = \frac{1}{2\sigma^2}\, 2X'(y - X\beta) = 0, $$
$$ \frac{\partial \ln L(\beta, \sigma^2)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}(y - X\beta)'(y - X\beta) = 0. $$

The likelihood equations are given by
$$ X'X\beta = X'y, $$
$$ \sigma^2 = \frac{1}{n}(y - X\beta)'(y - X\beta). $$

Since $\mathrm{rank}(X) = k$, the unique m.l.e. of $\beta$ and $\sigma^2$ are obtained as
$$ \tilde{\beta} = (X'X)^{-1}X'y, $$
$$ \tilde{\sigma}^2 = \frac{1}{n}(y - X\tilde{\beta})'(y - X\tilde{\beta}). $$

Next we verify that these values maximize the likelihood function. First we find
$$ \frac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \beta'} = -\frac{X'X}{\sigma^2}, $$
$$ \frac{\partial^2 \ln L(\beta, \sigma^2)}{\partial (\sigma^2)^2} = \frac{n}{2\sigma^4} - \frac{(y - X\beta)'(y - X\beta)}{\sigma^6}, $$
$$ \frac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \sigma^2} = -\frac{X'(y - X\beta)}{\sigma^4}. $$
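As a sketch of these closed-form solutions (on hypothetical simulated data; the helper name `mle_linear_regression` is our own, not from the lecture):

```python
import numpy as np

def mle_linear_regression(y, X):
    """Closed-form m.l.e.: beta_tilde = (X'X)^{-1} X'y, sigma2_tilde = RSS / n."""
    n = len(y)
    beta_tilde, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves the normal equations X'X b = X'y
    resid = y - X @ beta_tilde
    sigma2_tilde = resid @ resid / n                     # note the divisor n, not n - k
    return beta_tilde, sigma2_tilde

# Hypothetical data, for illustration only
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=1.5, size=n)

beta_tilde, sigma2_tilde = mle_linear_regression(y, X)
print(beta_tilde, sigma2_tilde)
```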

Thus the Hessian matrix of second order partial derivatives of $\ln L(\beta, \sigma^2)$ with respect to $\beta$ and $\sigma^2$ is
$$ \begin{pmatrix}
 \dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \beta'} & \dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \sigma^2} \\[2ex]
 \dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \sigma^2\, \partial \beta'} & \dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial (\sigma^2)^2}
\end{pmatrix}, $$
which is negative definite at $\beta = \tilde{\beta}$ and $\sigma^2 = \tilde{\sigma}^2$. This ensures that the likelihood function is maximized at these values.

Comparing with OLSEs, we find that

i. The OLSE and the m.l.e. of $\beta$ are the same, so the m.l.e. of $\beta$ is also an unbiased estimator of $\beta$.

ii. The OLSE of $\sigma^2$ is $s^2$, which is related to the m.l.e. of $\sigma^2$ as $\tilde{\sigma}^2 = \dfrac{n-k}{n}\, s^2$ (see the sketch below). So the m.l.e. of $\sigma^2$ is a biased estimator of $\sigma^2$.
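The relation in (ii) is easy to check numerically; a brief sketch, again with hypothetical simulated data:

```python
import numpy as np

# Hypothetical data, for illustration only
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=1.5, size=n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLSE = m.l.e. of beta
rss = np.sum((y - X @ b) ** 2)
s2 = rss / (n - k)                          # OLS estimator of sigma^2 (unbiased)
sigma2_tilde = rss / n                      # m.l.e. of sigma^2 (biased)
print(np.isclose(sigma2_tilde, (n - k) / n * s2))   # verifies sigma2_tilde = (n-k)/n * s^2
```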

Consistency of estimators

(i) Consistency of b

Under the assumption that
$$ \lim_{n \to \infty} \frac{X'X}{n} = \Delta $$
exists as a nonstochastic and nonsingular matrix (with finite elements), we have
$$ \lim_{n \to \infty} V(b) = \sigma^2 \lim_{n \to \infty} \frac{1}{n}\left(\frac{X'X}{n}\right)^{-1} = \sigma^2 \Delta^{-1} \lim_{n \to \infty} \frac{1}{n} = 0. $$

This implies that the OLSE converges to $\beta$ in quadratic mean. Thus the OLSE is a consistent estimator of $\beta$. This holds true for the maximum likelihood estimator also.

The same conclusion can also be proved using the concept of convergence in probability. An estimator $\hat{\theta}_n$ converges to $\theta$ in probability if
$$ \lim_{n \to \infty} P\left[\,\lvert \hat{\theta}_n - \theta \rvert \ge \delta\,\right] = 0 \quad \text{for any } \delta > 0, $$
and this is denoted as $\mathrm{plim}(\hat{\theta}_n) = \theta$.

The consistency of the OLSE can be obtained under the following weaker assumptions:

(i) $\Delta^* = \mathrm{plim}\left(\dfrac{X'X}{n}\right)$ exists and is a nonsingular and nonstochastic matrix.

(ii) $\mathrm{plim}\left(\dfrac{X'\varepsilon}{n}\right) = 0.$

Since
$$ b - \beta = (X'X)^{-1}X'\varepsilon = \left(\frac{X'X}{n}\right)^{-1}\frac{X'\varepsilon}{n}, $$
we have
$$ \mathrm{plim}(b - \beta) = \mathrm{plim}\left(\frac{X'X}{n}\right)^{-1} \mathrm{plim}\left(\frac{X'\varepsilon}{n}\right) = \Delta^{*-1} \cdot 0 = 0. $$

Thus $b$ is a consistent estimator of $\beta$. The same is true for the maximum likelihood estimator also.
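A small Monte Carlo sketch (our own illustration, with assumed data-generating values) shows the behaviour that consistency predicts: the average estimation error of $b$ falls as $n$ grows:

```python
import numpy as np

# Monte Carlo sketch: the spread of b around beta shrinks as n grows.
rng = np.random.default_rng(1)
beta_true = np.array([1.0, 2.0, -0.5])
for n_obs in (50, 500, 5000):
    errors = []
    for _ in range(200):
        X = np.column_stack([np.ones(n_obs), rng.normal(size=(n_obs, 2))])
        y = X @ beta_true + rng.normal(scale=1.5, size=n_obs)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        errors.append(np.linalg.norm(b - beta_true))
    print(n_obs, round(float(np.mean(errors)), 4))   # average ||b - beta|| decreases with n
```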
(ii) Consistency of s²

Now we look at the consistency of $s^2$ as an estimator of $\sigma^2$. We have
$$ s^2 = \frac{1}{n-k}\, e'e = \frac{1}{n-k}\, \varepsilon' H \varepsilon, \qquad H = I - X(X'X)^{-1}X', $$
$$ = \left(1 - \frac{k}{n}\right)^{-1} \frac{1}{n}\left[\varepsilon'\varepsilon - \varepsilon'X(X'X)^{-1}X'\varepsilon\right]
 = \left(1 - \frac{k}{n}\right)^{-1} \left[\frac{\varepsilon'\varepsilon}{n} - \frac{\varepsilon'X}{n}\left(\frac{X'X}{n}\right)^{-1}\frac{X'\varepsilon}{n}\right]. $$

Note that $\dfrac{\varepsilon'\varepsilon}{n} = \dfrac{1}{n}\sum_{i=1}^{n}\varepsilon_i^2$, and $\{\varepsilon_i^2,\; i = 1, 2, \ldots, n\}$ is a sequence of independently and identically distributed random variables with mean $\sigma^2$. Using the law of large numbers,
$$ \mathrm{plim}\left(\frac{\varepsilon'\varepsilon}{n}\right) = \sigma^2, $$
$$ \mathrm{plim}\left[\frac{\varepsilon'X}{n}\left(\frac{X'X}{n}\right)^{-1}\frac{X'\varepsilon}{n}\right]
 = \mathrm{plim}\left(\frac{\varepsilon'X}{n}\right)\left[\mathrm{plim}\left(\frac{X'X}{n}\right)\right]^{-1}\mathrm{plim}\left(\frac{X'\varepsilon}{n}\right)
 = 0 \cdot \Delta^{*-1} \cdot 0 = 0, $$
$$ \Rightarrow \; \mathrm{plim}(s^2) = (1 - 0)^{-1}(\sigma^2 - 0) = \sigma^2. $$

Thus $s^2$ is a consistent estimator of $\sigma^2$. The same holds true for the maximum likelihood estimator also.
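Again as a rough Monte Carlo sketch (assumed simulation settings, our own illustration), $s^2$ concentrates around the true $\sigma^2$ as $n$ grows:

```python
import numpy as np

# Monte Carlo sketch: s^2 concentrates around the true sigma^2 = 2.25 as n grows.
rng = np.random.default_rng(2)
beta_true = np.array([1.0, 2.0, -0.5])
sigma2_true = 1.5 ** 2
for n_obs in (50, 500, 5000):
    s2_values = []
    for _ in range(200):
        X = np.column_stack([np.ones(n_obs), rng.normal(size=(n_obs, 2))])
        y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2_true), size=n_obs)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        s2_values.append(np.sum((y - X @ b) ** 2) / (n_obs - X.shape[1]))
    # mean stays near sigma2_true, spread across replications shrinks with n
    print(n_obs, round(float(np.mean(s2_values)), 3), round(float(np.std(s2_values)), 3))
```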

Cramer-Rao lower bound

Let $\theta = (\beta', \sigma^2)'$. Assume that both $\beta$ and $\sigma^2$ are unknown. If $E(\hat{\theta}) = \theta$, then the Cramer-Rao inequality states that the covariance matrix of $\hat{\theta}$ is greater than or equal to (in the matrix sense) the inverse of the information matrix
$$ I(\theta) = -E\left[\frac{\partial^2 \ln L(\theta)}{\partial \theta\, \partial \theta'}\right]
 = \begin{bmatrix}
 -E\left(\dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \beta'}\right) & -E\left(\dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \beta\, \partial \sigma^2}\right) \\[2ex]
 -E\left(\dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial \sigma^2\, \partial \beta'}\right) & -E\left(\dfrac{\partial^2 \ln L(\beta, \sigma^2)}{\partial (\sigma^2)^2}\right)
 \end{bmatrix} $$
$$ = \begin{bmatrix}
 E\left(\dfrac{X'X}{\sigma^2}\right) & E\left(\dfrac{X'(y - X\beta)}{\sigma^4}\right) \\[2ex]
 E\left(\dfrac{(y - X\beta)'X}{\sigma^4}\right) & E\left(\dfrac{(y - X\beta)'(y - X\beta)}{\sigma^6} - \dfrac{n}{2\sigma^4}\right)
 \end{bmatrix}
 = \begin{bmatrix}
 \dfrac{X'X}{\sigma^2} & 0 \\[2ex]
 0 & \dfrac{n}{2\sigma^4}
 \end{bmatrix}. $$

Then
$$ \left[I(\theta)\right]^{-1} = \begin{bmatrix}
 \sigma^2 (X'X)^{-1} & 0 \\[1ex]
 0 & \dfrac{2\sigma^4}{n}
 \end{bmatrix} $$
is the Cramer-Rao lower bound matrix of $\beta$ and $\sigma^2$.
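A minimal sketch that assembles this block-diagonal lower bound numerically (the helper name `cramer_rao_bound` and the design matrix are our own, hypothetical choices):

```python
import numpy as np

def cramer_rao_bound(X, sigma2):
    """Block-diagonal inverse information matrix: diag(sigma^2 (X'X)^{-1}, 2 sigma^4 / n)."""
    n, k = X.shape
    beta_block = sigma2 * np.linalg.inv(X.T @ X)   # lower bound for Cov of an unbiased estimator of beta
    sigma_block = 2.0 * sigma2 ** 2 / n            # lower bound for Var of an unbiased estimator of sigma^2
    bound = np.zeros((k + 1, k + 1))
    bound[:k, :k] = beta_block
    bound[k, k] = sigma_block
    return bound

# Hypothetical design matrix, for illustration only
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
print(cramer_rao_bound(X, sigma2=2.25))
```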
The covariance matrix of the OLSEs of $\beta$ and $\sigma^2$ is
$$ \Sigma_{OLS} = \begin{bmatrix}
 \sigma^2 (X'X)^{-1} & 0 \\[1ex]
 0 & \dfrac{2\sigma^4}{n-k}
 \end{bmatrix}, $$
which means that the Cramer-Rao bound is attained for the covariance of $b$ but not for $s^2$.
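A quick numerical comparison of the two variance terms, with assumed values of $n$, $k$ and $\sigma^2$:

```python
import numpy as np

# Var(s^2) = 2*sigma^4/(n-k) exceeds the Cramer-Rao bound 2*sigma^4/n whenever k > 0.
sigma2, n, k = 2.25, 50, 3
var_s2 = 2.0 * sigma2 ** 2 / (n - k)   # variance of s^2 under normality
cr_bound = 2.0 * sigma2 ** 2 / n       # Cramer-Rao lower bound for an unbiased estimator of sigma^2
print(var_s2, cr_bound, var_s2 > cr_bound)
```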
