
Qualitative Response Regression Models

• Linear Probability Model (LPM)
• Logistic Model (Logit)
• Probit Model

LINEAR PROBABILITY MODEL
Yi = β1 + β2 Xi + ui
E(Yi|Xi) = β1 + β2 Xi

If Pi is the probability that Y = 1 and (1 - Pi) is the probability that Y = 0, then:

Y        Probability
0        1 - Pi
1        Pi
Total    1
Continued…..

E(Yi) = 0·(1 - Pi) + 1·(Pi) = Pi
E(Yi|Xi) = β1 + β2 Xi = Pi, and 0 ≤ E(Yi|Xi) ≤ 1

Several problems:
1. Non-normality of the error terms
2. Heteroscedastic variance of the error terms
3. Nonfulfillment of 0 ≤ E(Yi|Xi) ≤ 1 (illustrated in the sketch below)
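To make problem 3 concrete, here is a minimal sketch (Python with numpy and statsmodels, on simulated data; the variable names and parameter values are illustrative assumptions, not part of the slides) showing that OLS fitted values for a binary regressand need not stay inside [0, 1].

```python
# Minimal sketch: OLS on a binary regressand (LPM) with simulated data.
# The fitted values are not confined to [0, 1], illustrating problem 3 above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)                   # hypothetical regressor
p_true = 1 / (1 + np.exp(-(-3 + 0.8 * x)))    # true probabilities (assumed values)
y = rng.binomial(1, p_true)                   # binary regressand

X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()
fitted = lpm.fittedvalues
print("fitted values below 0:", (fitted < 0).sum())
print("fitted values above 1:", (fitted > 1).sum())
```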
Continued….
Estimating the LPM (weighted least squares):
1. Run OLS to obtain E(Yi|Xi).
2. Compute the weight w = sqrt( E(Yi|Xi)[1 - E(Yi|Xi)] ).
3. Transform the model to
   Yi/w = β1(1/w) + β2(Xi/w) + ui/w
   and re-estimate it by OLS.

See Examples 15.1–15.3, pp. 590–593. A sketch of the three steps follows.
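A sketch of the two-step weighted procedure, again on simulated data (the data-generating values are assumptions). Observations with fitted probabilities outside (0, 1) are dropped so the weights remain defined, which is one common practical choice rather than something prescribed by the slides.

```python
# Two-step WLS estimation of the LPM (sketch, simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.8 * x))))
X = sm.add_constant(x)

# Step 1: OLS to obtain E(Yi|Xi)
p_hat = sm.OLS(y, X).fit().fittedvalues

# Step 2: weights w = sqrt(E(Y|X)[1 - E(Y|X)]), keeping fitted values inside (0, 1)
keep = (p_hat > 0) & (p_hat < 1)
w = np.sqrt(p_hat[keep] * (1 - p_hat[keep]))

# Step 3: transformed regression Yi/w = b1(1/w) + b2(Xi/w) + ui/w
wls = sm.OLS(y[keep] / w, X[keep] / w[:, None]).fit()
print(wls.params)   # estimates of b1 and b2
```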


LOGISTIC MODEL
Recall the LPM: Pi = E(Y = 1 | Xi) = β1 + β2 Xi + ui.
The logit model instead specifies

Pi = E(Y = 1 | Xi) = 1 / (1 + e^(-(β1 + β2 Xi)))
                   = 1 / (1 + e^(-Zi)),   where Zi = β1 + β2 Xi

so that

1 - Pi = 1 - 1 / (1 + e^(-Zi))
Continued…
Thus,

1 - Pi = 1 - 1/(1 + e^(-Zi)) = e^(-Zi)/(1 + e^(-Zi)) = 1/(1 + e^(Zi))

Pi / (1 - Pi) = [1/(1 + e^(-Zi))] / [1/(1 + e^(Zi))] = (1 + e^(Zi))/(1 + e^(-Zi)) = e^(Zi)

Ln[ Pi/(1 - Pi) ] = Ln( e^(Zi) ) = Zi = β1 + β2 Xi
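A quick numerical check of this algebra (the β values below are arbitrary assumptions): the log of the odds Pi/(1 - Pi) reproduces Zi = β1 + β2 Xi.

```python
# Numerical check that Ln[Pi/(1 - Pi)] = Zi for the logistic specification.
import numpy as np

b1, b2 = -2.0, 0.5          # illustrative coefficients (assumed, not estimated)
x = np.linspace(0, 10, 5)

z = b1 + b2 * x
p = 1 / (1 + np.exp(-z))    # Pi = 1/(1 + e^(-Zi))
odds = p / (1 - p)          # Pi/(1 - Pi) = e^(Zi)

print(np.log(odds))         # equals z (up to floating-point error)
print(z)
```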
Estimation of the Logit Model:
Maximum Likelihood (ML)

Let fi(Yi) = Pi^Yi (1 - Pi)^(1 - Yi) denote the probability of observing Yi (Yi = 0 or 1).

Joint probability:
f(Y1, Y2, ..., Yn) = Π fi(Yi) = Π Pi^Yi (1 - Pi)^(1 - Yi)   (product over i = 1, ..., n)

Log likelihood:
ln f(Y1, Y2, ..., Yn) = Σ [ Yi ln Pi + (1 - Yi) ln(1 - Pi) ]
                      = Σ [ Yi ln Pi - Yi ln(1 - Pi) + ln(1 - Pi) ]
                      = Σ Yi Ln[ Pi/(1 - Pi) ] + Σ ln(1 - Pi)   (sums over i = 1, ..., n)
Continued…..

Since

1 - Pi = 1 / (1 + e^(β1 + β2 Xi))

and

ln[ Pi/(1 - Pi) ] = β1 + β2 Xi,

the log likelihood becomes

ln f(Y1, Y2, ..., Yn) = Σ Yi(β1 + β2 Xi) - Σ ln(1 + e^(β1 + β2 Xi))
The log likelihood is a function of the parameters β1 and β2, since the values of X are known.
Estimation: individual and grouped data (see the accompanying SPSS data).
Interpretation??
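A sketch of ML estimation on individual (0/1) data using simulated values (the coefficients in the data-generating process are assumptions); statsmodels' Logit maximizes the log likelihood in the form derived above.

```python
# ML estimation of the logit model on individual data (simulated sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))     # true Pi (assumed coefficients)
y = rng.binomial(1, p)

X = sm.add_constant(x)
logit_res = sm.Logit(y, X).fit()           # maximizes Sum[Yi*Zi] - Sum[ln(1 + e^Zi)]
print(logit_res.params)                    # estimates of b1, b2
print(logit_res.llf)                       # log likelihood at the maximum
```

On the interpretation question: each slope coefficient measures the change in the log odds for a unit change in the regressor, so exponentiating a coefficient gives an odds ratio.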
The Probit (Normit) Model

• Logit model: based on the logistic CDF (cumulative distribution function)
• Probit model: based on the normal CDF

To motivate the probit model, assume that the probability of the event depends on an unobservable utility index Ii that is determined by one or more regressors. The larger the value of Ii, the greater the probability of the event occurring:

Ii = β1 + β2 Xi
Continued…..

Ii = F^(-1)(Pi) = β1 + β2 Xi

where

Pi = P(Y = 1 | X) = P(Zi ≤ β1 + β2 Xi) = F(β1 + β2 Xi)

and F^(-1) is the inverse of the standard normal CDF.
Probit Estimation: gprobit
Pi      Ii = F^(-1)(Pi)
0.20    -0.8416
0.24    -0.7063
0.30    -0.5244
0.35    -0.3853
0.45    -0.1257
0.51     0.0251
0.60     0.2533
0.66     0.4125
0.75     0.6745
0.80     0.8416

(Gujarati, 2003: 610)
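The Ii column can be reproduced with the inverse of the standard normal CDF; a minimal sketch using scipy:

```python
# Reproduce the table: normal equivalent deviates Ii = F^(-1)(Pi).
from scipy.stats import norm

p_values = [0.20, 0.24, 0.30, 0.35, 0.45, 0.51, 0.60, 0.66, 0.75, 0.80]
for p in p_values:
    print(f"{p:.2f}  {norm.ppf(p): .4f}")   # norm.ppf is the inverse normal CDF
```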
Continued…..
Pi = F(β1 + β2 Xi)

dPi/dXi = f(β1 + β2 Xi) · β2

where f(·) is the standard normal density: the rate of change of the probability with respect to income (see Gujarati, 2003: 612).
ghozali maski 12
ghozali maski 13
ghozali maski 14
ghozali maski 15
The Poisson Regression Model

• There are many phenomena where the regressand is of the count type.
• Sometimes count data can also refer to rare, or infrequent, occurrences, such as winning more than one lottery within 2 weeks, or having two or more heart attacks in a span of 4 weeks.
• How do we model such phenomena?
Continued…..

f(Yi) = ( μ^Yi · e^(-μ) ) / Yi!

E(Y) = μ
Var(Y) = μ

μi = E(Yi) = β1 + β2 X2i + ....... + βk Xki

Poisson regression:
Yi = E(Yi) + ui = μi + ui
Yi = ( μ^Yi · e^(-μ) ) / Yi! + ui

Estimation method:
Y = EXP( c(1) + c(2)*X2i + ..... + c(k)*Xki )
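A sketch of estimating a Poisson regression on simulated count data (the parameter values are assumptions); statsmodels' Poisson fits the exponential mean function, matching the EXP(·) specification above.

```python
# Poisson regression on simulated count data (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x2 = rng.uniform(0, 2, 300)
mu = np.exp(0.3 + 0.9 * x2)                 # mean mu_i = EXP(b1 + b2*X2i), assumed values
y = rng.poisson(mu)                         # count regressand

X = sm.add_constant(x2)
poisson_res = sm.Poisson(y, X).fit()
print(poisson_res.params)                   # estimates of b1, b2
print(poisson_res.predict(X)[:5])           # fitted mu_i = EXP(b1 + b2*X2i)
```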