
\[
\min_{\beta(\tau)} \left[ \sum_{k=1}^{n} \rho_\tau \{ y_k - \boldsymbol{X}_k^{T} \beta(\tau) \} \right]
\]

\[
\rho_\tau(u) = u \left( \tau - I[u < 0] \right)
\]
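As an illustration (not part of the original analysis), the check function and the minimisation above can be evaluated in a few lines of Python on simulated data; statsmodels' QuantReg is used here as one convenient off-the-shelf solver for the same objective.

```python
# A minimal sketch of the check-function objective on simulated data;
# statsmodels' QuantReg solves the same minimisation over beta(tau).
import numpy as np
import statsmodels.api as sm

def rho(u, tau):
    """Check (pinball) loss: rho_tau(u) = u * (tau - I[u < 0])."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
n, tau = 500, 0.75
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # simulated continuous responses
X = sm.add_constant(x)                       # design matrix with an intercept column

fit = sm.QuantReg(y, X).fit(q=tau)           # minimises sum_k rho_tau(y_k - X_k' beta)
print(fit.params)                            # beta_hat(tau)
print(rho(y - X @ fit.params, tau).sum())    # value of the minimised objective
```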


\[
f\{\boldsymbol{y} \mid \beta(\tau), \tau, \sigma(\tau)\} = \frac{\tau^{n} (1-\tau)^{n}}{\sigma(\tau)^{n}} \exp \left[ - \sum_{k=1}^{n} \rho_\tau \left\{ \frac{y_k - \boldsymbol{X}_k^{T} \beta(\tau)}{\sigma(\tau)} \right\} \right]
\]
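A short sketch of the corresponding log-density (our own code; $\sigma(\tau)$ is supplied as a fixed value) makes the connection explicit: the only term involving $\beta(\tau)$ is the negated check-function sum, so maximising this likelihood over $\beta(\tau)$ reproduces the minimisation above.

```python
import numpy as np

def al_loglik(beta, sigma, y, X, tau):
    """Log of f{y | beta(tau), tau, sigma(tau)} for the asymmetric Laplace density above."""
    resid = y - X @ beta
    rho = resid * (tau - (resid < 0))                # check-function residuals
    n = len(y)
    return n * np.log(tau * (1.0 - tau)) - n * np.log(sigma) - rho.sum() / sigma
```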

\[
Q_{Y_k}(\tau \mid \boldsymbol{X}_k) = \exp\{\boldsymbol{X}_k^{T} \beta(\tau)\}
\]

\[
Q_{Y_k + U_k}(\tau \mid \boldsymbol{X}_k, E_k) = \tau + \exp\{\boldsymbol{X}_k^{T} \beta(\tau) + \ln(E_k)\} \tag{4}
\]

\[
Q_{Y_k^{*}}(\tau \mid \boldsymbol{X}_k, E_k) = \boldsymbol{X}_k^{T} \beta(\tau) + \ln(E_k) \tag{5}
\]

where $Y_k^{*} = \ln(Y_k + U_k - \tau)$. Models (4) and (5) are equivalent because moving from $Y_k + U_k$ to $Y_k^{*}$ is a monotonic transformation, meaning that $Q_{Y_k^{*}}(\tau \mid \boldsymbol{X}_k, E_k)$ in equation (5) can be written as

\[
Q_{Y_k^{*}}(\tau \mid \boldsymbol{X}_k, E_k) = \ln\{ Q_{Y_k + U_k}(\tau \mid \boldsymbol{X}_k, E_k) - \tau \}
\]

thus leading to equation (4); a small numerical check of this equivalence is sketched below.
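As a quick numerical check of this equivalence (on simulated jittered counts, not the data analysed here), the $\tau$-quantile of $\ln(Y + U - \tau)$ coincides with $\ln\{Q_{Y+U}(\tau) - \tau\}$:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 0.5
z = rng.poisson(5.0, size=100_000) + rng.uniform(0, 1, size=100_000)  # jittered counts Y + U
lhs = np.quantile(np.log(np.maximum(z - tau, 1e-9)), tau)  # tau-quantile of Y* (clamped when Y = 0)
rhs = np.log(np.quantile(z, tau) - tau)                    # ln{(tau-quantile of Y + U) - tau}
print(np.isclose(lhs, rhs, atol=1e-3))                     # True: monotone maps commute with quantiles
```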
To account for the variation that is introduced by adding uniform random noise to the responses, Machado and Santos Silva (2005) applied the model to $M = 100$ jittered data sets $Y + U^{(i)}$, for $i = 1, \ldots, M$. They estimated the regression parameters by

\[
\hat{\beta}(\tau) = \frac{1}{M} \sum_{i=1}^{M} \hat{\beta}^{(i)}(\tau)
\]

where $\hat{\beta}^{(i)}(\tau)$ is the estimate that is obtained from $Y + U^{(i)}$. They derived an expression for the asymptotic covariance matrix, although in practice confidence intervals for $\beta(\tau)$ can be calculated by averaging intervals obtained from the $M$ jittered data sets. Finally, the estimated quantile function $\hat{Q}_{Y_k}(\tau \mid \boldsymbol{X}_k, E_k)$ is given by

\[
\hat{Q}_{Y_k}(\tau \mid \boldsymbol{X}_k, E_k) = \left\lceil \hat{Q}_{Y_k + U_k}(\tau \mid \boldsymbol{X}_k, E_k) - 1 \right\rceil = \left\lceil \tau + \exp\{\boldsymbol{X}_k^{T} \hat{\beta}(\tau) + \ln(E_k)\} - 1 \right\rceil
\]

where $\lceil \cdot \rceil$ denotes the ceiling function.
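Putting these pieces together, the jittering estimator described above can be sketched as follows (a rough Python sketch, not the authors' code: the helper names are ours, statsmodels' QuantReg stands in for the linear quantile-regression fit on $Y^{*}$, and the offset $\ln(E_k)$ is absorbed by subtracting it from the response because QuantReg has no offset argument).

```python
# Sketch of the jittering estimator: jitter the counts M times, fit a linear
# quantile regression to Y* = ln(Y + U - tau), average the M estimates, and
# back-transform the fitted quantile with the ceiling function.
import numpy as np
import statsmodels.api as sm

def fit_one_jitter(y, X, E, tau, rng):
    U = rng.uniform(0, 1, size=len(y))
    ystar = np.log(np.maximum(y + U - tau, 1e-9))    # guard against Y = 0 with U < tau
    return sm.QuantReg(ystar - np.log(E), X).fit(q=tau).params  # offset subtracted from response

def jittered_estimate(y, X, E, tau, M=100, seed=0):
    """X should already contain an intercept column; E holds the offsets E_k."""
    rng = np.random.default_rng(seed)
    betas = np.array([fit_one_jitter(y, X, E, tau, rng) for _ in range(M)])
    beta_hat = betas.mean(axis=0)                                # average over the M jittered fits
    q_hat = np.ceil(tau + np.exp(X @ beta_hat + np.log(E)) - 1)  # estimated count quantiles
    return beta_hat, q_hat
```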

\[
\begin{aligned}
Y_k^{*} = \ln(Y_k + U_k - \tau) &\sim \mathrm{AL}\{\mu_k(\tau), \tau, \sigma(\tau)\}, \\
\mu_k(\tau) &= \boldsymbol{X}_k^{T} \beta(\tau) + \ln(E_k), \\
U_k &\sim \text{Uniform}(0, 1), \\
\sigma(\tau) &\sim \text{Inverse-Gamma}(a, b), \\
f\{\beta(\tau)\} &\propto 1.
\end{aligned}
\]
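A minimal random-walk Metropolis sketch of this specification is given below (our own code, not the sampler used in the source; for brevity the jitter $U_k$ is drawn once and held fixed rather than updated within the chain, and $a = b = 1$ is an arbitrary illustrative choice).

```python
import numpy as np

def log_post(beta, sigma, ystar, X, logE, tau, a=1.0, b=1.0):
    """Log posterior: AL working likelihood, flat prior on beta, inverse-gamma(a, b) on sigma."""
    if sigma <= 0:
        return -np.inf
    resid = ystar - (X @ beta + logE)                 # y* - mu_k(tau)
    rho = resid * (tau - (resid < 0))
    loglik = len(ystar) * np.log(tau * (1 - tau) / sigma) - rho.sum() / sigma
    logprior = -(a + 1) * np.log(sigma) - b / sigma   # inverse-gamma(a, b), up to a constant
    return loglik + logprior

def rw_metropolis(y, X, E, tau, n_iter=10_000, step=0.05, seed=2):
    rng = np.random.default_rng(seed)
    U = rng.uniform(0, 1, size=len(y))                # single fixed jitter draw
    ystar = np.log(np.maximum(y + U - tau, 1e-9))     # Y* = ln(Y + U - tau), clamped when Y = 0
    logE = np.log(E)
    beta, sigma = np.zeros(X.shape[1]), 1.0
    lp = log_post(beta, sigma, ystar, X, logE, tau)
    draws = []
    for _ in range(n_iter):
        beta_p = beta + step * rng.standard_normal(len(beta))
        sigma_p = sigma * np.exp(step * rng.standard_normal())   # random walk on log(sigma)
        lp_p = log_post(beta_p, sigma_p, ystar, X, logE, tau)
        # accept/reject; log(sigma_p / sigma) is the Jacobian of the log-scale proposal
        if np.log(rng.uniform()) < lp_p - lp + np.log(sigma_p / sigma):
            beta, sigma, lp = beta_p, sigma_p, lp_p
        draws.append(np.append(beta, sigma))
    return np.array(draws)                            # draws of (beta(tau), sigma(tau))
```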
