
Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator

Steven M. Lewis and Adrian E. Raftery
University of Washington
November 30, 1994
Abstract

The key quantity needed for Bayesian hypothesis testing and model selection is the marginal likelihood for a model, also known as the integrated likelihood, or the marginal probability of the data. In this paper we describe a way to use posterior simulation output to estimate marginal likelihoods. We describe the basic Laplace-Metropolis estimator for models without random effects. For models with random effects the compound Laplace-Metropolis estimator is introduced. This estimator is applied to data from the World Fertility Survey and shown to give accurate results. Batching of simulation output is used to assess the uncertainty involved in using the compound Laplace-Metropolis estimator. The method allows us to test for the effects of independent variables in a random effects model, and also to test for the presence of the random effects.

KEY WORDS: Laplace-Metropolis estimator; Random effects models; Marginal likelihoods; Posterior simulation; World Fertility Survey.

1 Introduction
The standard Bayesian solution to the hypothesis testing and model selection problems is to compute Bayes factors for comparing two or more competing models; see Kass and Raftery (1995) for a survey.
Steven M. Lewis is a Postdoctoral Research Associate with the Center for Studies in Demography and Ecology, DK-40, and Adrian E. Raftery is Professor of Statistics and Sociology, GN-22, both at the University of Washington, Seattle, WA 98195. This research was supported by National Institutes of Health grant 5-R01-HD26330.

The Bayes factor, $B_{01}$, for comparing model $M_0$ against model $M_1$ for observed data, $Y$, is the ratio of the posterior odds for $M_0$ against $M_1$ to the prior odds:

    B_{01} = \frac{f(Y \mid M_0)}{f(Y \mid M_1)}.

In other words, the Bayes factor is the ratio of the marginal, i.e. integrated, likelihoods under the two models being compared. Hence calculation of Bayes factors boils down to computing marginal likelihoods,

    f(Y \mid M_m) = \int f(Y \mid \theta_m, M_m) \, f(\theta_m \mid M_m) \, d\theta_m, \qquad m = 0, 1,

where $\theta_m$ is the vector of parameters in model $M_m$ and $f(\theta_m \mid M_m)$ is its prior density. Dropping the notational dependence on the model, this can be rewritten as

    f(Y) = \int f(Y \mid \theta) \, f(\theta) \, d\theta.    (1)

Historically the integration required for calculating marginal likelihoods has been done by taking advantage of conjugacy or by assuming approximate posterior normality. In other cases the requisite integrals have been approximated using such methods as Gaussian quadrature (Naylor and Smith 1982), the Laplace approximation (de Bruijn 1970; Tierney and Kadane 1986) or Monte Carlo methods. With the availability of increasing computer power, Markov chain Monte Carlo (MCMC) has become a reasonable alternative. A number of ways have been suggested to use posterior simulation output, such as that produced by MCMC, to estimate marginal likelihoods. These are surveyed by Raftery (1995) and include importance sampling methods, such as calculating the harmonic mean of the output likelihoods, and various ways of combining prior and posterior information, such as those suggested by Newton and Raftery (1994) and Meng and Wong (1993).

In this paper we describe another way to use posterior simulation output to estimate the marginal likelihood. In Section 3 we describe the basic Laplace-Metropolis estimator, which provides consistent estimates of the logarithm of the marginal likelihood. In hierarchical models with both fixed and random effects the basic Laplace-Metropolis estimator cannot be used directly. Instead, in Section 4, we introduce the compound Laplace-Metropolis estimator for hierarchical models. This estimator results from applying the Laplace method at two different levels; the Laplace method is applied at a second level to integrate out each of the random effect parameters. In Section 6 the compound Laplace-Metropolis estimator is used to calculate log marginal likelihoods for a number of different models fit to data collected in Iran as part of the World Fertility Survey. By taking the difference between log marginal likelihoods for two competing models we are able to estimate the Bayes factor for comparing the two models. How variable are the log marginal likelihood estimates produced by the compound Laplace-Metropolis estimator? In Section 7 we adapt one way of assessing the variability of Monte Carlo estimators, namely the batching of simulation output, to assessing the variability of the compound Laplace-Metropolis estimator. We close by comparing the estimates of the log marginal likelihoods provided by the compound Laplace-Metropolis estimator against a "gold standard" determined on the basis of extremely large samples from the joint prior distribution of all parameters. The compound Laplace-Metropolis estimates are shown to be remarkably accurate, particularly in light of the substantially reduced computing time required.
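For concreteness, the harmonic mean estimator mentioned above (Newton and Raftery 1994) can be sketched in a few lines. This is an illustrative sketch only, not the estimator developed in this paper; the function name and interface are ours, and the computation is done on the log scale to avoid underflow. The harmonic mean estimator is known to be unstable when some posterior draws have very small likelihood.

```python
import math

def log_marginal_harmonic(loglik):
    """Harmonic-mean estimate of the log marginal likelihood from the
    log-likelihoods of posterior draws: the marginal likelihood is
    estimated by the harmonic mean of the draws' likelihoods.
    Computed stably via a log-sum-exp shift."""
    m = max(-l for l in loglik)          # shift to avoid overflow in exp
    s = sum(math.exp(-l - m) for l in loglik)
    return -(m + math.log(s / len(loglik)))
```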

2 Calculating the Marginal Likelihood Using the Prior Distribution


Before describing the Laplace-Metropolis estimator, we first describe a method for obtaining a "gold standard" for compound log marginal likelihoods against which we can compare the Laplace-Metropolis estimator developed in this paper. Since the integral in equation (1) is not in general analytically tractable, it is usually necessary to approximate it. This integral can be approximated by a simple Monte Carlo estimator of the form

    \hat{f}_{mc} = \frac{1}{J} \sum_{j=1}^{J} f(Y \mid \theta^{(j)}),    (2)

where $\{\theta^{(j)} : j = 1, \dots, J\}$ is a sample from the prior distribution of the parameters. Unfortunately, it is generally necessary to obtain an extremely large number of draws from the prior distribution before $\hat{f}_{mc}$ becomes a good estimator of the integral in equation (1). In one study, Lewis (1994) found that in order to reduce the Monte Carlo standard error to an acceptable level it was necessary to use a sample of roughly 50 million draws from the prior distribution. The computing effort needed to obtain such a large sample can be enormous, requiring several days of computer time on most workstations currently available. In this paper we describe an estimator which may be used to estimate the logarithm of the integral in equation (1) in substantially less computer time. However, as a way to check the accuracy of this estimator, we have also calculated the Monte Carlo estimate using equation (2) for some of the models demonstrated in this paper; we will refer to this Monte Carlo estimate as the "actual" log marginal likelihood.
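A sketch of this simple Monte Carlo estimator, under an assumed interface (hypothetical functions returning a draw from the prior and a log-likelihood), might look as follows; the averaging is done on the log scale for numerical stability:

```python
import math
import random

def log_marginal_mc(log_lik, prior_draw, J):
    """Equation (2): estimate log f(Y) by averaging the likelihood
    f(Y | theta) over J independent draws of theta from the prior.
    Uses a log-sum-exp shift so small likelihoods do not underflow."""
    logs = [log_lik(prior_draw()) for _ in range(J)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs) / J)

# Toy check: Y ~ N(theta, 1) with theta ~ N(0, 1), so the marginal of a
# single observation y is N(0, 2) and log f(y) is known in closed form.
y = 1.0
est = log_marginal_mc(
    lambda theta: -0.5 * math.log(2 * math.pi) - 0.5 * (y - theta) ** 2,
    lambda: random.gauss(0.0, 1.0),
    100_000,
)
```

Even in this one-parameter toy problem, tens of thousands of prior draws are needed for a stable answer, consistent with the enormous sample sizes reported above for real models.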

3 The Laplace-Metropolis Estimator of the Marginal Likelihood


Raftery (1995) has recently proposed a method for estimating the marginal likelihood by combining the Laplace approximation with MCMC. He suggests using the Metropolis algorithm (Metropolis et al. 1953) as a means for estimating the quantities required for the Laplace approximation. By applying the Laplace method we can derive the following approximation for the marginal likelihood:

    f(Y) \approx (2\pi)^{P/2} \, |H^*|^{1/2} \, f(\theta^*) \, f(Y \mid \theta^*),    (3)

where $\theta^*$ is the value of $\theta$ at which $h(\theta) \equiv \log\{f(\theta) f(Y \mid \theta)\}$ attains its maximum, $H^*$ is minus the inverse Hessian of $h$ evaluated at $\theta^*$, and $P$ is the dimension of the parameter space. For numerical reasons, and since it is customary to work with log-likelihoods, it is better to work with this approximation on the logarithmic scale. Taking logarithms, we can rewrite equation (3) as

    \log f(Y) \approx \frac{P}{2} \log(2\pi) + \frac{1}{2} \log |H^*| + \log f(\theta^*) + \log f(Y \mid \theta^*).    (4)

We refer to this estimator as the Laplace-Metropolis estimator. In the Laplace-Metropolis estimator there are two key quantities that we need to derive from the posterior simulation output, namely $\theta^*$ and $H^*$. The conceptually simplest way to

estimate $\theta^*$ would be to compute $h(\theta)$ for each draw from the posterior simulation output and use the draw for which $h(\theta)$ is largest as $\theta^*$. This can take a lot of computing effort. As an alternative we consider the multivariate median, or $L_1$ center, which is defined as the value of $\theta$ which minimizes

    \sum_{\ell=1}^{d} \left| \theta^{(\ell)} - \theta \right|,

where $|\cdot|$ denotes $L_1$ distance and $d$ is the number of draws. We use this as an estimator of the posterior mode. The other quantity needed for the Laplace-Metropolis estimator is $H^*$. This is asymptotically equal to the posterior variance matrix. We could use the sample covariance matrix of the simulation output for $H^*$. However, since MCMC trajectories are known to take occasional distant excursions, it would probably be a good idea to use a robust estimator for the posterior variance matrix. One such estimator is the weighted variance matrix estimate with weights based on the minimum volume ellipsoid estimate of Rousseeuw and van Zomeren (1990).
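Putting the pieces together, a minimal sketch of the basic Laplace-Metropolis estimator might look as follows. The interface is hypothetical; for simplicity this sketch uses the ordinary sample covariance for $H^*$ rather than the robust minimum volume ellipsoid weighting suggested above.

```python
import numpy as np

def l1_center(draws):
    """Multivariate (L1) median: the draw minimizing the sum of L1
    distances to all other draws. O(J^2) in the number of draws, so
    thin the chain first when J is large."""
    dist = np.abs(draws[:, None, :] - draws[None, :, :]).sum(axis=(1, 2))
    return draws[np.argmin(dist)]

def laplace_metropolis(draws, log_prior, log_lik):
    """Equation (4): log f(Y) ~= (P/2) log(2 pi) + (1/2) log|H*|
    + log f(theta*) + log f(Y | theta*), with theta* estimated by the
    L1 center of the posterior draws and H* by their sample covariance."""
    theta_star = l1_center(draws)
    P = draws.shape[1]
    H = np.cov(draws, rowvar=False).reshape(P, P)
    _, logdet = np.linalg.slogdet(H)
    return (0.5 * P * np.log(2 * np.pi) + 0.5 * logdet
            + log_prior(theta_star) + log_lik(theta_star))
```

In a conjugate normal toy model (one observation y ~ N(theta, 1) with theta ~ N(0, 1)) the Laplace approximation is exact, so the estimate can be checked against the analytic log marginal, log N(y; 0, 2).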

4 The Compound Laplace-Metropolis Estimator for Hierarchical Models


In recent work, to be described shortly, we have found need of marginal likelihoods for a number of alternative hierarchical, i.e. random effects, models. Here we use the term hierarchical model in the sense used by Raudenbush (1988), namely for a model in which several observations are made on subjects which themselves should be grouped into higher-level entities in order to explicitly take into account aggregation inherent in the sampling design. It is usually the case that hierarchical models involve a very large number of nuisance parameters, namely the random effects, and as a result calculating marginal likelihoods becomes extremely difficult, especially if the approximate normality assumption is questionable. For this situation we have experienced very good results by adapting the Laplace-Metropolis estimator to hierarchical models.

For a random effects model we separate the vector of all parameters into its components: the vector of fixed effects, which we denote by $\beta$; the random effects variance hyperparameter, which we denote by $\sigma^2$; and the vector of random effect parameters, $\gamma$. For calculating the Laplace-Metropolis estimator we need to distinguish between the nuisance parameters, $\gamma$, and the rest of the parameters and hyperparameters, which together we denote by $(\beta, \sigma^2)$. The term in the Laplace-Metropolis estimator, equation (4), requiring additional attention for a random effects model is the log-likelihood, $\log f(Y \mid \theta^*)$. Calculation of the first three terms in equation (4) can be accomplished precisely as already described in Section 3. We still need to find the posterior mode of $(\beta, \sigma^2)$; to do so we locate the $L_1$ center of the $(\beta, \sigma^2)$ draws. The posterior variance matrix of $(\beta, \sigma^2)$ can still be used for $H^*$. The third term is the logarithm of the joint prior density of the fixed effect parameters and the variance hyperparameter. In many random effects models the random effects are assumed to be conditionally independent given the other parameters in the model, such as the fixed effect parameters and the hyperparameters. We can take advantage of this assumption to calculate the log-likelihood as simply the sum of the log-likelihoods for each of the random effects. In other words,
    \log f(Y \mid \beta, \sigma^2) = \sum_{i=1}^{I} \log f(Y_i \mid \beta, \sigma^2),    (5)

where

    \log f(Y_i \mid \beta, \sigma^2) = \log \int f(Y_i \mid \gamma_i, \beta, \sigma^2) \, f(\gamma_i \mid \beta, \sigma^2) \, d\gamma_i    (6)

and $\gamma_i$ is the random effect parameter for the $i$th context. Equation (5) holds for any $(\beta, \sigma^2)$. In particular, it holds for $(\beta^*, \sigma^{2*})$, the joint mode of the posterior distribution. As a result we can calculate each of the log-likelihood terms in equation (5) by conditioning on $(\beta^*, \sigma^{2*})$. It remains for us to calculate the integrals on the right-hand side of equation (6), each of which will be of low dimension. For regular statistical models these integrals can be well approximated using the Laplace method. By using the Laplace method for each of these integrals individually we arrive at the Laplace-Metropolis estimator of the log marginal likelihood for a general hierarchical model. If we denote the Laplace estimate of the log conditional likelihood for the $i$th random effect, as defined in equation (6), by $\hat{L}_i$, then the Laplace-Metropolis estimator of the log marginal likelihood for a general hierarchical model, which we will refer to as the compound Laplace-Metropolis estimator, is

    \widehat{LM}_C = \frac{P}{2} \log(2\pi) + \frac{1}{2} \log |H^*| + \log f(\beta^*, \sigma^{2*}) + \sum_{i=1}^{I} \hat{L}_i.    (7)

5 The Laplace-Metropolis Estimator for the Logistic Hierarchical Model


In this section we derive the Laplace-Metropolis estimator for one of the most frequently used types of hierarchical model, namely the logistic hierarchical model. Here we assume that the data are produced by a mixed logistic model; that is, we assume

    \mathrm{logit}(\pi) = X\beta + \gamma,    (8)

where the data may take on only the values 0 or 1, $\pi = \{\pi_{it}\}$, $\pi_{it}$ is the probability that the $t$th observation within the $i$th random effect is a 1, and $X$ is a matrix of covariate information. The likelihood of the vector of observations for the $i$th random effect, $Y_i$, is

    f(Y_i \mid \beta, \gamma_i) = \prod_t \frac{\exp\{X_{it}\beta + \gamma_i\}^{y_{it}}}{1 + \exp\{X_{it}\beta + \gamma_i\}}.

If we also assume that the prior distribution of each of the random effect parameters is Gaussian with mean 0 and variance $\sigma^2$, and independent of the fixed effect parameters and of the other random effect parameters, it is straightforward to write the conditional likelihood of the $i$th random effect's vector of observations as

    f(Y_i \mid \tilde{\beta}, \tilde{\sigma}^2) = \frac{1}{\sqrt{2\pi\tilde{\sigma}^2}} \int \exp\{h(\gamma_i)\} \, d\gamma_i,

where

    h(\gamma_i) = \sum_t y_{it}(X_{it}\tilde{\beta} + \gamma_i) - \frac{\gamma_i^2}{2\tilde{\sigma}^2} - \sum_t \log\left(1 + \exp\{X_{it}\tilde{\beta} + \gamma_i\}\right).

The first and second derivatives of $h(\gamma_i)$ are

    h'(\gamma_i) = \sum_t y_{it} - \frac{\gamma_i}{\tilde{\sigma}^2} - \sum_t \frac{\exp\{X_{it}\tilde{\beta} + \gamma_i\}}{1 + \exp\{X_{it}\tilde{\beta} + \gamma_i\}},

    h''(\gamma_i) = -\frac{1}{\tilde{\sigma}^2} - \sum_t \frac{\exp\{X_{it}\tilde{\beta} + \gamma_i\}}{\left(1 + \exp\{X_{it}\tilde{\beta} + \gamma_i\}\right)^2}.

We can then locate the mode of $h(\gamma_i)$, which we denote by $\gamma_i^*$, using a few iterations of Newton's method. Using the second derivative above, we find that the square root of the determinant of minus the inverse of the Hessian of $h$, conditional on $(\tilde{\beta}, \tilde{\sigma}^2)$ and evaluated at $\gamma_i^*$, is

    \left( \frac{1}{\tilde{\sigma}^2} + \sum_t \frac{\exp\{X_{it}\tilde{\beta} + \gamma_i^*\}}{\left(1 + \exp\{X_{it}\tilde{\beta} + \gamma_i^*\}\right)^2} \right)^{-1/2}.

We now have all the quantities needed to use the Laplace method to approximate the integrals on the right-hand side of equation (6). Doing so, we find that a Laplace estimate of each random effect's log conditional likelihood is

    \hat{L}_i = -\frac{1}{2} \log\left( 1 + \tilde{\sigma}^2 \sum_t \frac{\exp\{X_{it}\tilde{\beta} + \gamma_i^*\}}{\left(1 + \exp\{X_{it}\tilde{\beta} + \gamma_i^*\}\right)^2} \right) - \frac{(\gamma_i^*)^2}{2\tilde{\sigma}^2} + \sum_t \left[ y_{it}(X_{it}\tilde{\beta} + \gamma_i^*) - \log\left(1 + \exp\{X_{it}\tilde{\beta} + \gamma_i^*\}\right) \right].
These Laplace estimates of the log conditional likelihoods may then be used in equation (7) to calculate a compound Laplace-Metropolis estimate for hierarchical models. In the next section we will show how this works for an example using data collected in Iran as part of the World Fertility Survey.
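The per-random-effect computation just described can be sketched directly from the formulas above. The function below is our illustrative implementation (names and interface are ours): `eta_fixed` plays the role of the $X_{it}\tilde{\beta}$ values for one random effect, Newton's method locates $\gamma_i^*$, and the closed-form $h''$ supplies the curvature term in $\hat{L}_i$.

```python
import numpy as np

def laplace_loglik_i(y, eta_fixed, sigma2, n_newton=10):
    """Laplace estimate of the log conditional likelihood of one random
    effect's observations (equation (6)) under the mixed logistic model
    (8). y: 0/1 outcomes for this effect; eta_fixed: fixed-effect linear
    predictors X_it * beta; sigma2: random-effect variance."""
    g = 0.0  # start for gamma_i; h is concave, so Newton converges
    for _ in range(n_newton):
        p = 1.0 / (1.0 + np.exp(-(eta_fixed + g)))
        h1 = np.sum(y - p) - g / sigma2              # h'(gamma_i)
        h2 = -np.sum(p * (1.0 - p)) - 1.0 / sigma2   # h''(gamma_i)
        g -= h1 / h2                                 # Newton step
    p = 1.0 / (1.0 + np.exp(-(eta_fixed + g)))
    # L_i = -(1/2) log(1 + sigma2 * sum p(1-p)) - g^2/(2 sigma2)
    #       + sum[ y*eta* - log(1 + exp(eta*)) ]
    return (-0.5 * np.log(1.0 + sigma2 * np.sum(p * (1.0 - p)))
            - g * g / (2.0 * sigma2)
            + np.sum(y * (eta_fixed + g) - np.log1p(np.exp(eta_fixed + g))))
```

Summing these estimates over the $I$ random effects and adding the three fixed-effect terms of equation (4) gives the compound estimate of equation (7).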

6 Example Using Data from the World Fertility Survey


The Iran Fertility Survey (IFS) was a part of the World Fertility Survey (WFS). There is already much in the literature describing the details of the WFS; the volume edited by Cleland and Scott (1987) serves as the primary summary publication on the WFS. The IFS included full fertility histories of a randomly selected sample of 4,928 married women born between 1926 and 1963. The survey also obtained a large collection of covariate information for each woman, including data on how much formal education each of the women, as well as their husbands, received. For analyses of the full data set, see Raftery, Lewis, Aghajanian and Kahn (1993) and Raftery, Lewis and Aghajanian (1994). We investigated the methods described in the preceding section by fitting a small example logistic hierarchical model to a randomly selected sample of 29 women from the IFS data set.

The response used in this analysis was whether or not a woman experienced a birth in each year in which she potentially could have had a child; we refer to these as exposure-years. The first model consisted of 4 fixed effect parameters in addition to the intercept. The 4 fixed effects were the age of the woman during each exposure-year (centered at the average age in the IFS data set), an indicator variable which is 1 for the first exposure-year of the interval and 0 otherwise, the woman's parity (number of previous children born) during each exposure-year, and the woman's completed education level (a 6-level categorical variable). In this example we also included a random effect parameter for each woman in the sample. We implemented a Metropolis algorithm for estimating the parameters of a mixed logistic model, equation (8), in a Fortran program written specifically to handle event history data sets (Lewis 1993, 1994; Raftery, Lewis and Aghajanian 1994). We obtained the results shown in Table 1. Metropolis was run for a total of 5,500 iterations, of which the first 500 were discarded as "burn-in".

Table 1: Regression parameter estimates for a sample from the IFS event history data set using the 4 fixed effect model.

Variable                     Estimate   Standard error   Estimate/SE
Intercept                      -0.73         0.50          -1.45
Centered age                   -0.19         0.36          -0.54
First year of interval         -2.34         0.45          -5.25
Parity                         -0.04         0.10          -0.41
Woman's education level        -0.35         0.20          -1.76
Variance of random effects      0.17         0.22           0.78

Using the technique described in the previous section, we found the estimated log marginal likelihood for this example to be -220.5. The obvious next question is just how good this estimate is. One way to assess this would be to calculate a Monte Carlo estimate of the integral in equation (1), as we previously described in Section 2. Calculating such a Monte Carlo estimate for the 4 fixed effects model, we found the resulting "actual" log marginal likelihood to be -221.8. The Laplace-Metropolis estimate is quite good. To more precisely assess this claim we need to measure

the uncertainty of this estimate. In the next section we describe one way of doing this.

7 Assessing Variance of the Estimator Using Batching


To assess the uncertainty of the Laplace-Metropolis estimator we adapted one of the most commonly used methods for assessing the uncertainty involved in Monte Carlo estimation, namely the method of batch means, to the situation at hand. The use of this method for calculating the Monte Carlo variance of the mean of a functional of posterior simulation output has been discussed by numerous authors, including Hastings (1970), Schmeiser (1982) and Geyer (1992). The underlying idea is to divide an entire MCMC sample into a fairly small number of batches, say B, of equal size. The mean of the simulations within each batch is found, producing B estimates of the mean. Under relatively mild conditions these B estimates will be essentially independent. The sample of B estimates can then be used to provide a compound estimate of the mean along with an estimate of its variance.

To apply this idea to assessing the uncertainty of the Laplace-Metropolis estimator, we calculate separate Laplace-Metropolis estimates within each of B batches, take the mean of the B estimates as a compound Laplace-Metropolis estimate, and take the variance of the mean as an estimate of the variance of the compound Laplace-Metropolis estimate. In other words, within the bth batch we find a Laplace-Metropolis estimate of the log marginal likelihood,

    \widehat{LM}_b = \frac{P}{2} \log(2\pi) + \frac{1}{2} \log |H^*| + \log f(\beta^*, \sigma^{2*}) + \sum_{i=1}^{I} \hat{L}_i,    (9)

with all quantities computed from the draws in batch b, and then use these B estimates to calculate an overall Laplace-Metropolis estimate,

    \widehat{LM} = \frac{1}{B} \sum_{b=1}^{B} \widehat{LM}_b.    (10)
We applied this method to the model estimated in Section 6. Instead of only 5,500 iterations, we ran the Metropolis algorithm for 75,500 iterations. The first 500 were once again discarded as "burn-in". The remaining 75,000 iterations were divided into B = 15 batches of 5,000 iterations each. Before examining parameter estimates or performing other inference using Metropolis output, it is generally a good idea to look at plots of the (dependent) sequential realizations of all the

fixed effect parameter estimates and plots of at least some of the random effect parameter realizations. We have found that if the Markov chain is not mixing well or is not sampling from the stationary distribution, this is usually apparent in sequential plots of one or more of the fixed effect realizations. The sequential plot of the intercept realizations is the plot which most often exhibits difficulties in the Markov chain. Figure 1 shows the sequential realizations of the intercept parameter for the 4 fixed effects model. The sequential plots of the other fixed effects were similar to the intercept plot. In this case the Markov chain seems to be mixing well enough and is likely to be sampling from the stationary distribution.
Figure 1: Sequential realizations of the intercept parameter.

It is also interesting to look at estimated marginal densities for each of the fixed effect parameters and for some of the random effect parameters. These also assist the analyst in detecting possible failures in the Metropolis run. We have found that the marginal distribution of the intercept parameter is often a good place to detect problems in the Markov chain. In Figure 2 the marginal distribution of the intercept parameter for the 4 fixed effects model is shown; this was obtained using Terrell's (1990) maximal smoothing density estimation procedure. It is worth noting that this posterior marginal distribution is not normal; it is skewed to the right. So approximating this posterior marginal distribution with a normal, as is commonly done in practice, is quite likely to lead to erroneous inferences.
Figure 2: Estimated marginal distribution of the intercept parameter.

Table 2 shows the estimated log marginal likelihood within each batch. The first batch consisted of the 501st through 5,500th iterations. The second batch was made up of iterations 5,501 through 10,500, and so forth. Shown are the contribution of the within-batch maximized log-likelihood, the contribution of the other three terms of equation (4) to each within-batch log marginal likelihood, and, in the rightmost column, the within-batch log marginal likelihood; the predominant contribution of the maximized log-likelihood is apparent. The mean of the 15 within-batch estimates is -220.5 and their standard deviation is 0.7. It can be argued (see Lewis 1994) that the compound Laplace-Metropolis estimator will have an approximate t-distribution with (B - 1) degrees of freedom. Using this approximation, a 95% highest posterior density interval for the compound Laplace-Metropolis estimate would be (-222.0, -219.0). Observe that this interval does indeed contain the "actual" log marginal likelihood, -221.8.

Table 2: Example of calculating Laplace-Metropolis estimates for separate batches using the posterior simulation output for the 4 fixed effects model.

Batch #   Within-Batch Log-likelihood   Other Log-posterior   Within-Batch
          (sum of L̂_i)                  Terms                 LM_b
 1              -205.7                     -14.8                -220.5
 2              -205.5                     -14.3                -219.8
 3              -205.8                     -15.5                -221.3
 4              -206.0                     -14.6                -220.6
 5              -205.5                     -14.4                -219.9
 6              -206.5                     -15.3                -221.8
 7              -206.2                     -14.5                -220.7
 8              -205.3                     -14.4                -219.7
 9              -205.8                     -15.0                -220.8
10              -205.4                     -15.0                -220.4
11              -207.0                     -14.2                -221.2
12              -206.0                     -15.2                -221.2
13              -205.4                     -13.8                -219.2
14              -205.7                     -14.3                -220.0
15              -205.9                     -14.5                -220.4
8 Discussion
Raftery (1995) originally proposed the Laplace-Metropolis estimator to get around limitations encountered when trying to use the Laplace method. He also proposed use of the L1 center as an approximation to the posterior mode in situations where it is impractical to calculate the likelihood or log-likelihood for each simulated parameter vector. A reader might wonder why we needed to use it here, since as we noted in Section 6 we were able to find the "actual" log marginal likelihood by taking a sample of 50 million draws from the prior distribution. The program to do this took about 5 days on our SPARCstation II. The compound Laplace-Metropolis approximation was found in about two hours. Thus while the amount of computer time required to get a compound Laplace-Metropolis approximation to the log marginal likelihood for a logistic hierarchical model is non-trivial, it requires much less time than computing the "actual" log marginal likelihood. And along with this substantial savings in computer time, we have found that the approximations obtained using the compound Laplace-Metropolis estimator are remarkably accurate.

As noted previously, the standard Bayesian solution for comparing two models is to compute the Bayes factor. The Bayes factor is the ratio of two marginal likelihoods. In this paper we demonstrated how we were able to obtain good approximations for marginal likelihoods in hierarchical models. We found a point estimate of -220.5 for the log marginal likelihood for an example model with 4 fixed effects. We have also fit a number of other models to the IFS event history data set. For example, when we add husband's level of completed education to the model, we get an approximate log marginal likelihood of -223.5. Hence the Bayes factor for comparing the model without husband's education against the model with husband's education is approximately e^{3.0} ≈ 20, providing evidence for the smaller model. Models containing other potential covariates may be similarly compared.

Bayes factors can be used not only to compare models containing different fixed effect parameters but also to assess whether the data provide evidence for or against the presence of random effects. In Section 7 we found that a compound Laplace-Metropolis estimate of the log marginal likelihood for the 4 fixed effects model was -220.5. This model included a random effect parameter for each woman in the sample. Should we have included the random effects in the model? If we can calculate the log marginal likelihood of a model without random effects, we will be able to determine a Bayes factor for comparing the model with random effects to the model without random effects. The marginal likelihood for the model without random effects can be approximated using the Laplace method (Raftery 1993). For the 4 fixed effects model this was -222.15. Hence the Bayes factor for comparing the model with random effects against the model without random effects is exp{-220.5 - (-222.15)} ≈ 5.2. There is some evidence, but only weak evidence, favoring the model incorporating random effects.

References
Cleland, J.G. and Scott, C., with D. Whitelegge (eds.) (1987). The World Fertility Survey: An Assessment. Oxford, U.K.: Oxford University Press.

de Bruijn, N.G. (1970). Asymptotic Methods in Analysis. Amsterdam, The Netherlands: North-Holland.

Geyer, C.J. (1992). "Practical Markov chain Monte Carlo (with discussion)." Statistical Science 7, 473-511.

Hastings, W.K. (1970). "Monte Carlo sampling methods using Markov chains and their applications." Biometrika 57, 97-109.

Kass, R.E. and Raftery, A.E. (1995). "Bayes factors." Journal of the American Statistical Association, to appear.

Lewis, S.M. (1993). "Contribution to the discussion of three papers on Gibbs sampling and related Markov chain Monte Carlo methods." Journal of the Royal Statistical Society, Series B 55, 79-81.

Lewis, S.M. (1994). Multilevel Modeling of Discrete Event History Data Using Markov Chain Monte Carlo Methods. Unpublished doctoral dissertation, Department of Statistics, University of Washington, Seattle, WA.

Meng, X.L. and Wong, W.H. (1993). "Simulating ratios of normalizing constants via a simple identity." Technical Report no. 365, Department of Statistics, University of Chicago.

Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953). "Equations of state calculations by fast computing machines." Journal of Chemical Physics 21, 1087-1092.

Naylor, J.C. and Smith, A.F.M. (1982). "Applications of a method for the efficient computation of posterior distributions." Applied Statistics 31, 214-225.

Newton, M.A. and Raftery, A.E. (1994). "Approximate Bayesian inference by the weighted likelihood bootstrap (with discussion)." Journal of the Royal Statistical Society, Series B 56, 3-48.

Raftery, A.E. (1993). "Approximate Bayes factors and accounting for model uncertainty in generalized linear models." Technical Report no. 255, Department of Statistics, University of Washington, Seattle, WA.

Raftery, A.E. (1995). "Hypothesis testing and model selection via posterior simulation." In Practical Markov Chain Monte Carlo (W.R. Gilks, D.J. Spiegelhalter and S. Richardson, eds.), to appear.

Raftery, A.E., Lewis, S.M. and Aghajanian, A. (1994). "Demand or ideation? Evidence from the Iranian marital fertility decline." Working Paper No. 94-1, Center for Studies in Demography and Ecology, University of Washington, Seattle, WA.

Raftery, A.E., Lewis, S.M., Aghajanian, A. and Kahn, M.J. (1993). "Event history modeling of World Fertility Survey data." Working Paper No. 93-1, Center for Studies in Demography and Ecology, University of Washington, Seattle, WA.

Raudenbush, S.W. (1988). "Educational applications of hierarchical linear models: A review." Journal of Educational Statistics 13, 85-116.

Rousseeuw, P.J. and van Zomeren, B.C. (1990). "Unmasking multivariate outliers and leverage points (with discussion)." Journal of the American Statistical Association 85, 633-651.

Schmeiser, B. (1982). "Batch size effects in the analysis of simulation output." Operations Research 30, 556-568.

Terrell, G.R. (1990). "The maximal smoothing principle in density estimation." Journal of the American Statistical Association 85, 470-477.

Tierney, L. and Kadane, J.B. (1986). "Accurate approximations for posterior moments and marginal densities." Journal of the American Statistical Association 81, 82-86.

