
MEAN VARIANCE ANALYSIS

http://submartingale.co.uk/MEAN VARIANCE ANALYSIS.htm

MEAN VARIANCE ANALYSIS and TRACKING ERROR

Much of modern finance is based on Markowitz's mean-variance efficiency, where variance is used as a proxy for risk and investors are generally assumed to be risk averse for the purposes of calculating efficient frontiers. Although these assumptions can be criticised on many grounds, they are often adopted in practice for computational reasons. In some contexts, e.g. tracking error, minimising variance is not an unreasonable objective, although other methods can also be considered, such as maximising stationarity or using the implied alpha of positions. Any approach based on mean-variance analysis raises two major practical issues: (i) data estimation and (ii) optimisation computation.

Stein Estimators

An estimator is admissible if no other estimator is always better (for any properly defined loss function). This is clearly a minimal condition. In the 1950s Charles Stein shocked the statistical community by showing that sample means are not admissible for a general class of multivariate populations. This is essentially because they fail to take account of the multivariate data available: information relating to the expected return on one stock can be derived from the returns earned on all stocks. Intuitively, the sample mean reflects just one evolution of history, and one period of out-performance can be followed by another of under-performance. The (admissible) James-Stein estimator shrinks the sample mean back towards the global mean for all stocks, scaled by the variance: the greater the variance, the less the information contained in the sample mean and therefore the greater the shrinkage towards the global mean. Similar considerations apply to the covariance estimator, with, in fact, even greater effect on mean-variance optimisation. In a series of papers from 1994, Ledoit proposes various shrinkage methods to improve covariance matrix estimation.
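Both forms of shrinkage can be sketched in a few lines. The code below is a minimal illustration, not Stein's or Ledoit's exact estimator: the shrinkage intensities are simplified, and the scaled-identity target for the covariance is only one of the targets Ledoit considers.

```python
import numpy as np

def james_stein_means(returns):
    """Shrink each stock's sample mean towards the global (cross-sectional) mean.

    Simplified James-Stein sketch: the shrinkage intensity grows with the
    estimation variance of the sample means, so noisier means are pulled
    harder towards the global mean.  `returns` is T periods x N stocks.
    """
    T, N = returns.shape
    sample_means = returns.mean(axis=0)
    global_mean = sample_means.mean()
    sigma2 = returns.var(axis=0, ddof=1).mean() / T   # est. variance of a sample mean
    dispersion = np.sum((sample_means - global_mean) ** 2)
    shrinkage = min(1.0, (N - 3) * sigma2 / dispersion)  # clipped to [0, 1]
    return global_mean + (1.0 - shrinkage) * (sample_means - global_mean)

def shrink_covariance(returns, delta):
    """Shrink the sample covariance towards a scaled-identity target.

    Simplified Ledoit-style sketch: here the intensity `delta` is an
    input, whereas Ledoit derives an optimal value from the data.
    Shrinkage lifts the smallest (noise) eigenvalues, so the inverse is
    no longer dominated by them.
    """
    S = np.cov(returns, rowvar=False)
    n = S.shape[0]
    target = np.eye(n) * np.trace(S) / n
    return delta * target + (1.0 - delta) * S
```

For any 0 < delta < 1, every eigenvalue of the shrunk matrix is pulled towards their common average, so its condition number is strictly smaller than that of the sample covariance.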
Principal components (PCA) methods can also be considered.

Ill-Conditioned Covariance Matrices

An ill-conditioned matrix is one where the ratio of the largest to the smallest eigenvalue is large. We can use PCA to understand why such covariance matrices are problematic. Take the matrix V that arises directly from the sample period and apply PCA to it. If there are N stocks there are N principal components (eigenvectors), but only the first n will be significant. Typically n << N, and the last few eigenvalues will be small, representing statistical noise. However, MV optimisation routines use the inverse of the covariance matrix, and, as is well known, the eigenvalues of the inverse are the inverses of the eigenvalues. Therefore, assuming the matrix is positive definite, the largest eigenvalues (i.e. the first few principal components) of V inverse are determined by the smallest eigenvalues of V and are thus mostly reflective of statistical noise. These effectively determine the portfolio selected, which explains why MV-optimised portfolios are usually unstable and sensitive to small changes in inputs. Three approaches can be considered to deal with this:

- Shrinkage methods (a.k.a. ridge regression)
- PCA
- Resampling

Ledoit's shrinkage method is designed to ensure that the inverse matrix is usable. The matrix is entirely determined by the data and there is no requirement to choose a Bayesian prior. Under PCA, components below a certain threshold are assumed to relate to noise and are set to zero; the problem is accordingly assumed to be one of lower dimensionality.

Resampled Efficient Frontier

In his 1998 book, Michaud proposes an alternative method of dealing with the instability of MV optimisation algorithms, using resampling techniques. Resampling (a.k.a. the bootstrap) is a Monte Carlo technique used to calculate statistics when analytic results cannot be derived and/or a non-parametric distribution is assumed. Here we can use it to generate alternative, but consistent, histories from the available data. It is assumed that there exists a true stationary distribution; although stationarity is a big assumption, little can be deduced without it. The true distribution may be a historical distribution, a modification of it or some other assumed distribution. Under Michaud's method, a random sample of multi-period returns is drawn from the distribution and an efficient frontier computed from this alternative history. This step is repeated many times and a distribution of efficient frontiers is obtained. Resampled efficient portfolios are computed as the average of all the portfolio weights at equivalent points on the frontier. This produces much more stable efficient portfolios, as the statistical noise is averaged out. Conversely, the typical MV optimisation process produces unintuitive portfolios of little perceived investment value that are very unstable to small changes of input, because the algorithm searches for corner portfolios and assumes exactitude in the input data. To illustrate, if two assets had near-identical risk and return, the maximum-return portfolio would fully weight the one with the higher return.
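This instability can be sketched directly. The example below uses two hypothetical assets with near-identical characteristics: the classical rule puts everything into whichever asset's sample mean happens to be higher, while Michaud-style resampling averages that choice over bootstrapped alternative histories. Only the maximum-return point of the frontier is resampled here; the full method repeats this at every point on the frontier.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical assets with near-identical true mean and risk
T = 120
returns = rng.normal(loc=[0.0100, 0.0101], scale=0.05, size=(T, 2))

# Classical rule at the maximum-return end of the frontier:
# fully weight whichever asset's sample mean is higher (a corner portfolio)
corner = np.zeros(2)
corner[np.argmax(returns.mean(axis=0))] = 1.0

# Michaud-style resampling of the same decision: bootstrap alternative
# histories, take the corner portfolio in each, and average the weights
n_draws = 1000
weights = np.zeros(2)
for _ in range(n_draws):
    sample = returns[rng.integers(0, T, size=T)]   # one alternative history
    w = np.zeros(2)
    w[np.argmax(sample.mean(axis=0))] = 1.0
    weights += w
weights /= n_draws

print(corner)    # all-or-nothing corner portfolio
print(weights)   # averaged weights; typically far less extreme than the corner
```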
Under resampling, alternative but consistent histories would be generated in which the other asset had the higher return, and the average would be a roughly equal weighting of the two. This portfolio is obviously more consistent with normal investment practice. Taking the analysis a stage further, statistics such as confidence regions can be computed by additional levels of abstraction. It is clearly useful to know whether a proposed portfolio is in fact statistically equivalent to the current one, and so does not justify incurring transaction costs. A further level of abstraction (resampling from the resample) is required to avoid implicitly assuming that the true distribution is in fact known. Unlike backtests, which have no real statistical validity (in or out of sample) because they are based on only a single evolution of history, resampling techniques enable valid statistical conclusions to be drawn, albeit still assuming distributional stationarity.

Implied Alpha Analysis

Unlike MV optimisation, which uses the inverse of the covariance matrix to impose a theoretically optimal portfolio, implied alpha analysis gives the fund manager usable information about his positions. The implied alpha uses the covariance matrix rather than its inverse, and so does not suffer from the concern above in connection with noise eigenvalues. The implied alpha of a position is the excess return on that position which is consistent with its size in the portfolio, taking into account all other positions and correlations. The fund manager can then decide whether this accords with his view of the expected value brought by the position and, if not, adjust the portfolio.

Cointegration

Correlation analysis typically looks at the relationship between returns, which are stationary, rather than prices, which are usually not. If returns are stationary, prices are integrated. However, by differencing prices to get returns, some information is lost: it is possible for series to be cointegrated and not correlated, and vice versa. Price cointegration takes account of long-term trends and relationships, whereas returns have little or no memory. The relevance of cointegration to tracking is that if the tracking error is mean-reverting, correlation will not identify this, but cointegration may. Cointegration aims to maximise the stationarity of a combination of series, rather than minimise return variance; the two objectives are undoubtedly related but may differ. Achieving this requires a search for a cointegrating vector.

Selected References

H. Markowitz (1952). Portfolio Selection.
C. Stein (1955). Inadmissibility of the Usual Estimator of the Mean of the Multivariate Normal Distribution.
O. Ledoit (1999). Improved Estimation of the Covariance Matrix of Stock Returns with an Application to Portfolio Selection.
R. Michaud (1998). Efficient Asset Management.
Davison and Hinkley (1997). Bootstrap Methods and their Application.
C. Alexander (2001). Market Models. (Cointegration: Chapter 12.)
Stuart, Ord and Arnold (1999). Kendall's Advanced Theory of Statistics: Classical Inference and the Linear Model.
A. Harvey (1993). Time Series Models.
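Returning to the implied alpha idea above: under the standard mean-variance first-order condition, optimal weights satisfy alpha = lambda * Sigma * w, so reversing it gives each position's implied alpha directly from the covariance matrix, not its inverse. The sketch below assumes a given risk-aversion scaling lambda, which is an input rather than something the data determines, and the three-stock portfolio is purely hypothetical.

```python
import numpy as np

def implied_alphas(weights, cov, risk_aversion=1.0):
    """Excess return per position consistent with its current weight.

    Reverse optimisation: alpha = risk_aversion * cov @ weights.  Uses the
    covariance matrix itself, not its inverse, so the noisy small
    eigenvalues of the sample covariance do not dominate the answer.
    `risk_aversion` only scales the alphas; their relative sizes are what
    the fund manager compares against his views.
    """
    return risk_aversion * np.asarray(cov) @ np.asarray(weights)

# Hypothetical three-stock portfolio
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.160]])
w = np.array([0.5, 0.3, 0.2])
print(implied_alphas(w, cov))   # implied alphas: 0.0222, 0.0320, 0.0360
```

A position whose implied alpha is far above the manager's actual expected excess return for that stock is, on this analysis, oversized relative to the rest of the portfolio.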
