# A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices

## Olivier Ledoit and Michael Wolf

### Abstract

Many economic problems require a covariance matrix estimator that is not only
invertible, but also well-conditioned (that is, inverting it does not amplify
estimation error). For large-dimensional covariance matrices, the usual
estimator - the sample covariance matrix - is typically not well-conditioned and
may not even be invertible. This paper introduces an estimator that is both
well-conditioned *and* more accurate than the sample covariance matrix
asymptotically. This estimator is distribution-free and has a simple explicit
formula that is easy to compute and interpret. It is the asymptotically optimal
convex combination of the sample covariance matrix with the identity matrix.
Optimality is meant with respect to a quadratic loss function, asymptotically as
the number of observations and the number of variables go to infinity together.
Extensive Monte Carlo simulations confirm that the asymptotic results tend to
hold well in finite samples.
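To make the abstract's description concrete, here is a minimal Python/NumPy sketch of the estimator: the sample covariance matrix is shrunk toward a scaled identity target, with the shrinkage intensity estimated from the data. Function and variable names are my own; the official Matlab implementation mentioned below is the authoritative reference.

```python
import numpy as np

def ledoit_wolf_shrinkage(X):
    """Convex combination of the sample covariance matrix and a scaled identity.

    A sketch of the estimator described in the paper. X is an n-by-p data
    matrix (n observations of p variables). Returns a p-by-p matrix that is
    well-conditioned even when p exceeds n.
    """
    n, p = X.shape
    X = X - X.mean(axis=0)            # demean each variable
    S = X.T @ X / n                   # sample covariance (1/n convention)

    mu = np.trace(S) / p              # scale of the identity target: mu * I
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p   # squared distance of S to target

    # average squared distance of the rank-one matrices x_k x_k' to S,
    # which estimates the error of the sample covariance matrix
    b_bar2 = 0.0
    for k in range(n):
        xk = X[k][:, None]
        b_bar2 += np.sum((xk @ xk.T - S) ** 2) / p
    b_bar2 /= n ** 2
    b2 = min(b_bar2, d2)              # estimation-error component
    a2 = d2 - b2                      # signal component

    # asymptotically optimal convex combination under quadratic loss
    return (b2 / d2) * mu * np.eye(p) + (a2 / d2) * S
```

The shrinkage weight b2/d2 lies between 0 and 1: it is large when the sample covariance matrix is noisy relative to its distance from the identity target (e.g. when p is large relative to n), and small otherwise.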

This paper is based on the one I presented on the academic job market under the
title "Portfolio Selection: Improved Covariance Matrix Estimation". It
constituted the first chapter of my 1995 Finance PhD thesis at MIT, Essays on
Risk and Return in the Stock Market. I was invited to present it at UCLA, the
University of Chicago, Wharton, and Yale, all of which offered me tenure-track
positions as Assistant Professor of Finance. I also presented it at the Q Group,
which awarded me the Roger F. Murray prize.

The Matlab code for the estimator proposed in the paper can be downloaded for
free from the website of my co-author Michael Wolf in the Department of
Economics at the University of Zurich.

Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004,
pages 365-411.
