
Minimum Variance Algorithm (MVA) Test Drive

April 4, 2013

The Minimum Variance Algorithm (MVA) follows much of the same logic as the Minimum Correlation Algorithm (MCA) and differs primarily in the objective function, which is to minimize portfolio variance rather than correlations. Both are “heuristic” algorithms that seek to approximate the results of more complex methods that require quadratic optimization. In a recent whitepaper, Newfound performed various simulations and came to the same conclusion that I have shared for a long time: when there is uncertainty in parameter inputs such as returns, correlations and volatilities, simple heuristic methods achieve results that are equivalent to those of more complex optimization methods. It is therefore feasible that good heuristic methods can exceed the performance of their more complex counterparts, especially if they are designed to be less sensitive to parameter uncertainty.

The core principle of both MVA and MCA is to use proportional allocations to generate weightings, because these are more stable than discrete selection of assets and weights. This principle is supported by information theorists and is used frequently in technological applications; Cover addresses it in his work on Universal Portfolios, and a good summary article is presented on Ernie Chan’s blog. Another shared aspect of MVA and MCA is the use of a Gaussian transformation to normalize the relative average correlations/covariances. MVA is very similar to “mincorr2” (see the whitepaper for more details): it simply finds the average covariance of each asset versus all other assets (including its own variance) and then converts the average value for each asset to a cross-sectional distribution using normalization. This is used to proportionately weight each asset and produce an initial set of weights. The final weights are derived by multiplying each initial asset weight by its inverse variance and then re-levering so the weights sum to 100%. The result is that the weights reflect both an asset’s own relative variance and its average covariance to the universe of assets. At the same time, the weights are less dependent on correlation estimates (which are critical in full minimum variance optimization but are noisier than volatility estimates) and do a better job of distributing risk, since allocations are made to all assets in the universe.
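The steps above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the author's exact implementation: in particular, the precise form of the Gaussian transformation (here, z-scoring the average covariances cross-sectionally and mapping through the normal CDF so that lower average covariance receives a higher initial weight) is an assumption modeled on the mincorr2 description in the whitepaper.

```python
import numpy as np
from scipy.stats import norm

def mva_weights(returns):
    """Heuristic minimum-variance weights following the MVA steps described
    above (a sketch; normalization details are assumptions)."""
    # Sample variance-covariance matrix (the post uses a 60-day window).
    cov = np.cov(returns, rowvar=False)
    # Average covariance of each asset vs. all assets, including its own variance.
    avg_cov = cov.mean(axis=1)
    # Gaussian transformation: z-score cross-sectionally, map through the
    # normal CDF; lower average covariance -> higher initial weight.
    z = (avg_cov - avg_cov.mean()) / avg_cov.std()
    initial = 1.0 - norm.cdf(z)
    initial /= initial.sum()
    # Multiply by inverse variance, then re-lever so weights sum to 100%.
    w = initial / np.diag(cov)
    return w / w.sum()
```

All weights are strictly positive by construction, which is one reason the method distributes risk across the whole universe rather than concentrating in a few assets.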

Below is a backtest of MVA on the eight highly liquid ETFs used for the original MCA tests, starting in 2003. The variance-covariance matrix uses a 60-day lookback with weekly rebalancing. The benchmark is equal weight:

[Chart: MVA backtest performance vs. equal weight benchmark]

[Table: MVA yearly performance]

As you can see, MVA achieves a high Sharpe ratio (higher than MCA) and slightly superior returns to an equal weight portfolio with less than 50% of the volatility. The benchmark analysis shows that MVA is simply a means to efficiently reduce downside relative to an equal weight portfolio, and this comes at the cost of some upside performance. Using a continuous distribution measurement, MVA captures 75% of the upside of the equal weight index in bull markets and only 50% of the downside in bear markets. The results of this one test are not meant to be conclusive, but I have run a large range of tests on different universes, both long-term tests on index data and tests on recent ETF data, and have found similar results. While there is nothing magical about MVA, it supports the point that a heuristic method can be very effective, especially with noisy time series data. For the sake of practicality, it can be implemented easily in just about any platform and, like MCA, can be computed very quickly for large datasets. There isn’t really a good case to employ quadratic optimization to minimize variance unless you need to handle different constraints. While I haven’t done much in the way of comparisons between the two, I would imagine that MVA would perform at least as well across a wide range of universes.
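The upside/downside capture figures quoted above can be illustrated with a simple ratio of average portfolio return to average benchmark return in up versus down benchmark periods. This is a generic sketch of capture ratios; the post's "continuous distribution measurement" may be computed differently.

```python
import numpy as np

def capture_ratios(port, bench):
    """Upside (downside) capture: mean portfolio return divided by mean
    benchmark return over periods where the benchmark was up (down)."""
    up = bench > 0
    upside = port[up].mean() / bench[up].mean()
    downside = port[~up].mean() / bench[~up].mean()
    return upside, downside
```

A downside capture well below the upside capture (e.g. 50% vs. 75%, as in the test above) is the signature of a strategy that trims losses more than it trims gains.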


4 Comments leave one →
  1. Arthur permalink
    April 4, 2013 2:36 am

    David, your reports are really cool. What are you using to generate them? I don’t recognise any commercial app. A home-made application? A spreadsheet where you put a lot of love and work?

    • david varadi permalink*
      April 5, 2013 1:50 am

      hi Arthur, thank you…Corey Rittenhouse built some custom software in VBA/excel to generate them–he is a budding graphical artist along with being a good programmer.

  2. stefan permalink
    April 4, 2013 5:11 am

    Thank you so much for sharing!
    A question: do you average the normalized covariances OR the rank weighted normalized covariances (like in mincorr2 in MCA)?

    • david varadi permalink*
      April 5, 2013 1:49 am

      hi stefan, thank you. I just posted a spreadsheet example.
