RSO MVO vs Standard MVO Backtest Comparison

October 10, 2013

In a previous post I introduced Random Subspace Optimization (RSO) as a method to reduce dimensionality and improve performance versus standard optimization methods. The concept is theoretically sound and is traditionally applied in machine learning to improve classification accuracy, so it makes sense that it would be useful for portfolio optimization. To test the method, I used a very naive/simplistic RSO model: select "k" assets (a random subspace) from the universe, run classic mean-variance optimization (MVO) on each of "s" such samples, and average the portfolio weights found across all of the samples to produce a final portfolio. The MVO was run unconstrained (longs and shorts permitted) to reduce computation time, since there is a closed-form solution. Two datasets were used: the first is an 8-ETF universe used in previous studies for the Minimum Correlation and Minimum Variance algorithms; the second is the set of S&P sector SPDR ETFs. Here are the parameters and the results:

[Table: RSO MVO vs. standard MVO, parameters and backtest results for both universes]


On these two universes, with this set of parameters, RSO mean-variance was a clear winner in terms of both returns and risk-adjusted returns, and the results are even more compelling when you factor in the lower average exposure that results from averaging across 100 portfolios. Turnover is also more stable, which is to be expected because of the averaging process. Results were best in these two cases when k<=3, but virtually all values of k outperformed the baseline. The choice of k is certainly a bit clunky (as in nearest-neighbour analysis), and it needs to be either optimized or considered in relation to the number of assets in the universe. The averaging process across portfolios is also naive: it doesn't care whether the objective function is high or low for a given portfolio. There are a lot of ways to improve upon this baseline RSO version. I haven't done extensive testing at this point, but theory and preliminary results suggest a modest improvement over baseline MVO (and other types of optimization). RSO is not a magic bullet per se, but in this case it appears better able to handle noisy datasets at the very least, where the matrix inversion used within typical unconstrained MVO can be unstable.
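The post does not include code, so here is a minimal Python sketch of the baseline RSO procedure as described: draw s random subsets of k assets, solve the closed-form unconstrained mean-variance problem on each subset, and average the weight vectors. The function names and the choice to scale each sub-portfolio to unit gross exposure are my own assumptions; the post does not specify the scaling convention.

```python
import numpy as np

def mvo_unconstrained(mu, cov):
    """Closed-form unconstrained mean-variance (max-Sharpe direction): w ∝ inv(cov) @ mu.

    Scaling to unit gross exposure (sum of |weights| = 1) is an assumption,
    chosen so long/short portfolios are comparable across subsets.
    """
    w = np.linalg.solve(cov, mu)
    return w / np.abs(w).sum()

def rso_mvo(mu, cov, k=3, s=100, seed=None):
    """Random Subspace Optimization: average MVO weights over s random k-asset subsets."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    total = np.zeros(n)
    for _ in range(s):
        idx = rng.choice(n, size=k, replace=False)   # pick a random subspace of k assets
        w = np.zeros(n)
        w[idx] = mvo_unconstrained(mu[idx], cov[np.ix_(idx, idx)])
        total += w
    return total / s

# Illustrative 8-asset example with synthetic inputs (not the ETF data from the post)
rng = np.random.default_rng(0)
n = 8
mu = rng.normal(0.05, 0.02, size=n)
A = rng.normal(size=(n, n))
cov = A @ A.T / n + 0.01 * np.eye(n)   # well-conditioned positive-definite covariance
weights = rso_mvo(mu, cov, k=3, s=100, seed=42)
```

Note that because each sub-portfolio has gross exposure of 1, the averaged portfolio's gross exposure is at most 1, which is consistent with the lower average exposure observed above.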

5 Comments
  1. Kostas
    October 11, 2013 1:46 am

    David, is there a spreadsheet model for the test?

    • david varadi
      October 15, 2013 7:20 pm

      hi Kostas, there is currently no spreadsheet model- but perhaps in the future we will provide one. good question.
      best
      david

  2. October 14, 2013 11:43 am

    Regarding your min variance post, you show CAGR of 12%, and the test in this post shows 4.84% – why is there such a big difference – comparing the 2 methods…

    https://cssanalytics.wordpress.com/2013/04/04/minimum-variance-algorithm-mva-test-drive/

    • david varadi
      October 15, 2013 7:21 pm

      hi, this is mean-variance/max sharpe not minimum variance. furthermore, this is long/short. hope that helps.
      best
      david

Trackbacks

  1. Cluster Random Subspace- A Process Diagram | CSSA
