Cluster Random Subspace Method for Portfolio Management
One of the many areas that I have explored in my own research is creating new methods to improve upon mean-variance optimization. A while back I wrote about the concept of applying the Random Subspace Method (RSM) as a viable alternative that addresses some of the deficiencies in standard portfolio optimization. I called the application of RSM to optimization RSO, which showed promise versus traditional mean-variance for homogeneous universes. The original concept for random subspaces originated at the famous Bell Labs and was designed to reduce dimensionality for prediction or classification. The most popular application of RSM is in "Random Forests," which are used to generate more robust decision trees in machine learning. RSM uses bagging to draw samples of predictors and combine their estimates together in an "ensemble." The primary advantage is that the noise produced by each randomly selected group of predictors tends to be largely unrelated to that of the other groups. As a consequence, the noise gets "cancelled out," and what remains is a more stable and accurate predictor ensemble.
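To make the idea concrete, here is a minimal sketch of RSO-style ensembling in Python. All names and parameters are illustrative, and for simplicity each subset is optimized with a closed-form minimum-variance rule rather than full mean-variance; the point is the structure: draw a random subset of assets, optimize on that subset, and average the resulting weight vectors into an ensemble portfolio.

```python
import numpy as np

def rso_weights(returns, subset_size=5, n_samples=500, seed=0):
    """Illustrative Random Subspace Optimization (RSO) sketch.

    Repeatedly draws a random subset of assets, solves a simple
    closed-form minimum-variance optimization on that subset
    (fully invested, weights sum to 1), and averages the subset
    weight vectors into a single ensemble portfolio.
    """
    rng = np.random.default_rng(seed)
    n_assets = returns.shape[1]
    ensemble = np.zeros(n_assets)
    for _ in range(n_samples):
        # Random subspace: pick a subset of assets without replacement.
        idx = rng.choice(n_assets, size=subset_size, replace=False)
        cov = np.cov(returns[:, idx], rowvar=False)
        # Closed-form minimum-variance weights: w proportional to inv(Cov) @ 1.
        # A small ridge term keeps the solve stable for noisy covariances.
        ones = np.ones(subset_size)
        w = np.linalg.solve(cov + 1e-8 * np.eye(subset_size), ones)
        w /= w.sum()
        ensemble[idx] += w
    # Each subset portfolio sums to 1, so the average does too.
    return ensemble / n_samples
```

Because each random subset produces a different estimation error, averaging the subset portfolios tends to cancel the noise, which is exactly the ensemble effect described above.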
While the RSM framework is statistically sound, it does have some obvious areas of weakness that require a more refined approach. I worked together with Michael Guan of Systematic Edge as an advisor for his computer science thesis on a superior approach called the "Cluster Random Subspace Method" (CRSM). Michael is a very smart guy, and it was a lot of fun working with him. We also received some valuable feedback from Adam Butler of GestaltU. The application we used to demonstrate the advantages of CRSM was portfolio optimization, but the concept can be applied to prediction and classification as well (including Random Forests). The thesis can be found here: CRSO Thesis. I would encourage everyone to read the thesis, but for those that want more of a simple overview, I will be providing a summary in the next post.