Quantum RSO

October 8, 2013


In the last post on Random Subspace Optimization (RSO), I introduced a method that reduces the dimensionality of the optimization problem in order to improve the robustness of the results. One concept proposed in that article was to weight the different subspace portfolios in some manner, rather than simply equal-weighting their resulting portfolio weights to form the final portfolio. Theoretically, this should improve out-of-sample performance.
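
To make the weighting idea concrete, below is a minimal Python sketch of RSO with score-weighted subspace portfolios. The min_variance_weights objective, the Sharpe-ratio scoring rule, and all parameter choices are illustrative assumptions rather than anything prescribed in the original post; any objective function and scoring rule could be swapped in.

```python
import numpy as np

def min_variance_weights(cov):
    # Analytical (unconstrained) minimum-variance weights: w proportional to inv(cov) * 1
    inv = np.linalg.pinv(cov)
    w = inv.sum(axis=1)
    return w / w.sum()

def rso(returns, k, n_samples, rng=None, weight_by_score=True):
    """Random Subspace Optimization sketch.

    returns         : (T, N) matrix of asset returns
    k               : number of assets drawn per random subspace
    n_samples       : number of random subspaces to draw
    weight_by_score : if True, weight each subspace portfolio by its in-sample
                      Sharpe ratio instead of equal-weighting (illustrative choice)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_assets = returns.shape[1]
    combined, total = np.zeros(n_assets), 0.0
    for _ in range(n_samples):
        idx = rng.choice(n_assets, size=k, replace=False)    # draw a random subspace
        sub = returns[:, idx]
        w_sub = min_variance_weights(np.cov(sub, rowvar=False))
        port = sub @ w_sub                                    # subspace portfolio return stream
        score = max(port.mean() / (port.std() + 1e-12), 0.0) if weight_by_score else 1.0
        w_full = np.zeros(n_assets)
        w_full[idx] = w_sub                                   # embed into the full universe
        combined += score * w_full
        total += score
    # Fall back to equal weight if every subspace scored zero
    return combined / total if total > 0 else np.full(n_assets, 1.0 / n_assets)
```

For example, rso(np.random.default_rng(1).normal(size=(500, 20)), k=5, n_samples=200) returns a length-20 weight vector; with weight_by_score=False it reduces to the equal-weighted combination described in the original RSO post.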

One logical idea is to compound the algorithm multiple times. This is driven by the notion that complex problems can often be solved more accurately by breaking them down into smaller sub-problems. Quantum theory, the theoretical basis of modern physics, explains the nature and behavior of matter and energy at the atomic and subatomic level. Energy, radiation, and matter can be quantized, that is, divided into increasingly smaller units, which helps to better explain their properties.

By continuing to synthesize and aggregate from smaller subsamples, it may be possible to do a better job of optimizing the universe of assets than by optimizing globally only once with all assets present. There is no reason why RSO can't borrow the same concept to optimally weight the subspace portfolios. Imagine taking the subspace portfolios formed at the first level and then running the same optimization (with the same objective function) using RSO on those subspace portfolios. The analogy would be RSO(RSO), where the first-level RSO portfolios become the "assets" for a new RSO. This is similar to the concept of generations in genetic algorithms. In theory this could proceed multiple times, i.e. RSO(RSO(RSO)). Borrowing a concept from the micro-GA, one could use a small number of samples, run multiple levels of RSO, and then start the process over again, rather than expending computational resources on one large sample.
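
A rough sketch of the RSO(RSO) idea follows, reusing the rso and min_variance_weights helpers from the sketch above. The first level builds subspace portfolios as before; the second level treats their return streams as pseudo-assets, runs the same RSO on them, and pushes the resulting weights back down to the underlying assets. The two-level structure and the parameter choices here are assumptions for illustration, not a specification of the method.

```python
def rso_two_level(returns, k, n_samples, rng=None):
    """RSO(RSO) sketch: first-level subspace portfolios become the "assets"
    of a second RSO pass (hypothetical two-level structure)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_assets = returns.shape[1]
    level1_weights = []   # full-universe weights of each subspace portfolio
    level1_returns = []   # return stream of each subspace portfolio
    for _ in range(n_samples):
        idx = rng.choice(n_assets, size=k, replace=False)
        sub = returns[:, idx]
        w_sub = min_variance_weights(np.cov(sub, rowvar=False))
        w_full = np.zeros(n_assets)
        w_full[idx] = w_sub
        level1_weights.append(w_full)
        level1_returns.append(sub @ w_sub)
    W1 = np.array(level1_weights)        # (n_samples, n_assets)
    R1 = np.array(level1_returns).T      # (T, n_samples): portfolios as pseudo-assets
    # Second level: run the same optimization, via RSO, on the portfolios themselves
    w2 = rso(R1, k=min(k, n_samples), n_samples=n_samples, rng=rng)
    # Map the second-level portfolio weights back onto the original assets
    return w2 @ W1
```

One design note: the blending at the second level (here, score-weighted averaging inside rso) is what keeps the result from collapsing to a single "best" subspace portfolio; as the discussion in the comments points out, stacking optimization layers over the same data otherwise tends back toward the one global optimum.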

 

 

2 Comments
  1. October 8, 2013 5:48 pm

    Recursive or repeated optimization can often be a bad thing — or at least not any better than a single global optimization.

    The purpose of the sub-sampling is to avoid model over-fitting by decreasing the bias in the bias/variance trade-off. The goal is for your out-of-sample results to have an error rate similar to your in-sample results. (This is why so many Kaggle competition winners have been using the Random Forest method; classification problems are very easy to over-fit.)

    Every optimization layer or iteration will again increase the bias. In many scenarios, repeated optimization will end up at an answer indistinguishable from a global optimization. If you are optimizing between the sub-samples over the same data, this is almost a certainty. If you’re using some sort of cross-validation to weight them, you might not have enough data for repeated optimization.

    Optimization is a very fickle mistress who can easily lead you to false confidence in models.

    • david varadi
      October 8, 2013 8:06 pm

      You are correct: the bias increases as you add layers in this case, and too much recursion can lead you back to the exact same optimum found by just doing it once. I pose this as an idea where heuristics for compromise can be generated; too many samples with a high enough k and too much recursion will not solve anything, completely agree.
      RSO without the second layer is a generalization of the random forest. Adding in the second layer needs to be done with fewer dimensions in the inputs (i.e. rank and average, proportionately weight, minimize variance, etc.). There needs to be some blending of the portfolios, otherwise you would just select the absolute optimum and be no better off. That is a point I definitely did not mention and that should be emphasized.
      best
      david
