The Adaptive Time Machine Series Returns (Part 1)
First, thanks to everyone who commented on the last post; your kind words and support are much appreciated. Getting back to the subject matter, the Adaptive Time Machine, the first installment examines the original algorithm to understand where it can be improved. Quantum Financier did a good series recap on the time machine and also tested the original algorithm on various markets: http://quantumfinancier.wordpress.com/2010/05/10/time-machine-test-part-1/
The original algorithm used a simple t-test, confidence-based method to select from 50 different run-based strategies. Both the entry run and the exit run were between 1 and 5 days in length. Mean-reversion run strategies entered on a down run of “n” days and exited on an up run of “n” days, producing 25 different long/short strategy combinations. Trend-type strategies entered on an up run of “n” days and exited on a down run of “n” days, producing another 25 long/short combinations for a total of 50. It was shown that selecting strategies at increasingly high confidence levels generated superior returns to each prior group, and significantly outperformed buy and hold. This algorithm was able to adapt to changing market environments over time.
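To make the mechanics concrete, here is a minimal sketch of one of the 25 mean-reversion variants. This is my own illustration, not the original code: the function names, the treatment of flat days as down days, and the convention of marking the position on the signal bar are all assumptions.

```python
import numpy as np

def run_lengths(returns):
    """Signed run length at each bar: +k after k consecutive up days,
    -k after k consecutive down days (flat days counted as down for simplicity)."""
    runs = np.zeros(len(returns), dtype=int)
    for i, r in enumerate(returns):
        sign = 1 if r > 0 else -1
        if i > 0 and np.sign(runs[i - 1]) == sign:
            runs[i] = runs[i - 1] + sign  # run continues, extend it
        else:
            runs[i] = sign                # direction flipped, start a new run
    return runs

def mean_reversion_signal(returns, entry_run, exit_run):
    """Long after a down run of `entry_run` days; flat after an up run of
    `exit_run` days -- one of the 25 mean-reversion (entry, exit) variants."""
    runs = run_lengths(returns)
    pos = np.zeros(len(returns), dtype=int)
    holding = 0
    for i in range(len(returns)):
        if holding == 0 and runs[i] <= -entry_run:
            holding = 1   # down run long enough: enter long
        elif holding == 1 and runs[i] >= exit_run:
            holding = 0   # up run long enough: exit
        pos[i] = holding
    return pos
```

Looping entry_run and exit_run over 1–5 reproduces the 25-strategy grid; the trend variants simply swap the entry and exit conditions.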
But the algorithm was far from perfect. Although it proved fairly robust across instruments, in many markets it struggled to beat buy and hold and in some cases even underperformed. The question is why? The most obvious problem with the original algorithm is the strategy selection criteria itself. First of all, the problem with using runs is twofold: 1) run frequency changes over time, with both intra-year variation and secular shifts across years; 2) run frequency varies significantly by instrument, which limits adaptation. As a simple example, a strategy that goes long on a 3-day down run and exits on a 5-day up run will experience periods where it is completely inactive, or even worse, stuck waiting for an exit. If this strategy was selected prior to entering a deep bear market, it is possible that no significant up run would materialize for a month or more. In addition, the frequency of long runs, either up or down, has shrunk over the last few decades, and this transition greatly biases strategy selection: we may end up selecting a strategy that is no longer regularly active. Furthermore, across different markets, while a run of 4 or 5 days is very rare for the S&P 500, it may be quite common for certain commodities or currencies. The best solution to this problem: a normalized up- and down-run indicator using (what else?) the 252-day percentrank!
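One way such a normalized indicator might work, sketched below under my own assumptions (the original implementation is not shown in the post): instead of requiring a fixed n-day run, compute the rolling 252-day percentrank of the signed run length, so that "extreme run" means extreme relative to that instrument's own recent run distribution rather than an absolute day count.

```python
import numpy as np

def percentrank(series, window=252):
    """Fraction of the trailing `window` values that lie below the
    current value; NaN until a full window is available."""
    series = np.asarray(series, dtype=float)
    out = np.full(len(series), np.nan)
    for i in range(window - 1, len(series)):
        w = series[i - window + 1 : i + 1]
        out[i] = (w < w[-1]).mean()  # rank of today's value within the window
    return out

# Usage sketch: given signed run lengths (positive = up run, negative = down run),
# trigger entries on extreme percentiles, e.g. below 0.05 instead of "3-day down run".
# normalized_runs = percentrank(signed_run_lengths, window=252)
```

A threshold like 0.05 or 0.95 then plays the role the fixed run length played before, but adapts automatically across instruments and regimes.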
Another problem with the time machine algorithm relates to the t-test itself, which assumes normality and independence, both highly flawed assumptions for time series data. Non-parametric statistics are slightly better, but they lack the statistical power of more rigorous tests like the bootstrap or Monte Carlo simulation, as popularized by David Aronson in Evidence-Based Technical Analysis. This book is well worth reading, and provides the requisite dose of rigor and skepticism in a world filled with snake oil. Implementing and adapting these tests for hypothesis testing is well beyond the scope of this blog, and is something we have invested a fair bit of time on internally at CSS. Truthfully, the simple t-test or a regression-based t-test will have to suffice for most traders, since conducting the above tests is both complex and computationally intensive. One important point is that many strategies will exceed 95% confidence, and what was sorely lacking in the time machine was a performance selection criterion to isolate the best of the strategies that already pass the confidence test. To improve upon the existing framework we will use a fancy new statistic called “Omega” to select strategies, which is essentially a distribution-based measure of upside versus downside. The Omega is by itself very useful for different applications and will be introduced in a post later this summer.
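Since Omega gets its own post later, here is only a minimal sketch of the common discrete form of the measure: the ratio of probability-weighted gains above a threshold to losses below it. The function name and interface are mine, not a preview of the series' implementation.

```python
import numpy as np

def omega(returns, threshold=0.0):
    """Omega ratio: sum of returns above `threshold` divided by the
    magnitude of returns below it. Values > 1 mean upside outweighs
    downside relative to the chosen threshold."""
    excess = np.asarray(returns, dtype=float) - threshold
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return gains / losses if losses > 0 else np.inf
```

Because it uses the whole return distribution rather than just mean and variance, it can serve as the performance tiebreaker among strategies that already pass the confidence test.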
Finally, the last problem with the time machine pertains to the use of long/short strategies. If a market is perpetually rising, short strategies may not be very profitable and will obscure the strength of their long-side cousins; the converse happens when the market is falling. This makes it more difficult to adapt to what is working, and makes the algorithm less flexible. The best way to solve this issue is to have long-only strategies, short-only strategies, and also long/short strategies. This provides maximum flexibility for strategy selection. Next week I will begin presenting some of the refinements and spend time developing the new and improved algorithm. Stay tuned!