Busting the Efficient Markets Hypothesis: The Adaptive Market Time Machine
In this series of posts, I will challenge the Efficient Markets Hypothesis by introducing a methodology built on bias-free adaptive learning algorithms. I will show that a learning algorithm given no prior information or assumptions can find profitable patterns in short-term data and handily beat the market with less risk. This “Adaptive Market Time Machine” will start off trading the S&P 500 index in 1955 with no tools other than the sequence of runs over the past 5 trading days. The time machine is not a black box: it conducts experiments and uses basic statistics that any scientist or well-versed “quant-oriented” trader could perform. Adding further to the realism of the experiment, run data, unlike most technical analysis tools today, is information that could realistically have been used by a trader back in 1955.
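The post does not spell out the mechanics, but conditioning on the last 5 days' run sequence might look something like the sketch below. The synthetic returns, the 5-day up/down pattern encoding, and the 10-observation threshold are all my illustrative assumptions, not the time machine's actual rules:

```python
import random
import statistics

random.seed(42)

# Synthetic daily returns standing in for S&P 500 data (the post uses
# actual index data from 1955 onward; this series is illustrative only).
returns = [random.gauss(0.0003, 0.01) for _ in range(2000)]

# Encode each day as 'U' (up) or 'D' (down); a 5-day pattern like
# "UUDUD" is the kind of run/sequence information a 1955 trader could
# have tabulated by hand.
signs = ['U' if r > 0 else 'D' for r in returns]

history = {}  # pattern -> next-day returns observed so far (no look-ahead)
equity = []

for t in range(5, len(returns) - 1):
    pattern = ''.join(signs[t - 5:t])
    past = history.get(pattern, [])
    # Basic statistics: go long only if this pattern's historical mean
    # next-day return is positive and we have a handful of observations.
    position = 1 if len(past) >= 10 and statistics.mean(past) > 0 else 0
    equity.append(position * returns[t])
    # Record the outcome only after trading it, so the rule is adaptive
    # and uses strictly past information.
    history.setdefault(pattern, []).append(returns[t])

print(f"distinct 5-day patterns seen: {len(history)}")
print(f"days in the market: {sum(1 for e in equity if e != 0)}")
```

There are at most 2^5 = 32 possible patterns, which is why run data is tractable with pencil-and-paper statistics rather than heavy computation.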
The Efficient Markets Hypothesis (EMH) is the dreaded condemnation of mediocrity bestowed upon all of us in the investment industry by the academic world. It roughly states that no one can be expected to systematically outperform the market over time, and that those who do are simply lucky. For a modern academic review and excellent background on the EMH, please read this paper by Andrew Lo: http://web.mit.edu/alo/www/Papers/EMH_Final.pdf
Traders and portfolio managers often respond that the EMH does not hold in practice: they have backtested several strategies that have consistently beaten the market in the long run. The high priests and founders of the EMH would respond that they are simply data-mining; that is, they are finding by chance the rule or handful of rules that worked in the past, which does not mean those rules will work in the future. This is a very valid point: how do we know that the process is being done in a way that actually generalizes out of sample in real life? Simple out-of-sample or walk-forward testing is not enough: you may validate that a specific strategy is robust, but not the process and method of backtesting as it applies to a variety of approaches/indicators. That is, the research process itself must be generalizable; otherwise you are simply validating that a strong effect exists for a given strategy that works in practice. This does not mean that the same method/research process will be able to discover and validate new effects that will also work out of sample.
There are other pitfalls that are difficult to see. How do we know that the backtesting was not simply biased towards a specific market climate? When trend-following dominates, the “best” strategies will be trend-oriented, but how do you know when they are starting to fail? How do you know if the regime is changing? The only way to REALLY know is to mimic the process of intelligent and well-thought-out backtesting and create a machine with 1) no prior knowledge and 2) no prior bias towards any given strategy. You would then take that machine and let it conduct tests and trade through new environments over time.

One of the best examples of an adaptive process used the strategy of daily follow-through and was first detailed by Michael Stokes at MarketSci: http://marketsci.wordpress.com/2008/11/19/the-simple-made-powerful-with-adaptation/ The best and only academic article of substance on this concept applied to the stock market is by a few Canadian professors (probably why it's obscure), and I strongly recommend that you read it: http://docs.google.com/gview?a=v&q=cache:lBeGPkJ5TMMJ:www.fma.org/SLC/Papers/cnPKR161m.pdf+Can+Machine+Learning+Challenge+the+Efficient+Market+Hypothesis%3F&hl=en&gl=ca

Having read a great deal, and having tested machine learning and neural networks myself, I can tell you that the adaptive market time machine is completely different. The manner in which it makes decisions, and its results, are intuitive, unlike neural networks, which find relationships that humans cannot possibly understand. Unlike typical machine learning, it does not employ sophisticated non-linear regression techniques. The technology is most comparable to particle swarm optimization, but with distinct differences. At its very root, it conducts statistical tests and uses a very robust evolving mechanism to figure out what is working and how things are changing. More on the time machine in the next article.
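To make the "no prior knowledge, no prior bias" idea concrete, here is a minimal sketch of an adaptive process that lets two opposite rules compete on recent data and trades only the statistically stronger one. The two rules, the 60-day window, and the crude t-statistic threshold are my own hypothetical choices for illustration, not the time machine's actual mechanism:

```python
import random
import statistics

random.seed(1)

# Synthetic returns; a real test would use actual index data.
returns = [random.gauss(0.0002, 0.01) for _ in range(1500)]

# Two opposite candidate rules with no a-priori preference between them:
# follow-through (today's sign persists) vs. mean reversion (it flips).
def follow(prev_r): return 1 if prev_r > 0 else -1
def revert(prev_r): return -1 if prev_r > 0 else 1

LOOKBACK = 60  # rolling window over which the "experiment" is conducted
equity = []

for t in range(LOOKBACK + 1, len(returns)):
    scores = {}
    for name, rule in (("follow", follow), ("revert", revert)):
        # Hypothetical score: each rule's mean return over the window
        # scaled by its standard error -- a crude t-statistic, i.e. the
        # "basic statistics" the post talks about.
        pnl = [rule(returns[i - 1]) * returns[i]
               for i in range(t - LOOKBACK, t)]
        se = statistics.stdev(pnl) / len(pnl) ** 0.5
        scores[name] = statistics.mean(pnl) / se if se > 0 else 0.0
    best = max(scores, key=scores.get)
    # Trade only when the winning rule looks distinguishable from noise;
    # otherwise stay in cash, reflecting zero bias toward either regime.
    if scores[best] > 1.0:
        rule = follow if best == "follow" else revert
        equity.append(rule(returns[t - 1]) * returns[t])
    else:
        equity.append(0.0)

print(f"days traded: {sum(1 for e in equity if e != 0.0)} of {len(equity)}")
```

Because the machine re-runs its test every day, a shift from a trending to a mean-reverting climate would show up as the scores crossing, which is exactly the regime-change detection the paragraph above asks for.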