
Busting the Efficient Markets Hypothesis: The Adaptive Market Time Machine

September 14, 2009

In this series of posts, I will challenge the Efficient Markets Hypothesis with the introduction of a methodology that uses bias-free adaptive learning algorithms. I will show that a learning algorithm given no prior information or assumptions can find profitable patterns in short-term data and handily beat the market with less risk. This “Adaptive Market Time Machine” will start off trading the S&P 500 index in 1955 with no other tools than the past sequence of runs over the last 5 trading days. The time machine is not a black box; it conducts experiments and uses basic statistics that any scientist or well-versed “quant-oriented” trader could perform. Further adding to the realism of the experiment, run data, unlike most technical analysis tools today, is information that could realistically have been used by a trader back in 1955.
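The post does not spell out how the run sequence is encoded (those details come in the next article), but a minimal sketch of how “the past sequence of runs over the last 5 trading days” could be turned into a discrete market state is shown below. The sign-pattern scheme, the use of closing prices, and the `spx_closes` series are assumptions for illustration, not the Time Machine's actual inputs.

```python
# A minimal sketch (not the actual Time Machine) of how "the past sequence of
# runs over the last 5 trading days" could be encoded as a discrete market state.
# The sign-pattern scheme and the use of closing prices are assumptions; the
# post does not specify the encoding.

import numpy as np
import pandas as pd

def run_state(closes: pd.Series, lookback: int = 5) -> pd.Series:
    """Map each day to the tuple of up/down signs of the prior `lookback` daily moves."""
    signs = np.sign(closes.diff())          # +1 up day, -1 down day, 0 unchanged
    states = []
    for i in range(len(closes)):
        if i < lookback:
            states.append(None)             # not enough history yet
        else:
            window = signs.iloc[i - lookback + 1 : i + 1]
            states.append(tuple(int(s) for s in window))
    return pd.Series(states, index=closes.index, name="run_state")

# Example (hypothetical `spx_closes` price series): count the distinct 5-day
# sign patterns that actually occur in the history.
# n_patterns = run_state(spx_closes).dropna().nunique()
```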

The Efficient Markets Hypothesis (EMH) is the dreaded condemnation of mediocrity bestowed upon all of us in the investment industry by the academic world. It roughly states that no one can be expected to systematically outperform the market over time, and that those who do are simply lucky. For a modern academic review and excellent background on the EMH, please read this paper by Andrew Lo: http://web.mit.edu/alo/www/Papers/EMH_Final.pdf

Traders and portfolio managers often respond that the EMH does not work in practice: they have backtested several strategies that have consistently beaten the market over the long run. The high priests and founders of the EMH would counter that they are simply data-mining; that is, they are finding by chance the rule or handful of rules that worked in the past, which does not mean those rules will work in the future. This is a very valid point: how do we know that the process is being done in a way that actually generalizes out of sample in real life? Simple out-of-sample or walk-forward testing is not enough. You may validate that a specific strategy is robust, but not the process and method of backtesting as it applies to a variety of approaches and indicators. That is, the research process itself must be generalizable; otherwise you are simply validating that a strong effect exists for a given strategy that happens to work in practice, which does not mean that the same method and research process will be able to discover and validate new effects that also hold out of sample.

There are other pitfalls that are difficult to see: how do we know that the backtesting was not simply biased towards a specific market climate? When trend-following dominates, the “best” strategies will be trend-oriented, but how do you know when they are starting to fail? How do you know if the regime is changing? The only way to REALLY know is to mimic the process of intelligent and well-thought-out backtesting and create a machine with 1) no prior knowledge and 2) no prior bias towards any given strategy. You would then take that machine and let it conduct tests and trade through new environments over time.

One of the best examples of an adaptive process used the strategy of daily follow-through and was first detailed by Michael Stokes at MarketSci: http://marketsci.wordpress.com/2008/11/19/the-simple-made-powerful-with-adaptation/ The best and only academic article of substance on this concept applied to the stock market is by a few Canadian professors (probably why it’s obscure), and I strongly recommend that you read it: http://docs.google.com/gview?a=v&q=cache:lBeGPkJ5TMMJ:www.fma.org/SLC/Papers/cnPKR161m.pdf+Can+Machine+Learning+Challenge+the+Efficient+Market+Hypothesis%3F&hl=en&gl=ca

Having read a great deal, and having tested machine learning and neural networks, I can tell you that the adaptive market time machine is completely different. The manner in which it makes decisions and the results are intuitive, unlike neural networks, which find relationships that humans cannot possibly understand. Unlike machine learning, it does not employ sophisticated non-linear regression techniques. The technology is most comparable to particle swarm optimization, but with distinct differences. At its very root, it conducts statistical tests and uses a very robust evolving mechanism to figure out what is working and how things are changing. More on the time machine in the next article.
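The mechanics are left for the next article, but the spirit of the idea, re-running unbiased statistical tests at every step and only trading what currently passes them, can be sketched as a simple walk-forward loop. Everything in this sketch (the candidate rule matrix, the one-year window, the t-statistic cutoff) is an assumption for illustration rather than the actual specification of the Adaptive Market Time Machine.

```python
# A hedged walk-forward sketch of "no prior knowledge, no prior bias": at each
# step, every candidate rule is re-tested on trailing data and only rules whose
# mean daily P&L is statistically distinguishable from zero are traded the next
# day. The rule matrix, the one-year window, and the t-stat cutoff are
# illustrative assumptions, not the Time Machine's actual specification.

import numpy as np
from scipy import stats

def walk_forward(returns: np.ndarray, signals: np.ndarray,
                 window: int = 252, t_cutoff: float = 2.0) -> np.ndarray:
    """
    returns: daily market returns, shape (T,).
    signals: candidate rule positions (+1, 0, -1), shape (T, n_rules); row t is
             assumed to be decided from information available before day t's
             return, so multiplying row t by returns[t] involves no lookahead.
    Returns the daily return series of the adaptive portfolio.
    """
    T, n_rules = signals.shape
    port = np.zeros(T)
    for t in range(window, T):
        # Daily P&L of every candidate rule over the trailing window.
        pnl = signals[t - window:t] * returns[t - window:t, None]
        # One-sample t-statistic of each rule's P&L against zero (skip flat rules).
        t_stats = np.full(n_rules, -np.inf)
        for k in np.where(pnl.std(axis=0) > 0)[0]:
            t_stats[k] = stats.ttest_1samp(pnl[:, k], 0.0).statistic
        keep = t_stats > t_cutoff
        if keep.any():
            # Equal-weight the rules that currently pass the significance test.
            port[t] = (signals[t, keep] * returns[t]).mean()
    return port
```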

21 Comments
  1. September 14, 2009 1:14 am

    David, curious to see where you take this–given there are many interesting angles on which to define regime detection and dynamic adaptive evolution.

  2. Joe Marc permalink
    September 14, 2009 5:31 am

    Intriguing. Keep this up and you will soon be the most popular finance analysis blog out there. Dynamite. Big money report also. As several other comments have stated, we REALLY appreciate your sharing with us.

    • david varadi permalink*
      September 14, 2009 7:42 am

      thanks joe, no problem love to share. cheers,
      dv

  3. September 14, 2009 12:03 pm

    David, I enjoy your blog even though I have only recently discovered it. I do want to take issue with this post though. I do not really think it is fair to send in a computer and complicated software for periods of time in which that type of technology was impossible. The computing power we have at our fingertips now is immense. I mean, we went to the moon on slide rules! The way the market works now with quant funds and the recent market “volatility” has shown that when something works extremely well, everyone jumps on board and the opportunities to exploit the relationships evaporate. I would like nothing more than to see this “time machine” perform out of sample as well as it will in sample. I would then have no legs to stand on in this argument and would wholeheartedly retract my dissent.

    • david varadi permalink*
      September 14, 2009 12:07 pm

      hi brad, thanks for the kind words. actually all of the technology is fairly simple and was in fact available at that time, which is why i did not use other methods as an example. so i agree completely in principle with what you are saying, but not as it applies to this concept.

  4. MDan permalink
    September 14, 2009 12:03 pm

    David, this experiment is very interesting. I am looking forward to seeing where you are going to take it, since I have a few comments that I think you might find interesting. But everything in its time.

    • david varadi permalink*
      September 14, 2009 12:07 pm

      looking forward to it!

      dv

  5. quant permalink
    September 14, 2009 12:27 pm

    As you know, one can trade index futures but not the index directly. Will reported profits on the S&P 500 index be spurious due to issues such as stale pricing and large bid-ask spreads if one tried to transact in all the stocks?

    That said, I enjoy your site and look forward to your findings.

    • david varadi permalink*
      September 15, 2009 3:07 pm

      hi, there shouldn’t be much of a difference historically, but now people can trade the SPY ETF directly.

      dv

  6. John permalink
    September 14, 2009 5:41 pm

    Not only are you a great researcher but you are an excellent writer as well. If this was a book, I would be chomping at the bit to read the next page. Thanks for sharing your ideas!

    John

    • david varadi permalink*
      September 15, 2009 12:05 am

      thanks john im blushing. :o)
      dv

  7. WTP permalink
    September 21, 2009 1:11 pm

    David,
    Thank you so much for the incredible body of work you are sharing; it is helping me immeasurably.
    At the risk of sounding dense, allow me to verify my understanding of your approach:
    • 50 possible strategies are available to be executed in each 5 day period.
    • The Time Machine runs tests over the prior 1- and 3-year periods to rank-order the results of these 50 strategies, using t-statistics and the resulting confidence levels to guide which strategies should be traded.
    Assuming my summary is correct, could you please clarify:
    • What learning guided you to 1 & 3 years? Is there evidence that regime changes are shifting more quickly over time (implying a need for shorter look back periods in the future)?
    • Are the 1 & 3 years equally weighted?
    • Do entries occur at the open and exits at the close?
    Thanks again for helping to clarify.
    Kind Regards,
    WTP

    • david varadi permalink*
      September 21, 2009 1:15 pm

      wtp, the 1/3 year (equal weighted) was selected arbitrarily because it approximates the required time length to evaluate the average parameter length (ie long parameters require long lookbacks and vice versa). The required lookback was two, but i used 3/1 to account for the fact that some strategies would transact less. all entries/exits occur at the close. (a rough sketch of this confidence test follows below this reply.)

      thanks for the compliments
      dv
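      As a concrete (and speculative) reading of the reply above, a blended 1- and 3-year confidence test might look like the sketch below; the 252/756 trading-day window lengths, the simple average of the two t-statistics, and the cutoff are assumptions about what “1/3 year (equal weighted)” means in practice.

      ```python
      # A hedged sketch of an equal-weighted 1- and 3-year confidence test, as one
      # possible reading of the reply above. The 252/756 trading-day window lengths,
      # the simple average of the two t-statistics, and the cutoff below are
      # assumptions, not the Time Machine's actual specification.

      import numpy as np
      from scipy import stats

      def blended_t_stat(strategy_daily_pnl, short_window: int = 252,
                         long_window: int = 756) -> float:
          """Average the one-sample t-statistics of the strategy's daily P&L over the
          trailing short (~1 year) and long (~3 year) windows."""
          r = np.asarray(strategy_daily_pnl, dtype=float)
          t_short = stats.ttest_1samp(r[-short_window:], 0.0).statistic
          t_long = stats.ttest_1samp(r[-long_window:], 0.0).statistic
          return 0.5 * (t_short + t_long)

      # Example: take the trade at the close (entries and exits are at the close, per
      # the reply) only if the blended confidence clears an assumed one-sided cutoff.
      # if blended_t_stat(pnl_history) > 1.65:    # `pnl_history` is hypothetical
      #     ...
      ```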

  8. theFB permalink
    December 12, 2011 10:00 am

    Hi David, this post (and, to be honest, all of your blog) makes for fascinating reading. I’m working on various projects that parallel some of the ideas you touch on here, and it’s exciting to see someone explain the concepts that swim around in my head so clearly and concisely. Well done.

    Just out of interest, it seems the Google Docs link to the Canadian professors’ paper is broken. Would you be able to supply a fresh link, or send me a copy by email?

    Thanks and keep up the good work,
    theFB

    • david varadi permalink*
      October 8, 2013 12:59 am

      hi FB, thank you. i will forward it to you as well if i find it (see my comment to Jim above).
      best
      david

  9. Jim permalink
    October 6, 2013 11:59 pm

    Great blog. The link to one of the reference papers is broken. Does anyone know where to find the paper entitled “Can Machine Learning Challenge the Efficient Market Hypothesis?”? I searched the web without any success. Thanks

    • david varadi permalink*
      October 8, 2013 12:57 am

      hi Jim, thank you. I can’t seem to find it either. I will look on one of my old hard drives and see if i can dig up a copy and send to you if i have success.
      best
      david

Trackbacks

  1. The Adaptive Time Machine: The Importance of Statistical Filters « CSS Analytics
  2. Link Feast 3-1-10! « The Edge
  3. Time Machine Test (Part 1) « Quantum Financier
  4. You Analyze, I Analyze, We Analyze, But we disagree (Part 3) « Financialfreezeframe's Blog
