
Inter-market Effects

August 12, 2010

One subject that is noticeably absent in the quantitative blogosphere is the measurement and use of inter-market variables for market prediction. Inter-market variables can be defined broadly as any major market or markets likely to affect an individual market or stock, such as interest rates, currencies, and commodities, to name a few. The primary reason inter-market research is absent from much of the literature is that, excepting interest rates, there do not appear to be any clear and consistent effects that forecast the S&P 500 in the short term over time. As a consequence, this area has been largely untouched yet fertile ground. Frank Hassler recently wrote a good article about using sector returns to predict the SPY: http://engineering-returns.com/2010/08/08/spy-xlv-xlu-sector-performance/. That article showed, essentially, that when defensive sectors are falling, the S&P 500 does better the next day, which probably reflects sentiment factors (investors dumping defensive issues) rather than a true inter-market effect, though there may also be a relationship between XLU and interest rates affecting returns in this case.
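To make the idea concrete, here is a minimal sketch of that kind of conditional test, assuming daily CSV files with Date and Close columns for SPY and XLU (the file names are placeholders, and this is only the flavor of the test, not Frank's exact study):

```python
# Minimal sketch: does next-day SPY return differ when a defensive sector
# ETF (XLU here) fell today versus rose today? File paths are placeholders.
import pandas as pd

spy = pd.read_csv("SPY.csv", index_col="Date", parse_dates=True)["Close"]
xlu = pd.read_csv("XLU.csv", index_col="Date", parse_dates=True)["Close"]

df = pd.DataFrame({"spy": spy, "xlu": xlu}).dropna()
df["spy_ret"] = df["spy"].pct_change()
df["xlu_ret"] = df["xlu"].pct_change()

# Condition: XLU down today; outcome: SPY return tomorrow
df["xlu_down"] = df["xlu_ret"] < 0
df["spy_next"] = df["spy_ret"].shift(-1)

# Average next-day SPY return after XLU-down days versus XLU-up days
summary = df.groupby("xlu_down")["spy_next"].agg(["mean", "count"])
print(summary)
```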

The last two years highlight the importance of inter-market factors on the market, yet few have tapped the immense potential of this approach to date. After running hundreds of backtests spanning many different types of tests using technical variables, it became obvious to me that certain time periods were difficult for technical factors regardless of the approach. Once I started testing inter-market variables, it became immediately apparent that these same difficult periods were handled exceptionally well using this approach. My initial conclusion is that during certain periods the inter-market variables are far more important than taking advantage of feedback loops or sentiment. This is especially true with individual stocks, which tend to provide an enormous opportunity to increase or reduce exposure to such effects when the timing is right.

It is my opinion that most quants have too much of a bias towards very long-term backtests (greater than ten years), and in no other area does this distort research more than in the study of inter-market effects. The major problem with inter-market effects is that they are always changing depending on the economic environment. In contrast, most mean-reversion, trend, or price effects relate to biases in human behavior, which tend to be far more consistent over time. Every decade brings a new economy with new market players and salient factors that echo past economic situations only to a minor degree. In some decades the US dollar is strong and no one is worried about deficits, while inflation might be a concern. In other decades the dollar is weak and people worry about deficits and the specter of deflation. In still other cases the relationships may be somewhere in between or completely unique. Truthfully, the cycles of economic themes run much faster than a ten-year window and can change direction multiple times in the interim. This is because Fed policy and government intervention can change the direction of certain variables, and their effect on the markets can be either highly temporary or sometimes longer-term. In either case there are a lot of factors at work, and a dynamic approach is absolutely necessary.

The problem with trying to find effects over a decade or longer is that you end up averaging out effects that have changed direction but were exceptionally strong and predictable in the short term. To capture inter-market effects you have to track them on a rolling basis over much shorter windows. This is a far cry from the long-term and static backtesting that most quants are used to doing. My advice is that inter-market effects can be discovered and applied effectively using only a few years of data history, and in some cases much less. That said, teasing out these effects and their correlations with other inter-market variables can be a tricky exercise, and data pre-processing becomes a much more significant issue. Still, it is a brave new world out there; perhaps researchers will heed my humble advice and do some homework in this area.
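To illustrate what tracking on a rolling basis might look like, here is a rough sketch assuming daily CSV files for SPY and a dollar ETF (UUP, chosen purely as an example inter-market series); the 63-day window is an arbitrary illustrative choice, not a recommendation:

```python
# Rough sketch: over a short trailing window, how does today's move in an
# inter-market series (UUP as a stand-in) relate to SPY's next-day return?
# File paths, instrument, and window length are illustrative assumptions.
import pandas as pd

spy = pd.read_csv("SPY.csv", index_col="Date", parse_dates=True)["Close"]
uup = pd.read_csv("UUP.csv", index_col="Date", parse_dates=True)["Close"]

rets = pd.DataFrame({"spy": spy.pct_change(), "uup": uup.pct_change()}).dropna()
rets["spy_next"] = rets["spy"].shift(-1)

# Rolling 63-day (roughly three-month) correlation between today's dollar
# move and tomorrow's SPY return
rolling_link = rets["uup"].rolling(63).corr(rets["spy_next"])
print(rolling_link.dropna().tail())
```

Watching the sign and strength of that rolling series drift over time is exactly the kind of instability that a ten-year average would wash out.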

3 Comments
  1. Mike
    August 12, 2010 9:45 am

    So if these intermarket relationships tend to be less stable than, say, MR or trending markets, one would need a way to “turn on” this strategy or relationship, and then get out of it when it stops being significant, correct? Do they tend to exist for long enough periods of time that we can find them and utilize them before they are gone, while avoiding noise and spurious relationships?

  2. CarlosR
    August 18, 2010 8:08 pm

    The answer to both of Mike’s questions is yes, in my opinion. But, doing this effectively is very tricky, and has to be supported by some sort of automatic search software. And even with that, there are a number of management parameters (system design issues) that have to be dealt with in order to avoid getting blindsided when an effect ceases working.

    I think David may be moving in this direction with the genetic algorithm alluded to in his latest (8/17) posting. But that’s just the start, if you want to seriously trade this general concept.

    • david varadi
      August 19, 2010 2:20 pm

      thanks carlos, I agree that these things can be tricky. they do not always need to be handled by a GA; in fact a combination of approaches is best. but you do need to incorporate equity curve management far more judiciously here, unlike with technical adaptation, which doesn’t require too much micromanagement. intermarket effects change a lot and spontaneously break down, while TA effects generally do not.
      best

      dv
