
Part 2: Using a Self-Similarity Metric with Intraday Data to Define Market Regimes

April 17, 2015

The Self-Similarity metric has been a popular series. Recently the original post was shared on Jeff Swanson’s popular site System Trader Success, which features a wide variety of thought-provoking articles on trading system development and is worth reading. Jeff has also posted some TradeStation code for the indicator which some readers may find valuable. In a great example of vertical blogging, a very interesting analysis of the Self-Similarity metric was done by Mike Harris on his Price Action Blog, which also has many interesting articles and is worth following.

I have to say that one of the most rewarding aspects of this blog has been my interaction with readers (and fellow bloggers) at various levels. I have developed several relationships over the years, and some of these developed into new business opportunities. Many years ago while actively running CSS Analytics, I was fortunate to work with a core group of very talented and dedicated people. It has been nice to see that many of these individuals have become quite successful in the quant world. One of the original members of that talented group was David Abrams. We have spent a lot of time on system development over the years, and although we no longer actively collaborate we still manage to keep in touch. David reached out to me with some visuals and analysis on the chaos/stability self-similarity indicator I recently presented on the blog. I suggested that we post this for CSSA readers, and he was kind enough to agree to share.

Dave Abrams is Director of Quantitative Strategies at Wilbanks, Smith and Thomas Asset Management in Norfolk, VA, where they design risk-managed fund models and S&P indices (about $400M of the firm’s $2.5B in AUM is quant). He was formerly part of a group doing quant research at CSS Analytics.

Visualizing The Chaos Stability Indicator

It is useful to visualize DV’s new self-similarity regime method in a TradeStation chart. Here is the strategy and indicator using the default parameters (N = 10 days, 60-day average, 252-day percent rank length). I transformed the indicator by subtracting 0.5 to center the values around 0 and displayed it with a color-coded histogram.

Here is what the Chaos Stability indicator looks like as a buy/sell signal on SPY:

[Chart: Chaos Stability as a buy/sell indicator on SPY]

The indicator is currently in a new “sell” position, as the value is below zero. This perhaps reflects the more random and sideways market movement that we have had over the past few months. As with any indicator, this bearish signal is not perfect: from April through early July of 2014 the market made good upward progress despite the chaos stability values being mired deep in the red. It is useful to look at some other charts to get a sense of when the buy and sell signals occur.



[Chart: additional examples of Chaos Stability buy and sell signals]


It is hard to discern what is going on without careful inspection, but the chaos/stability indicator seems to flash buy signals at swing lows where the market is moving persistently downward, in persistent bull moves upward out of corrections, or at the top of established rallies. Sell signals tend to occur near market tops where things get choppy, or in areas of congestion where the market is moving sideways within its long-term trend. Since persistency occurs in both up and down moves, the signals are uncorrelated or even negatively correlated to a trend-following strategy, as highlighted in the original post. This is important for those looking to diversify a trend strategy on a single asset.

Smoothed Chaos Stability Metric

One of the challenges I noticed when looking at the charts was that the indicator frequently switched from buy to sell- especially as the value hovered close to zero. Smoothing seemed to be a logical approach to reduce some of the whipsaw trades and reduce turnover. To address this issue, I applied John Ehlers’ Super Smoother method to the Chaos Stability measure. Notice the indicator below. This reduced the number of trades by 11% and the Profit Factor went up by 6%.
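For readers who want to experiment with the same idea, below is a minimal Python sketch of a common formulation of Ehlers’ two-pole Super Smoother applied to a generic series. The work above was done in TradeStation; the function name and the default 10-bar period here are illustrative assumptions.

```python
import math
import numpy as np

def super_smoother(series, period: int = 10) -> np.ndarray:
    """Two-pole Super Smoother filter (a common Ehlers formulation)."""
    x = np.asarray(series, dtype=float)
    a1 = math.exp(-1.414 * math.pi / period)
    b1 = 2.0 * a1 * math.cos(1.414 * math.pi / period)
    c2, c3 = b1, -a1 * a1
    c1 = 1.0 - c2 - c3
    out = np.copy(x)  # seed the first two values with the raw series
    for i in range(2, len(x)):
        # filter the average of the current and prior value, plus two feedback terms
        out[i] = c1 * (x[i] + x[i - 1]) / 2.0 + c2 * out[i - 1] + c3 * out[i - 2]
    return out
```

Applying a filter like this to the centered Chaos Stability histogram (rather than to price) is what reduces the back-and-forth flips around zero.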


Walk Forward Analysis

One of the challenges of new indicators is that they tend to promise a lot but fail to deliver in the real world. Often the reason this happens is because the examples presented by the developer are tuned to a specific set of parameters- in other words, the indicator is overfit and is not robust. So is DV’s innovative new indicator just lucky, or is it stable? One of the best ways to evaluate whether an indicator has any predictive power is to perform a walk-forward test. An indicator with no predictive power will tend to fail such tests. For this evaluation, I ran a walk-forward test in TradeStation. This module continuously re-optimizes the parameters so that at each period of time we are using out-of-sample results. We can get greater confidence in the strategy if it performs well in the walk-forward test. The results are below:

[Table: walk-forward test evaluation results]

Based on these criteria for evaluation, the DV Chaos/Stability indicator passes with flying colors. In addition to passing the walk-forward test, the logic of the indicator is also sound- an often overlooked but important qualitative assessment. In our quantitative research methodology we always apply a walk-forward analysis and a qualitative assessment. The hypothetical equity curve from the walk-forward results, showing each out-of-sample period over time, is presented below.


TradeStation Walk-Forward Analyzer performance graph. The results are hypothetical, are NOT an indicator of future results, and do NOT represent returns that any investor actually attained.
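For readers who want to try the same style of evaluation outside of TradeStation, here is a generic Python sketch of the walk-forward idea. The window lengths and the optimize/backtest helpers are placeholders, not part of the original test.

```python
def walk_forward(data, optimize, backtest, in_sample: int = 756, out_of_sample: int = 126):
    """Roll an in-sample optimization window forward, trading each subsequent
    out-of-sample window with the parameters fit on the prior in-sample data."""
    results = []
    start = 0
    while start + in_sample + out_of_sample <= len(data):
        train = data[start : start + in_sample]
        test = data[start + in_sample : start + in_sample + out_of_sample]
        params = optimize(train)                 # fit parameters on in-sample data only
        results.append(backtest(test, params))   # evaluate those parameters out-of-sample
        start += out_of_sample                   # roll the window forward
    return results
```

Stitching the out-of-sample segments together gives an equity curve like the one shown above, where no period benefits from parameters fit on its own data.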

Good quantitative research is a combination of different but stable ideas which either confirm each other or add diversity to the overall model. I agree with Ray Dalio that 15 uncorrelated return streams is the holy grail of investing. DV’s chaos stability regime model could be a viable uncorrelated candidate.


The research discussion presented above is intended for discussion purposes only and is not intended as investment advice, recommendation of any particular investment strategy including any of the depicted models. There are inherent limitations of showing portfolio performance based on hypothetical & back-tested results. Unlike an actual record, hypothetical results cannot accurately reflect the effect of material economic or market factors on the price of the securities, and therefore, results may be over or under-stated due to the impact of these factors. Since hypothetical results do not represent actual trading and may not accurately reflect the impact of material economic and market factors, it is unknown what effect these factors might have had on the model depicted above. Past performance, whether based on hypothetical models or actual investment results, is not indicative of future performance.

Using a Self-Similarity Metric with Intraday Data to Define Market Regimes

March 13, 2015

[Images: the Koch Snowflake and the Sierpinski Triangle]

The images above are the famous Sierpinski Triangle and the Koch Snowflake. These objects are “self-similar”, which means that examination at finer levels of resolution will reveal the same shape. Both are examples of “fractal” geometry, and are characteristic of many phenomena in the natural world such as mountains, crystals, and gases. Self-similar objects are associated with simplicity, redundancy and hence robustness. Self-dissimilar objects are associated with complexity and chaos. Several mathematicians (including Mandelbrot) have observed that markets are clearly non-Gaussian or non-normal. Markets exhibit “fat tails” and have a distribution that shares more in common with a Levy distribution than the normal distribution which is used frequently in quantitative finance. But the market does not have a constant distribution- at times the market behavior is fairly normal in character, while at other times it is wild and unpredictable. The question is how we can effectively determine which regime the market is in so that we can apply the appropriate trading strategies to mitigate risk.

The essence of self-similarity and complexity is to compare the whole to its component parts. For example, let’s take a square that is divided into four separate squares of equal size. The area of the larger square is equivalent to the sum of the areas of each of its component squares. The same of course is true of a one-dimensional line, which is equivalent to the sum of its parts. One of the methods of identifying self-similarity in the stock market is to look at the range, or the difference between the highs and the lows. We would expect that in a perfectly self-similar market the longer range would be equivalent to the sum of the ranges measured over a smaller interval. The more chaotic the market is, the greater the difference will be between these two measures. Such market conditions would be characterized by a large ratio between the sum of the smaller ranges and the longer measure of range. Essentially this relationship is called fractal dimension and is a measure of complexity.

There are many different ways to measure this, including using the Hurst exponent, but the problem I have always found in my own humble research is that the suggested thresholds defined by specific absolute values do not seem to reflect the information consistent with theory. I have often found that relative measures tend to be more robust and consistent- much the same way that the magnitude of past returns has less predictive value than the relative rank of past returns. Relative measures tend to be more stationary than absolute values. To compute this measure of self-similarity I use the intraday range (high minus low) versus a longer range window. Here is how it is calculated:

1) find the high minus the low for each day going back 10 days
2) take the sum of these values (the sum of the pieces)
3) find the 10-day range by taking the maximum of the highs over 10 days and subtracting the minimum of the lows over 10 days (the whole range)
4) divide the sum of the pieces by the whole range- this is a basic measure of fractal dimension/complexity
5) take the 60-day average of the 10-day complexity values- this is the quarterly “chaos/stability” metric
6) take either the 252-day normsdist of the z-score or the percentile ranking of the chaos/stability metric
7) values above .5 indicate that the market is in a “chaos” regime and is much less predictable and non-stationary; values below .5 indicate that the market is stable and much more predictable
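Putting the steps together, a minimal Python/pandas sketch might look like the following. The original work was not done in Python; the “High”/“Low” column names are assumptions, and step 6 here uses the percentile-ranking variant rather than the normsdist of the z-score.

```python
import pandas as pd

def chaos_stability(df: pd.DataFrame, n: int = 10, smooth: int = 60, rank_len: int = 252) -> pd.Series:
    daily_range = df["High"] - df["Low"]                                     # step 1
    sum_of_pieces = daily_range.rolling(n).sum()                             # step 2
    whole_range = df["High"].rolling(n).max() - df["Low"].rolling(n).min()   # step 3
    complexity = sum_of_pieces / whole_range                                 # step 4
    metric = complexity.rolling(smooth).mean()                               # step 5
    # step 6: trailing percentile rank of today's value versus the past 252 days
    rank = metric.rolling(rank_len).apply(lambda w: (w <= w.iloc[-1]).mean(), raw=False)
    return rank                                                              # step 7: >0.5 = "chaos", <0.5 = "stable"
```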

When the market is “stable” it is easier to apply effective quantitative trading systems. When the market is in “chaos” mode it is not necessarily volatile- rather, it is too complex for standard measurement and the calibration of basic linear predictions. Let’s look at how this measure performs over a long time period using the S&P500 as a test set. The high and low values are generally the same until about 1963, which is when we will begin this test. Here is how the market performed in both regimes over the last 50+ years:



The market performs quite poorly in “chaos” conditions, and seems to make all of its long-term returns in the “stable” regime. Note however that the volatility is not materially different between the two regimes- this means that we are capturing something different than just high and low volatility market conditions. Furthermore, the correlation between the chaos indicator signals and, for example, the basic trend signal of a 200-day moving average is -0.116. This means that we are capturing something different than just the market trend as well. The indicator is meant to be used to define regimes rather than as a trading signal to go long or short, but clearly there are some interesting attributes worthy of further exploration and refinement.

Conditional Percentile Channel “R” Code

March 11, 2015

The code in R for Conditional Percentile Channels is now available on Michael Kapler’s Systematic Investor blog. The original code was contributed by long-time reader Pierre Chretien and subsequently verified by Michael. Pierre has been generous enough to share code from the blog material several times over the past few years. Thank you both for sharing this code with our readers!

Conditional Percentile Channels

February 20, 2015

Ilya Kipnis at Quantstrat recently posted some R code attempting to replicate the ever-popular Percentile Channel Tactical Strategy. The results are similar but not exactly in line- which may have to do with the percentile function as Ilya has pointed out in the comments. In either case, the general spirit remains the same and readers are encouraged to take a look at his analysis of the strategy.

In quantitative finance there is the concept of “Conditional Value at Risk” (CVaR), which is a calculation frequently used in risk management. The general idea is that you are trying to capture the expectation beyond a certain tail of the distribution. The CVaR is preferred to the value at risk because it is more comprehensive than looking at just one value. Likewise, Percentile Channels are similar to value at risk in that context, as are traditional Donchian Channels, which only look at one reference price. Perhaps a logical improvement would be, like CVaR, to use the average of the prices beyond a certain percentile threshold. This is more like calculating the expected upper or lower bound for prices. Furthermore, to account for the fact that recent data is progressively more important than older data, we can weight such prices accordingly. In theory, the most important prices are at the extremes and should also be weighted as such. So Conditional Percentile Channels are simply a twist on Percentile Channels incorporating these two ideas. Here is how it would be calculated:

[Figure: Conditional Percentile Channel calculation]
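Since the exact formula lives in the figure, here is only a rough Python sketch of one possible interpretation: prices beyond the threshold are averaged with weights based on both recency and distance beyond the threshold. The linear weighting scheme and the function name are my own assumptions, not the precise specification from the figure.

```python
import numpy as np

def conditional_upper_channel(window, q: float = 0.75) -> float:
    """Weighted average of prices above the q-th percentile (one possible interpretation)."""
    prices = np.asarray(window, dtype=float)
    threshold = np.quantile(prices, q)
    mask = prices >= threshold
    selected = prices[mask]
    recency = (np.arange(len(prices)) + 1.0)[mask]   # newer bars get larger weights
    extremeness = (selected - threshold) + 1e-9      # more extreme prices get larger weights
    weights = recency * extremeness
    return float(np.sum(weights * selected) / np.sum(weights))

# the lower channel would mirror this with q = 0.25 and prices at or below the threshold
```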

Basically you select thresholds like .75 and .25, and then you weight the prices that are beyond those thresholds according to both position in time (like a weighted moving average) and distance to the max or min. This gives you a more accurate expected upper or lower bound for support and resistance (at least in theory). I know I am going to regret this, but using the same Percentile Channel Tactical Strategy from the last few posts, I substituted in the Conditional Percentile Channels using the same thresholds of .75 and .25. All other parameters are identical. Here is how that looks:

[Chart: Conditional Percentile Channel strategy results]

Looks like a slight improvement over the original strategy in both returns and risk-adjusted returns. In general, I just like the concept better since it condenses more information about support/resistance than either Donchian Channels or Percentile Channels. It also represents a good complement to moving averages which capture central tendency rather than price movement at the extremes. So there you have it- yet another twist on using channels.

Percentile Channel Strategy Replication

February 16, 2015

Michael Kapler of the always excellent Systematic Investor blog has moved his publishing to GitHub to make it easier to post code. This has flown under the radar (even to me), and we are all grateful that he is back to publishing. He was able to reproduce the “Simple Tactical Asset Allocation with Percentile Channel Strategy” in his recent post here.

The table below compares the original strategy (channel rp) to other benchmarks, including: 1) ew: equal weighting of the assets in the portfolio; 2) rp: risk parity using the assets in the portfolio; 3) channel ew: the percentile channel TAA strategy using equal weighting; and 4) QATAA: the application of Mebane Faber’s trend-following strategy cited in his now-famous paper, A Quantitative Approach to Tactical Asset Allocation (in this case QATAA uses the same underlying assets and cash allocation as the percentile TAA strategy). Of course QATAA is one of the inspirations for the strategy framework, and Meb always manages to publish interesting ideas on his World Beta blog. To avoid issues with different sources of extended data, Systematic Investor begins the test in 2010 using the underlying ETF data to show how the strategies have performed in the current bull market. If you are getting results in line with this test then you can feel comfortable that you have the details correct; if not, you can use R and the code provided by Systematic Investor in the post.

[Table: channel strategy replication comparison]

After comparing results, Michael and I show a near-identical match (I also get a Sharpe ratio of 1.42 and a CAGR of 8.93%) – a relief after all the commotion caused by the initial post (which was addressed in my now amusing rant over here). The original strategy is the best performer of the bunch since it applies multiple time frames as well as normalized bet sizing via risk parity (common for most trend-followers). As I have stated before, one of the reasons I like the Percentile Channel approach is that the signals are likely to be slightly different from what most asset managers and investors are using.

New Channel Concepts: Volatility-Adjusted Time Series

February 12, 2015


In the last several posts, I introduced some different methods for channel strategies including Percentile Channels. A simple way to potentially improve (or at least take a different approach to) a Donchian channel strategy is to use a different price input to generate trading signals. As stated in Error-Adjusted Momentum Redux, using any type of risk adjustment tends to improve performance by reducing some of the noise. That is easy to apply when using returns, but how do we apply this concept to a price-based strategy? Actually it is quite simple: using a fixed target percentage- say 1%- you multiply all returns since inception by the target divided by some lagged standard deviation. Then you create an index of those returns which becomes the new price series (being careful to avoid any lookahead bias). This volatility-adjusted index is what generates the signals for your channel strategy instead of the traditional price history. Of course in backtesting, you receive returns on the actual price history and not on the volatility-adjusted index. As a final point of clarification, you are not changing your position size as a function of volatility; instead you are just changing the input price.
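As an illustration only, here is a minimal Python/pandas sketch of constructing such a volatility-adjusted index from a price series. The 1% target and 20-day lookback are example defaults consistent with the text; the function name is an assumption.

```python
import pandas as pd

def volatility_adjusted_index(close: pd.Series, target: float = 0.01, lookback: int = 20) -> pd.Series:
    returns = close.pct_change()
    # shift the trailing vol estimate so today's scaling uses only past data (no lookahead)
    trailing_vol = returns.rolling(lookback).std().shift(1)
    scaled = returns * (target / trailing_vol)
    # compound the scaled returns into a new price series used only to generate signals;
    # backtest returns are still earned on the actual instrument
    return (1.0 + scaled.fillna(0.0)).cumprod()
```

The channel highs and lows are then computed on this index rather than on the raw price, while fills and returns come from the real price series.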

So let’s compare a traditional 120-day Donchian Channel strategy that buys the S&P500 on new 120-day highs and sells and goes to cash (SHY) on 120-day lows versus the same strategy using a volatility-adjusted time series to generate signals. The lookback is a 20-day standard deviation to adjust daily returns to create the index (with a .75% vol target- note that the choice of target doesn’t alter performance, just the scale of the index). For this test we use SPY with data from Yahoo, and SHY with data extended from Morningstar. Note that the red line is NOT the equity curve of the strategy, but rather the Volatility-Adjusted Index created using SPY. The performance of the strategy using the index for signals is also highlighted in red:


In this case, performance is improved using the volatility-adjusted index for signals versus the actual SPY price. Here is the same strategy using DBC with the ETF data only (since the choice of extension of DBC can create significant variability in performance):


The strategy shows some promise and generates different signals at certain times than the traditional strategy. Perhaps using different risk metrics such as acceleration, or using other filtering techniques, may hold even more promise. This same concept can be applied with moving averages or any other type of price-based signal. Just another concept for the diligent researcher to experiment with. Perhaps applying fractals to generate charts may be another useful avenue of exploration.

A “Simple” Tactical Asset Allocation Portfolio with Percentile Channels (for Dummies)

February 8, 2015


I actually received a large volume of what could best be characterized as “hate mail” for one of the previous posts on percentile channels. In reading these comments I was reminded of Jimmy Kimmel’s funny segments where celebrities read mean tweets about themselves. While I did not publish these comments (I do not wish to alienate or prohibit those people who are kind enough to comment on the blog), needless to say most of them implied that I had presented a fraudulent strategy that badly misrepresented true performance. Since exact details were not provided on the strategy, this is a difficult claim to justify. As a mountain of such comments piled in, I decided that it would be useful at this time to clarify how the allocations were calculated. The initial strategy was developed using a pre-built testing platform in VBA, so presenting the details for how the strategy calculates positions is easier than taking the time to build it in a spreadsheet.

It is rare that I present a formal strategy on this blog for several good reasons: 1) this is a blog for new ideas to inspire new strategies, not for sharing code or spoon-feeding people with recipes; 2) people actually pay money for strategies in the quantitative investment business, and giving one away for free seems like a pretty good deal. Who ever complains about free food? Hint: No one. 3) whenever I post strategies or indicators I get flooded with demands for spreadsheets and code. The tone of such emails is often terse or even desperate and implies that I have some sort of obligation to assist readers with replication or implementation on their end. Since the blog is free and competes for my often limited time while engaging in unrelated but paid business activities, meeting such demands is difficult to justify. I would comment that even the authors of academic articles in reputable journals rarely provide either: a) easy instructions for replication- in fact it is notoriously difficult to replicate most studies since either the instructions are vague or details are missing; or b) assistance/support- authors rarely if ever provide assistance with replication and rarely answer such requests, even when their articles are supposed to contribute to some body of research (unlike a blog). I would like to think that CSSA has been considerably more generous over the years.

As a former professor of mine used to say: “I think you are responsible for doing your own homework and using your own brain”- perhaps a novel concept to those who simply wish to coast off the hard work and individual thinking of others. So without turning this into a prolonged rant, here is a “simple” (I will refrain from using that word in the future after the latest experience) schematic of how allocations are calculated for the strategy:

A couple of key details first: the strategy was rebalanced monthly (allocations calculated and held through the month), not daily. Also, the strategy is LONG ONLY, which means that any negative positions are ignored. The channel score or signals in the initial calculation can be long or short, i.e. 1 or -1. This is probably the key reason why readers were unable to replicate the results, since they likely used 1 or 0.

[Figure: schematic of the tactical allocation calculations]

Notice that negative positions are used to calculate allocations but are ignored in the final calculations. Furthermore, the cash position is allocated as an excess to the total of active allocations and is not included in the risk parity position sizing (which would otherwise make SHY a huge position due to its low volatility). I hope this helps readers implement/duplicate the strategy. Keep in mind that prior to 2006, some of the ETFs used had to be extended with other data which readers may not have access to. However, using ETF data only yields a Sharpe ratio of about 1.5. Beyond this, readers are on their own. Good luck!
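To make the schematic concrete, here is a minimal Python sketch of the final allocation step for a single rebalance date, assuming the composite channel scores (between -1 and +1) and trailing volatilities of the risky assets have already been computed. The names and structure are illustrative and are not the original VBA implementation.

```python
import pandas as pd

def allocate(channel_scores: pd.Series, trailing_vol: pd.Series) -> pd.Series:
    """Single-date allocation: risk-parity sizing, long-only, cash takes the residual."""
    rp = 1.0 / trailing_vol
    rp = rp / rp.sum()                   # risk-parity weights over the risky assets only (SHY excluded)
    raw = channel_scores * rp            # scores can be negative at this stage...
    risky = raw.clip(lower=0.0)          # ...but the strategy is LONG ONLY, so negatives are ignored
    alloc = risky.copy()
    alloc["CASH"] = 1.0 - risky.sum()    # cash is the excess over the total active allocation
    return alloc
```

For example, if every asset had a composite score of +1, the portfolio would simply hold the risk-parity weights with zero cash; as scores fall or turn negative, the freed-up weight flows into the cash bucket.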

