
CSS Bar Classification Scheme

August 17, 2010

This idea was originally inspired by Jeff Pietsch, now hailing from the great new site ETF Prophet (http://etfprophet.com/), who showed me an early variant of a "pattern match" to predict SPY returns. I have also learned some of the nuances of specific technical pattern classification from Rob Hanna's Quantifiable Edges newsletter; he is the expert on this subject matter: http://quantifiableedges.blogspot.com/. One of the weaknesses (and also a strength) of technical indicators is that they fail to account for unique patterns. They effectively treat situations as broad "concepts", a good point made by Michael at MarketSci: http://marketsci.wordpress.com/2010/08/03/indicators-as-concepts/. An RSI2, for example, classifies the last two days as either "overbought" or "oversold" by looking at closing price changes only. But this fails to capture some valuable information: 1) what were the highs and lows over the past few days, and where is the close in relation to these points? 2) what was the return today versus yesterday? 3) is today's range larger or smaller than yesterday's? Using an indicator like the DV2, which considers the close in relation to the high-to-low range, helps to complement the RSI2 in this sense, but it does not permit the full range of permutations to be analyzed.

What we need to do is analyze the bar pattern, that is, the pattern of the daily high, low, and close, to dig deeper into more accurate classification. That said, overly specific classification amounts to walking a fine line: the more specific we are about a given setup, the rarer the condition, and the less likely it is to repeat out of sample. However, we may end up finding some truly strong edges that will add value to a more general framework or to indicators like the RSI2. Many traders like to look at candlesticks, which tend to carry strange names, and specific patterns have a near-mythical status with traders. I prefer to classify things as objectively as possible, and I believe it is possible to construct a simpler and more robust framework. For this first version, we will keep things extremely simple. Every bar can be classified as having the following three qualities (a rough code sketch follows the list):

1) Bar Size: the size of a bar can be measured as the high minus the low in relation to past values of the high minus the low. This can be normalized using a ranking over the past 252 days: a) big bar: the high minus low is in the 75th percentile or greater; b) normal bar: the high minus low is in the 26th to 74th percentile; c) small bar: the high minus low is in the 25th percentile or lower. In total there are 3 different bar size types.

2) Range: the range of the bar can be measured two different ways: as the close in relation to the high-to-low range, or as the open in relation to the high-to-low range. To avoid complexity in our first examples, and also because the close is more important, we will only consider the closing range. The closing range can be measured using a stochastic formula: (close - low)/(high - low). This closing range can be normalized using a ranking over the past 252 days: a) top of range: the closing range is in the 90th percentile or greater; b) upper range: the closing range is in the 50th to 89th percentile; c) lower range: the closing range is in the 11th to 49th percentile; d) bottom of range: the closing range is in the 10th percentile or lower. There are 4 possible range patterns.

3) Price Change: this is the change in price from yesterday's close to today's close. This can be normalized using a ranking over the past 252 days. For the price change we will use four quartiles: a) big up: the price change is in the top quartile (75th percentile or greater); b) up: the price change is in the 2nd quartile (50th to 74th); c) down: the price change is in the 3rd quartile (26th to 49th); d) big down: the price change is in the bottom quartile (25th percentile or lower). There are 4 possible price change patterns.
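
To make this concrete, below is a minimal Python/pandas sketch of the three classifiers. This is only an illustrative translation: the pct_rank helper, the column names (High, Low, Close), and the exact bin edges are choices made for the sketch, not a canonical implementation.

import pandas as pd

def pct_rank(series, window=252):
    # Percentile rank (0-100) of today's value within the trailing window.
    return series.rolling(window).apply(
        lambda x: 100.0 * (x[:-1] < x[-1]).mean(), raw=True)

def bar_size(df, window=252):
    # 1) Bar Size: small / normal / big from the rank of high minus low.
    r = pct_rank(df["High"] - df["Low"], window)
    return pd.cut(r, bins=[-1, 25, 74, 101], labels=["small", "normal", "big"])

def closing_range(df, window=252):
    # 2) Range: (close - low)/(high - low), then ranked and binned.
    cr = (df["Close"] - df["Low"]) / (df["High"] - df["Low"])
    r = pct_rank(cr, window)
    return pd.cut(r, bins=[-1, 10, 49, 89, 101],
                  labels=["bottom", "lower", "upper", "top"])

def price_change(df, window=252):
    # 3) Price Change: close-to-close return split into quartile bins.
    r = pct_rank(df["Close"].pct_change(), window)
    return pd.cut(r, bins=[-1, 24, 49, 74, 101],
                  labels=["big_down", "down", "up", "big_up"])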

This classification scheme is simple and objective, and the process of normalization makes for easy comparisons across time. Counting the number of combinations leaves us with 3 x 4 x 4 = 48 different possible bar patterns. This is manageable on its own, but it gets dicey as you look at multiple days: a 3-day pattern can have 48 x 48 x 48 = 110,592 permutations! In practice, there are strong correlations between the pattern components: a big price change, a high or low closing range, and a big bar size are often related, which accounts for the correlation between mean-reversion indicators. This will limit the number of practical combinations. However, there are distinctions and divergences here that make sense, and thus it is worth considering all possible combinations. Nonetheless, this method is best handled by a genetic algorithm. We will post an example soon.
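
Building on the sketch above, tabulating the 48 buckets against next-day returns takes only a few lines. Again, this is just a sketch: the pattern-string format and the assumption of a daily OHLC DataFrame named df are not part of the scheme itself.

def bar_pattern(df):
    # Concatenate the three qualities into one of 3 x 4 x 4 = 48 labels.
    return (bar_size(df).astype(str) + "|" +
            closing_range(df).astype(str) + "|" +
            price_change(df).astype(str))

df["pattern"] = bar_pattern(df)
df["next_ret"] = df["Close"].pct_change().shift(-1)
# Warm-up rows rank as "nan"; a real test would drop them and check sample sizes.
stats = df.groupby("pattern")["next_ret"].agg(["count", "mean"]).sort_values("mean")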

21 Comments
  1. August 17, 2010 6:32 am

    Very interesting David, I am looking forward to the genetic algorithm implementation.

    • david varadi permalink*
      August 18, 2010 11:11 am

      thanks quantum. indeed should be interesting—it is the only way to handle the sheer number of combinations.
      best
      dv

  2. August 17, 2010 10:48 am

    Very interesting, indeed. Thank you.

    • david varadi permalink*
      August 18, 2010 11:12 am

      thanks t
      best
      dv

  3. August 17, 2010 2:31 pm

    Nice writeup. I like your approach to discretizing the bar size into range bins.
    This is one of the problems I have often pondered how best to tackle, because from a programming point of view a large white bar looks equivalent to a small white bar if we only use the ratio of the body to the outer min/max tails (or similar normalizing metrics). I've found that the relative magnitude of body ranges has a lot of information that would otherwise be lost.

    Anyways, I look forward to seeing the GA example and how you handle generalization and out of sample metrics.

    • david varadi permalink*
      August 18, 2010 11:15 am

      hi IT thanks, and nice work btw on your site. indeed normalization helps to solve a lot of problems that would otherwise create a lot of noise. I don't know much about candlesticks and just created this method because I wanted something more intuitive to me personally.
      best
      dv

  4. August 18, 2010 2:57 pm

    David,

    As a heads up to you, I have just scraped together a short run of your concept on the Qs. It performed remarkably well OOS on a relatively simple learner using gain as the rough metric. The rough, arbitrary period I tested on at the moment was training set = 500 days, testing = 100. Gain on the actual test set was -38%, while the predicted test set gain was +64%. Before anyone gets too excited, the confusion matrix only had a net win rate of 57%, but the down success rate was very high (71%, while up was 53%) and the overall period was down; thus, the overall gain was rather high.

    Obviously, a more thorough test is warranted, but just wanted to share my initial optimism on your idea.

    Cheers,
    IT

    • david varadi permalink*
      August 18, 2010 3:38 pm

      hi IT, sounds good–I would be happy to post your findings if you have a simple to understand format with which to explain things.
      best
      dv

  5. Joshua Chance permalink
    August 19, 2010 12:02 pm

    Very interesting, David. Looking forward to part 2.

    This sparked a whole bunch of ideas for me, as I’ve had something like this on the backburner for a while. Here’s how I would at first approach it. If anything sounds interesting to you by all means use it in any future post.

    Bar Size:

    The same except use the log of high minus low for further normalization.

    Range:

    Complicated, here goes… I would have 3 bins, which in combination with the next classifier would really be 6 bins. Bin 1 would be the middle 20% of range (41st to 60th percentile). Bin 2 would be if the close was in either the 61st to 80th or the 21st to 40th. Bin 3 would be the 1st to 20th or the 81st to 100th. Also, considering that on an extremely low range day a high or low range close doesn't seem as meaningful, I would increase the middle bin size (Bin 1) if the percent rank of the day's range was below some threshold. I think the bar (no pun intended) should be set higher when it comes to "range capture" on low range days.

    Price Change:

    I would use open-to-close percent price change with 4 bins, although again, more complicated. If it's an up day and (close/open) is greater than the median up day for the past year, it goes in Bin 4 (76th+ percentile), otherwise Bin 3 (51st to 75th). And the same for down days.

    So by trading close-to-close % for open-to-close % we reduce the number of possibilities to 36. I would probably go further and remove the range classifier completely and replace it with a 3-bin volume classifier that's conditional on up or down days, comparing up-day volume to past up-day volume and the same for down days. I think that since volume, range, and open-to-close absolute price change are correlated, you wouldn't lose that much info relative to what you might gain from a relative volume classifier.
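
    For concreteness, here is a rough pandas sketch of this conditional open-to-close binning (assuming pandas is imported as pd; the min_periods floor and the bin labels are illustrative assumptions, not part of the idea itself):

    def conditional_change_bin(df, window=252):
        # Bin open-to-close change against the trailing median of *like* days.
        oc = df["Close"] / df["Open"] - 1.0
        up = oc > 0
        med_up = oc.where(up).rolling(window, min_periods=20).median()
        med_dn = oc.where(~up).rolling(window, min_periods=20).median()
        bins = pd.Series("bin3", index=df.index)  # up day, at or below median up day
        bins[up & (oc > med_up)] = "bin4"         # big up: beats the median up day
        bins[~up] = "bin2"                        # down day, above the median down day
        bins[~up & (oc < med_dn)] = "bin1"        # big down: worse than median down day
        return bins                               # warm-up rows are unreliable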

    Anyway, that’s my two cents. Your posts always spark mini-firestorms of geeky ideas for me, so thanks again for that!

    Cheers,

    Josh Chance

    • david varadi permalink*
      August 19, 2010 2:19 pm

      Joshua, these are some excellent ideas and thank you very much for sharing. I will try to incorporate these into a broader schematic at some point. I especially like the idea you have with the ponzo time machine as this is along the lines that I was planning to go with this.
      best
      dv

      • Joshua Chance permalink
        August 19, 2010 3:17 pm

        Thanks David.

        Yeah, ever since I came across Jeff’s Ponzo post I’ve been postponing attempting to implement it cross-sectionally as opposed to his time series approach. I think “percent ranking” market breadth, economic indicators, and market regime states with the Ponzo method might have some merit, if not for outright market timing then possibly for “factor timing” with quant based market neutral strategies. Btw, I remember reading a while ago that you were going to do a post or two on the fundamentals quant stockpicker side of the coin. Here’s one reader that’s anxiously awaiting that, if you’re still planning on it.

      • Joshua Chance permalink
        August 20, 2010 1:59 pm

        Another thought…

        Perhaps including the next-day returns of past bars that are the least similar would add info. So if the past bars that are most similar had mostly positive next-day returns, and the bars that are least similar (or the most opposite) had mostly negative returns, then your confidence in the next day's forecast would be higher. A potential problem with this is it doesn't account for asymmetric patterns of up vs. down days. Also, volume and range size patterns could be best for predicting next-day volatility and might just add noise when predicting direction.

  6. Joshua Chance permalink
    August 19, 2010 1:28 pm

    Another idea that goes in the opposite direction from discrete pattern classification…

    How about comparing a current bar's similarity, or lack thereof, to past bars. Calculate percent ranks for volume, range, price change, etc. Then, using an array and a loop, calculate the difference between the current bar's volume percent rank and each past bar's volume percent rank. Do the same for the ranks of your other variables and sum the absolute values of each variable's rank difference. Sort in an array, and use the next-day return of the past bar that's most similar as your forecast.

    Similar to Jeff Pietsch's "Ponzo's Time Machine" but for one bar. Perhaps take the average of the top 3 to 10 similar bars' returns as your forecast, assuming that a minimum number agree on the direction and/or the Z-score is high (or low) enough.
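
    As a bare-bones sketch of the one-bar version (the feature set, k, and the use of an L1 distance in rank space are illustrative choices; pct_rank is the helper sketched in the post, and df is assumed to have a Volume column):

    def similarity_forecast(df, k=5, window=252):
        # Average next-day return of the k past bars most similar to today's.
        feats = pd.DataFrame({
            "volume": pct_rank(df["Volume"], window),
            "range": pct_rank(df["High"] - df["Low"], window),
            "change": pct_rank(df["Close"].pct_change(), window),
        }).dropna()
        next_ret = df["Close"].pct_change().shift(-1)
        today, history = feats.iloc[-1], feats.iloc[:-1]
        # Sum of absolute rank differences = distance to today's bar.
        dist = (history - today).abs().sum(axis=1)
        nearest = dist.nsmallest(k).index
        return next_ret.loc[nearest].mean()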

    I hope that’s clear because I’d be very interested in what you think.

    Josh

  7. August 21, 2010 11:38 am

    The problem of too few matches according to combinatorial mathematics can be largely addressed using nearest neighbor analysis. One does not need to know they are in Topeka to know that they are in Kansas. Thus, the 48-cell matrix, or whatever large size it happens to be, can be collapsed into whatever smaller discrete groupings based on like vocabulary that one wishes. I will do a post on this at http://www.etfprophet.com in the coming weeks if there is interest.

  8. August 21, 2010 2:06 pm

    Some good thinking over here. I’ve been tinkering with it a bit. A few observations for those who are running some simulations.

    1) I made a few alterations to the inputs. The one attribute that was return (Price Change) was changed to [open minus low] relative to [high minus low]. This was done to try to capture more of the candlestick information that was missed by only using the close-to-low (Range) bin.
    2) One problem that surfaces is that if your training set occurs during a high volatility regime, you may find many high prob setups during the training phase dependent upon ‘high’ bar_range in that regime, that do not exist at all in the testing set. So you might consider how you are defining the percentile limits in the testing set, and/or how the train/test set window percentiles are defined in sample/out of sample, as well as train/test set lengths. Say for example, you have input attributes H,L,L that are very high probability in the training set, but the attribute H is not present in the testing set; you may only have M,L,L and L,L,L in test set depending on regime conditions. Smaller train/test windows will somewhat avoid this, but at the cost of less observations to rely on.
    3) I found many high-probability patterns in the training phase (55% to 80%, with a minimum of 11 occurrences required; patterns ranged between 11 and 130 occurrences). What I found was that there was a good relationship between in-sample and out-of-sample hit rates over many ranges. The major problem is that even though the hit rate is high both in and out of sample, the learner is only focusing on nominal attributes and puts no emphasis on the magnitude cost of errors.
    I.e. you might have a consistent 60% OOS success rate across samples, but only one or two of the errors in the test sample are large price changes, causing the entire system equity curve to be shifted below the b&h curve. If you are using a GA approach, having control over the fitness function might give you some help in this area.

    Those are some observations. I’ve discussed some other alternate approaches around some of the issues (including feature reduction, defining features, and generalization) in past blogs.

    Cheers,
    IT

    • Joshua Chance permalink
      August 21, 2010 6:42 pm

      Yeah, using a simple percent rank of a bar's range, or even a percent rank of, say, 10-day annualized volatility, will present problems when you're in a different regime. I'd recommend further normalization of all inputs before they're ranked. For historical volatility I use:

      ( HV(21) – HV(65) ) / HV(65)

      And then rank this against the past 252 bars.
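
      In code form, that normalization might look something like this (reusing the pct_rank helper sketched in the post; the use of log returns and the annualization factor are illustrative assumptions):

      import numpy as np

      def hv_rank(close, window=252):
          # Rank of 21-day HV relative to 65-day HV, per the normalization above.
          ret = np.log(close / close.shift(1))
          hv21 = ret.rolling(21).std() * np.sqrt(252)
          hv65 = ret.rolling(65).std() * np.sqrt(252)
          return pct_rank((hv21 - hv65) / hv65, window)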

  9. August 22, 2010 11:31 am

    Nice third derivative JC.

    • Joshua Chance permalink
      August 22, 2010 3:34 pm

      Thanks Jeff,

      Btw, I’m definitely interested in a post on nearest neighbors, especially if there will be any Tradestation code.

  10. October 9, 2010 10:28 am

    I was going to suggest using a k-nn algorithm as well. Beaten to it long ago by Mrkt_Rwnd! I’m catching up on my reading for a break.

  11. QuantFX permalink
    November 21, 2010 3:03 pm

    Hi David,
    Did you ever post the code for the Bar Classification idea?
    Thanks

