
Filtering White Noise

April 23, 2013

Most asset return processes can be characterized as containing a primary trend, mean-reversion around that trend, and a certain amount of random noise. Econometricians classify these elements using the Hurst exponent as either: 1) black noise (trending, positive autocorrelations, Hurst > 0.5); 2) pink noise (mean-reverting, negative autocorrelations, Hurst < 0.5); or 3) white noise (no trend or mean-reversion, low/insignificant autocorrelations, Hurst = 0.5). Intuitively, traders wish to capitalize on either the trending or the mean-reverting behaviour, often at different time frames, since both are part of the same unified process (trends tend to occur at longer time frames, and mean-reversion around the trend at shorter time frames). The key obstacle for both styles is to eliminate or minimize the impact of white noise on the indicators used to measure trending or mean-reverting behavior. Failure to do so results in poor trading performance due to false/random signals.
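
As a rough illustration (my own sketch, not from the original post), the Hurst exponent can be estimated from how the dispersion of price changes scales with the measurement lag; for a fractal process the standard deviation of k-lag differences grows roughly like k to the power H. A minimal version, assuming numpy:

```python
import numpy as np

def hurst_exponent(prices, max_lag=20):
    """Crude Hurst estimate: std(p[t+k] - p[t]) scales like k**H,
    so H is the slope of log(std) against log(lag)."""
    lags = np.arange(2, max_lag)
    tau = [np.std(prices[lag:] - prices[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

# A pure random walk (white noise increments) should score near 0.5.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(10_000))
print(round(hurst_exponent(walk), 2))
```

Readings above 0.5 would indicate the black-noise (trending) regime, below 0.5 the pink-noise (mean-reverting) regime.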

Consider two charts of the same time series (from Jonathan Kinlay’s good blog): one is a black noise process, while the other contains a white noise process:

Black Noise Fractal Random Walk

white noise random walk

In the first chart, a black noise process, it is easy to see how profitable it might be to trade the underlying with a simple moving average; there is very little noise to speak of that is not self-reinforcing (trending). In the second chart, a white noise process, you can see the similarity to real financial time series: there is a fair amount of random noise, and it would be more difficult to trade with, for example, a moving average. The chart below shows a pink noise process; it will look familiar to those who trade pairs, resembling the log of the ratio of two cointegrated asset prices (e.g. one sector ETF versus the same sector from a different ETF provider).

pink noise

Notice that this process appears to have a stationary mean and predictable negative autocorrelation. It would be nearly impossible to trade this series with a moving-average-based trend strategy. However, it would be an ideal dataset to trade using runs (i.e. buy on a down day, short on an up day). In practice, time series data contains elements of all three types of noise, so what we want to do is filter out the white noise, which is less predictable and obscures otherwise predictable asset behavior.
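
To make the runs idea concrete, here is a toy sketch (mine, not from the post) that simulates a negatively autocorrelated return series and applies the buy-after-a-down-day, short-after-an-up-day rule; the AR(1) coefficient and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(5_000)

# AR(1) returns with a negative coefficient: moves tend to reverse,
# mimicking a pink-noise (mean-reverting) series.
ret = np.empty_like(eps)
ret[0] = eps[0]
for t in range(1, len(eps)):
    ret[t] = -0.4 * ret[t - 1] + eps[t]

# Runs rule: long after a down day, short after an up day, hold one day.
position = -np.sign(ret[:-1])
pnl = position * ret[1:]
print(f"average daily pnl: {pnl.mean():.3f}")
```

On a series with genuinely negative autocorrelation the average daily pnl comes out positive; on true white noise it would hover around zero.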

A recent paper by a colleague, George Yang, sheds light on how to filter out random/white noise elements and also shows the practical impact on trading system profitability. The paper recently won a prize in the prestigious Wagner Award competition, which is run through NAAIM. Mr. Yang shows that one can filter out “insignificant” data using a rolling or historical standard deviation threshold and extend indicators to use only “significant” data. For example, if one were to use a 200-day moving average on the S&P 500, you might stipulate that market moves between -0.25% and 0.25% are too small to be considered significant in defining the trend. That is, a small up or down day (or a series of small days) may cause a trade without signaling a true change in the underlying trend. The threshold can also be expressed as a fraction of a rolling standard deviation unit. To calculate the true 200-day moving average in the first case, one would eliminate all insignificant days from the data set and count back in time until there were 200 days of significant data to calculate the moving average. The results in the paper demonstrate that this type of filtering is effective at increasing the signal-to-noise ratio and improving trading results across a wide range of parameters. The paper also shows that the same technique improves a short-term mean-reversion system using runs. This highlights the potential of applications that can filter white noise from the data.
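
A minimal sketch of the fixed-threshold version of this filter (the rolling-standard-deviation variant would replace the constant with a trailing estimate); the function name and demo data are my own, not from the paper:

```python
import numpy as np

def filtered_sma(prices, window=200, threshold=0.0025):
    """Moving average over the last `window` *significant* closes only.

    A day is significant when its absolute return exceeds `threshold`
    (0.25% here, matching the example in the text). Insignificant days
    are dropped, so the effective lookback stretches past `window`
    calendar days whenever quiet days occur.
    """
    prices = np.asarray(prices, dtype=float)
    rets = np.diff(prices) / prices[:-1]
    significant = np.abs(rets) > threshold
    sig_prices = prices[1:][significant]   # close of each significant day
    if len(sig_prices) < window:
        return None                        # not enough significant history
    return sig_prices[-window:].mean()

# Tiny demo: the two ~0.01% moves are ignored by the filter.
print(filtered_sma([100.0, 100.01, 101.5, 101.51, 103.0], window=2))  # prints 102.25
```

Note how the 2-period average is taken over the two significant closes (101.5 and 103.0) even though they are separated by a quiet day, which is exactly the lookback extension described above.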

There are multiple extensions of this concept, many beyond the scope of this post. One seemingly obvious method would be to require that an insignificant day also have insignificant trading volume; presumably, below-average volume signifies a lack of conclusive agreement on the current market price. On the flip side, a seemingly small market move accompanied by very heavy trading volume could be a warning sign. Another method (on George’s suggestion) could look at the day’s high-to-low range in relation to the past (i.e. like DV2). Presumably a tight daily range implies insignificant movement, while a wider range is more informative. One can picture using multiple filters to better separate truly significant trading days from insignificant ones. This would in turn significantly improve trading signal performance or forecasting ability.
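
One way to combine the three filters mentioned above (return size, volume, and daily range) is to flag a day as insignificant only when it fails all of them; the thresholds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def insignificant_days(rets, volume, day_range, ret_thresh=0.0025):
    """Flag days that fail all three filters: a small move, AND
    below-average volume, AND a narrow high-to-low range (DV2-style)."""
    rets, volume, day_range = map(np.asarray, (rets, volume, day_range))
    small_move = np.abs(rets) < ret_thresh
    quiet_volume = volume < volume.mean()            # weak participation
    tight_range = day_range < np.median(day_range)   # narrow daily range
    return small_move & quiet_volume & tight_range

# Day 0: tiny move, light volume, tight range -> insignificant.
# Day 1: large move; day 2: small move but heavy volume -> both kept.
flags = insignificant_days([0.001, 0.010, 0.001],
                           [50, 200, 200],
                           [0.005, 0.020, 0.020])
print(flags)
```

Requiring agreement across all filters is the conservative choice: a small move on heavy volume or a wide range still counts as significant, consistent with the warning-sign intuition above.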

12 Comments
  1. Joe
    April 23, 2013 1:01 pm

    “To calculate the true 200-day moving average in the first case, one would eliminate all insignificant days from the data set and count back in time until there were 200 days of significant data to calculate the moving average.”

    There are no insignificant days in the market.

    “The results in the paper demonstrate that this type of filtering is effective at increasing the signal-to-noise ratio and improving trading results across a wide range of parameters.”

    In hindsight only.

    • david varadi
      April 23, 2013 8:34 pm

      I am posing this as an area of exploration, and while I cannot prove that this concept is true, you most certainly cannot make the concrete statements above and be definitively correct either. There is validity to the concept; as for the results and application, those are up to the researcher to investigate. Certainly there is enough practical and academic evidence to lend credibility to white noise and significant/insignificant market days as a feasible/worthwhile area of investigation.
      best
      david

  2. April 25, 2013 1:03 am

    Interesting article. Just curious…how is the concept different from using MA channels or BB or Keltner bands? The underlying premise seems to be the same.

    • MachineGhost
      April 26, 2013 4:44 am

      It sounds like the concept strings together only the significant days for calculating an indicator. Similar in concept to using range bars.

      • david varadi
        April 29, 2013 11:44 am

        hi, yes you are correct. thanks
        david

    • david varadi
      April 29, 2013 11:44 am

      hi, i think that the concept primarily differs in that the length of the calculation window gets lengthened as insignificant days are identified. ie if you try to calculate a 200 day moving average and there are 5 insignificant days, you would have to extend the lookback window to more than 200 days in order to find 200 significant data points.
      good question
      best
      david

  3. Turing Complete
    April 27, 2013 6:09 pm

    If I understand the computation correctly a sequence of small moves that aggregate to a significant change in price will not be impounded into the filtered price until it is too late. Is that a reasonable thing to do?

    • david varadi
      April 29, 2013 11:46 am

      hi, this is only costly if there is very little or no white noise–essentially there is a latency/noise tradeoff which
      should be dominated by noise at short time periods if noise does in fact exist.
      best
      david

  4. Joe
    April 28, 2013 10:19 am

    “The results in the paper demonstrate that this type of filtering is effective at increasing the signal-to-noise ratio and improving trading results across a wide range of parameters.”

    Far from a demonstration, possibly curve-fitting. Very old idea, I’m surprised it won a prize.

    • david varadi
      April 29, 2013 11:48 am

      hi Joe, my statements were not meant to be conclusive–so that is fair and indeed there are always the pitfalls of data mining.
      I think that there are some missing links–ie identifying whether filtering should be used in the first place which perhaps i can delve into at some point.
      best
      david

  5. Robert L
    June 12, 2013 9:38 am

    This sounds conceptually similar to smoothing techniques like kernel smoothing available in packages such as R. In both cases the result is a smoother curve, although I think there are probably some nuances in this article that are different … and which I don’t fully appreciate.

