
I have spent many years toiling over different asset allocation methodologies, including the application of both traditional and non-traditional portfolio optimization. Given the recent flurry of articles on this topic in the blogosphere, I felt it was worthwhile to share my two cents. Applying optimization in a tactical context is a topic that readers may already be familiar with; I recently posted an article on the subject on my LinkedIn: Think MPT Doesn’t Work? You Are Probably Using It the Wrong Way. Wouter Keller of Flex Capital, Adam Butler of BPG, and Ilya Kipnis of QuantStrat TradeR wrote a great paper referenced in that post that readers are encouraged to take a look at: Momentum and Markowitz: A Golden Combination. They show that using the MPT algorithm in a dynamic context with shorter-term data helps to capture the momentum effect while producing diversified portfolios with good risk-adjusted returns. This paper is in many ways an important contribution to a stream of research and practitioner debate that is at times imbalanced and one-sided- and without good logical reasons. MPT is widely and roundly criticized in the industry for perceived algorithm-specific flaws and for research showing poor out-of-sample performance. Of course, this is primarily because it is used the wrong way- at intermediate or longer time horizons that are ill-suited to the approach. It is also important to keep in mind that industry heavyweights such as AQR and Goldman Sachs have used variants of a dynamic MPT approach to build sophisticated portfolios that have performed very well for decades.

Some other related articles on the same topic that are quite interesting include The Universal Investment Strategy by Frank Grossman of Logical Invest, and Momentum and Diversification by Andrew Gogerty of Newfound Research- 3rd-place winner of the prestigious NAAIM Wagner Award. The optimization methodology in these two articles is nearly identical: both find maximum-Sharpe portfolios by using brute force to combine equity curves from a constrained set of choices into a portfolio instead of using MPT. It is important to understand that MPT and these approaches are essentially interchangeable for the most part (MPT finds the brute-force optimal solution mathematically). Grossman uses a variant of the objective function with a risk-aversion parameter. Newfound introduces the twist of allowing for different rebalancing windows within the lookback window, which is more similar to a dynamic programming approach. In both cases, I wanted to clarify to readers that finding the Sharpe ratio by combining equity curves (assuming daily rebalancing) is identical to using the calculated correlations, volatilities, and returns to compute Sharpe-optimal portfolios- so there is no escaping “estimation error”; it is just implicit rather than explicit.
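To see why the two routes coincide, here is a minimal Python sketch with simulated returns standing in for real equity curves: a brute-force grid search over the combined daily-rebalanced equity curves selects the same maximum-Sharpe weights as evaluating the Sharpe ratio from the estimated means and covariances.

```python
import numpy as np

rng = np.random.default_rng(0)
# two hypothetical daily return streams standing in for real equity curves
r = rng.normal([0.0005, 0.0003], [0.010, 0.006], size=(1000, 2))

mu, cov = r.mean(axis=0), np.cov(r.T)

def sharpe_from_curves(w):
    # combine the equity curves directly (daily rebalance) and measure Sharpe
    port = r @ w
    return port.mean() / port.std(ddof=1)

def sharpe_from_moments(w):
    # the same quantity from the estimated mean vector and covariance matrix
    return (w @ mu) / np.sqrt(w @ cov @ w)

grid = [np.array([x, 1.0 - x]) for x in np.linspace(0, 1, 101)]
best_bf = max(grid, key=sharpe_from_curves)    # brute-force search
best_mv = max(grid, key=sharpe_from_moments)   # "MPT-style" inputs

# both routes select the same weights: estimation error is implicit either way
assert np.allclose(best_bf, best_mv)
```

The point of the sketch is that the portfolio return series is just a weighted sum of the component returns, so its sample mean and variance are exactly the mean-vector and covariance-matrix quantities- the brute-force search is simply evaluating the same objective numerically.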

Wes Gray of Alpha Architect is always a good source of research and demonstrates the more traditional (non-tactical) use of MPT in asset allocation in his post: Beware of Geeks Bearing Formulas. Unfortunately, this post is not comparing apples to apples, since the MPT lookback parameters are longer-term than those of the simple tactical benchmarks being compared. As a consequence, the post is biased against the use of MPT in a dynamic format- a bias common within the industry and, in my opinion, a bit unfair, since there is more good to work with than bad. It just happens to be the case that using MPT in a tactical format comes with a set of unique complexities that do not plague simpler methods- these include higher turnover, concentrated portfolios, and greater sensitivity to estimation error. The higher level of estimation error occurs for several reasons. One is greater dimensionality, since there are many more inputs to estimate. Another is that in MPT the magnitude of returns dictates weights as well as their rank- in contrast, a basic momentum approach only pays attention to rank. This puts greater pressure on return estimation in MPT versus a simple momentum approach. A further issue is the integration of noisy/random correlations, which compound errors in return estimates. Adding correlations is important for stressing diversification, but only to the extent that they are not highly error-prone. Using MPT for allocating across investment strategies rather than asset classes is even more challenging, since strategies have far more complex inputs to estimate, and some inputs cannot be estimated quantitatively. On the positive side, using MPT in a tactical approach carries much less room for data-mining bias than building a simple tactical system using rules. This is especially true if the system builder is free to vary multiple parameters and may also choose the investment universe through repeated testing. Using one mathematically compact algorithm like MPT with a single lookback parameter is far less subject to these insidious data-mining problems.

I think the most important takeaway from the debate in the industry is that many algorithms, trading methods, or indicators are often unfairly discarded through improper or unsuitable analysis (or use) rather than for true deficiencies. A skilled cook can take a few mediocre or exotic ingredients and create a masterpiece, while less knowledgeable cooks can find the same box of ingredients wholly deficient for creating a suitable meal. There are plenty of examples of people who have been successful even with the ultimate black-box machine-learning approach- it is a hazardous path, much like climbing Mount Everest, but apparently there are some good climbers out there (see Renaissance Technologies). Of course, in good quantitative system design as in cooking, using great simple ingredients makes it easy to create a great meal without a lot of manipulation or effort. Pushing the edges by exploring more exotic applications creates greater risk of failure but also greater opportunity- and that is a risk worth taking in highly competitive markets. You just need to have a good understanding of where to draw the line. **To that extent, I guess the decision to incorporate MPT within tactical asset allocation is ironically a matter concerning utility curves…**


The results seem to clearly favor Real Momentum- which is impressive considering we simplified the calculation and also extended the test over 10 years that are “out of sample.” On average, the Real Momentum signal produces nearly a 1% annualized advantage in CAGR and a nearly 15% improvement in the Sharpe ratio. It seems on the surface that real equity risk premiums may be more important to large investors who can move markets. But as my colleague Corey Rittenhouse points out, if you aren’t going to invest in something that has a negative real rate of return, then you need to have an alternative. I agree with this point, and one logical option is to hold TIP- the Treasury Inflation-Protected Securities ETF- when Real Momentum is negative. Using 120-day Real Momentum with the strategy parameters above, a baseline strategy goes long SPY/S&P500 when Real Momentum is >0 and holds TIP when it is <0. Here is what this looks like:

For comparison, here is absolute momentum using SPY and SHY with the same 120-day parameter:

As you can see, the Real Momentum strategy outperforms the Absolute Momentum strategy, with higher accuracy on winning trades, higher gains per trade, higher return, and a higher Sharpe ratio with a similar maximum drawdown. Some readers may point out that this comparison may not be fair because TIP returns more than SHY as the cash asset. As the first table shows, the timing signal itself is superior, so that is unlikely to be the driving factor. But just to prove the point, here is the Absolute Momentum strategy using SHY as the asset to trigger the signal but holding TIP as the cash asset:

This is substantially worse than the Real Momentum strategy and worse than the Absolute Momentum strategy using SHY as the cash asset. While not shown, using TIP as both the signal asset and the cash asset does the worst of all. So apparently there is something to looking at Real Momentum- or effectively the expected real return to the broad equity market/S&P500. This is not the final word on the strategy, and it would be helpful to run an even longer-term test (one can never have too much data, as they say…). But after looking at the performance of other risk assets using this signal, I can’t reject the hypothesis that there is something there at first pass. It is something that makes sense, and it seems to be supported by the data even after simplification and an out-of-sample test. It would be interesting to run a deeper analysis to see what is going on and whether this is merely a spurious result driven by some other factor. A basic Real Momentum strategy that holds the S&P500 when expected real returns are positive and holds Treasury Inflation-Protected Securities when they are negative earns very good returns and risk-adjusted returns and beats buy-and-hold over a 20-year period by nearly 5% annualized. The strategy also happens to be relatively tax-efficient compared to more complex strategies, which is a bonus.
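For readers who want to experiment with the switching logic, here is a minimal Python sketch- the data is simulated and `real_mom` is only a placeholder for the actual 120-day Real Momentum series described in the original post:

```python
import numpy as np
import pandas as pd

# simulated data for illustration only- in practice these would be actual
# SPY and TIP total returns and the real 120-day Real Momentum series
rng = np.random.default_rng(1)
idx = pd.bdate_range("2005-06-01", periods=500)
spy = pd.Series(rng.normal(0.0004, 0.012, len(idx)), index=idx)
tip = pd.Series(rng.normal(0.0002, 0.004, len(idx)), index=idx)
real_mom = spy.rolling(120).mean()   # placeholder signal

# long SPY when Real Momentum > 0, park in TIP otherwise;
# lag the signal one day to avoid look-ahead bias
in_spy = (real_mom > 0).shift(1, fill_value=False)
strat = spy.where(in_spy, tip)
equity = (1 + strat).cumprod()
```

The one-day lag on the signal is a deliberate design choice: the signal computed from today's close can only be traded tomorrow.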


The concept has always been appealing to me, and it makes sense to use this method to reduce the downside risk of holding a chosen asset class. In thinking about this concept, I could see why excess returns- the return minus the risk-free rate- were theoretically appealing, since this is the basis of modern financial economic theory. But I also realized that investors do not earn nominal returns- they earn real returns net of inflation. The cost of living goes up, and so nominal returns must keep pace with inflation in order to provide an investor with a real return on their investments. It is rational for an investor to avoid assets with negative excess returns. If the excess return is negative net of inflation (that is, the real excess return is negative), then this should make an asset even less desirable for an investor.

The challenge is that inflation is somewhat elusive. Measures such as the CPI- the Consumer Price Index- are released monthly with a lag, and are at best a vague measure of the change in the cost of goods for a typical consumer. Perhaps one of the best ways to get a real-time estimate of inflation is to look at the yield curve of Treasury Inflation-Protected Securities (TIPS) versus regular Treasury bonds of comparable duration. The difference between these two yields represents expected inflation, which is forward-looking. Since there is often no matching bond duration for a TIPS versus a traditional Treasury, this yield needs to be interpolated using a nonlinear estimation. A quick and convenient (albeit imperfect) way to capture this is to look at the difference in returns between the 7-10 Year Treasury Bond ETF (IEF) and the Treasury Inflation-Protected Bond ETF (TIP), both of which trade daily. Both have an effective duration of approximately 8 years, which makes them roughly equivalent. The daily difference in their total returns is essentially the change in expected inflation. Since this can be somewhat noisy, I chose to smooth it using a 10-day average. To proxy the risk-free rate, I use the short-term Treasury ETF (SHY). To calculate “Real Momentum”, I use an average of daily real excess returns. This is essentially the daily return of an asset minus the return of the risk-free rate (SHY) and the smoothed return of expected inflation (10-day SMA of the daily return difference between TIP and IEF).

**Real Momentum** = return of asset - risk-free return - expected inflation

or the simple moving average of:

daily return of asset - daily return of risk-free proxy (SHY) - daily return (smoothed) of expected inflation proxy (TIP - IEF, smoothed)
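To make this concrete, here is a minimal Python sketch of the calculation, using simulated daily returns as stand-ins for the actual ETF series (the 120-day lookback is just one illustrative choice):

```python
import numpy as np
import pandas as pd

def real_momentum(asset, shy, tip, ief, lookback=120, smooth=10):
    """Average daily real excess return per the definition above: asset
    return, minus the risk-free proxy (SHY), minus the smoothed expected
    inflation proxy (daily TIP return minus daily IEF return)."""
    expected_inflation = (tip - ief).rolling(smooth).mean()
    real_excess = asset - shy - expected_inflation
    return real_excess.rolling(lookback).mean()

# simulated daily returns standing in for the actual ETF series
rng = np.random.default_rng(2)
idx = pd.bdate_range("2005-06-01", periods=400)
spy = pd.Series(rng.normal(0.0004, 0.012, len(idx)), index=idx)
shy = pd.Series(rng.normal(0.0001, 0.0005, len(idx)), index=idx)
tip = pd.Series(rng.normal(0.0002, 0.004, len(idx)), index=idx)
ief = pd.Series(rng.normal(0.0002, 0.004, len(idx)), index=idx)

rm = real_momentum(spy, shy, tip, ief)
signal = rm > 0   # long the asset when Real Momentum is positive
```

Note that the first `smooth + lookback` values are undefined while the rolling windows warm up, which is why the start date of any test needs to allow for the lookback.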

For comparison with Absolute (conventional time-series) Momentum, it is important to use an analogous average daily return proxy, which is simply the average of the daily return of an asset minus the return of SHY. Here are the results comparing Real Momentum with Absolute Momentum from June 2005 to present using the S&P500 (SPY). Note that there is limited data for TIP, so this was approximately the earliest start date that could accommodate the different lookbacks.

Over this 10-year period, it appears that Real Momentum is superior to Absolute Momentum, which matches what we might expect theoretically. On average, the difference appears to be marginally significant on visual inspection. But I am not yet convinced by these preliminary tests that the difference is real (no pun intended). Trend-following strategies require a lot of data to establish statistical significance because they don’t trade very frequently. A longer testing period would be preferable, along with a test that incorporates the real yield instead of the TIP/IEF differential, which is not a perfect basis for comparison (which is why smoothing is preferred to using the raw daily difference). Alternatively, one could use a proxy for TIP that goes back farther in time. Since this testing is in the preliminary stage, I would caution that it is difficult to draw any firm conclusions just yet. But the concept of real absolute returns is appealing; it is just trickier to quantify given that inflation itself can be calculated in so many different ways. Feel free to share your ideas/comments and suggestions on this interesting topic.


**Note:** *James Picerno of The Capital Spectator recently did an interesting piece evaluating the Self-Similarity Metric and provides some R code, which is valuable for many of our readers.*

The principle of parsimony relates to being frugal with resources such as money or computing time. It is closely tied to the principles of simplicity, elegance, and efficiency. It also complements the philosophical principle of Occam’s Razor, which states that the simplest explanation with the fewest assumptions is often closest to the truth. Whether doing statistical modelling or building trading systems, it would be wise to respect the power of this principle. **Parsimonious models or trading systems are often robust, while overly complex models with too many assumptions are not.** The difficulty is in telling the difference- which is not obvious even to a talented and experienced developer, and virtually invisible to almost everyone else.

The backtest is the problem and the great distractor in the quest for parsimony. It is like a picture of a scantily clothed beautiful woman beside a paragraph of important text- no one is interested in the fine print. A beautiful backtest is admittedly just as satisfying to look at (perhaps even more so for quants!) and can blind us to the details that got us to the end point. And while we all appreciate some good “chart porn”, there are some important questions to consider: What universe did we select and why? Why did we omit certain assets or choose certain parameters over others? Why did we choose one indicator or variable over another- and how do we know it is superior? Why do we trade at a certain rebalancing frequency versus another, and is this relevant to the model? Most important of all: **Can I create a trading system with similar results with far fewer assumptions and less computational power?** That should be your goal- to achieve the maximum results with the fewest assumptions and the least resource usage.

For example, I am well aware that the Minimum Correlation Algorithm does not mathematically optimize the correlation matrix or find the most diversified portfolio. Nor does the Minimum Variance Algorithm minimize variance relative to a true MVP solution. But both use an intuitive and simple method that meets or often exceeds the results of the more complex solutions with fewer resources, and hence can be considered parsimonious. They are also less dependent on estimates for optimization inputs. Such systems are more likely to work in the uncertain and messy world that we actually live in. Cooking is a hobby of mine, and more recently I have strived to achieve the most with the least, ensuring that all of my marginal choices of ingredients or departures from traditional technique are actually adding value. There is no point sounding fancy by adding exotic ingredients or elaborate techniques if they don’t change the taste for the better. These give the illusion of expertise to the unsophisticated, but to the top chefs judging these dishes on FoodTV they only serve to highlight a cook’s deficiencies. **My advice is to work with things that you can understand or intuitively grasp, and be very careful when trying newer and more complex methodologies.** Master what you can with the tools at your disposal instead of reaching for the latest and greatest new toy. This may sound strange coming from a blog built around offering new ideas and concepts- but rest assured, this is some of the best advice you will ever receive.

All of the questions I posed above relating to trading systems are quite material, and many cannot be answered quantitatively. Unfortunately for the quantitatively inclined, the principles of good logic often get lost while decoding proofs, cleaning data, or debugging computer code. Furthermore, the elegance of complex math is like comfort food for the highly intelligent, and it is easy to forget that the assumptions of these models are a far cry from describing reality. Even the more experienced developers who are aware of these problems may arrive at the wrong approach to system development. **The solution is not to avoid making ANY decisions or assumptions (although relying less on specific parameters or universes, for example, is desirable), but rather to make sensible choices with few assumptions. Another alternative is to build a methodology that directly makes choices quantitatively to create a parsimonious model. Both methods have their strengths and weaknesses.**

At the end of the day, there is no point making something more complicated than it needs to be unless the benefits are material. The same is true of the runtime and complexity of the computer program that runs the trading. My brother is a professional hiker and has traversed extreme mountain terrain. Unlike most amateurs, he does not pack everything under the sun that might be useful for his trip. Instead he focuses only on the essentials and on minimizing weight. **More importantly, he focuses on planning for what can go wrong and makes his choices of gear and hiking route accordingly.** The black-and-white realities of survival bring these questions to the forefront. In contrast, the more comfortable and forgiving world of offices and computers makes trading system decisions seem almost like a video game. Rest assured, it is not…


I have to say that one of the most rewarding aspects of this blog has been my interaction with readers (and fellow bloggers) at various levels. I have developed several relationships over the years, and some of these turned into new business opportunities. Many years ago, while actively running CSS Analytics, I was fortunate to work with a core group of very talented and dedicated people. It has been nice to see that many of these individuals have become quite successful in the quant world. One of the original members of that talented group was David Abrams. We have spent a lot of time on system development over the years, and although we no longer actively collaborate, we still manage to keep in touch. David reached out to me with some visuals and analysis on the chaos/stability self-similarity indicator I recently presented on the blog. I suggested that we post this for CSSA readers, and he was kind enough to agree to share.

**Dave Abrams is Director of Quantitative Strategies at Wilbanks, Smith and Thomas Asset Management (www.wstam.com) in Norfolk, VA, where they design risk managed fund models and S&P indices (about 400M of the firm’s 2.5B in AUM is quant). He was formerly part of a group doing quant research at CSS Analytics.**

**Visualizing the Chaos Stability Indicator**

It is useful to visualize DV’s new self-similarity regime method in a TradeStation chart. Here are the strategy and indicator using the default parameters (N = 10 days, 60-day average, 252-day percent-rank length). I transformed the indicator by subtracting 0.5 to center the values around 0, and displayed it with a color-coded histogram.

Here is what the Chaos Stability as a buy/sell indicator on the SPY looks like:

The indicator is currently in a new “sell” position as the value is below zero.

This perhaps reflects the more random and sideways market movement that we have had over the past few months. As with any indicator, the bearish signal is not perfect: from April through early July of 2014 the market made good upward progress despite the chaos stability values being mired deep in the red. It is useful to look at some other charts to get a sense of when the buy and sell signals occur.

It is hard to discern what is going on without careful inspection, but it seems the chaos/stability indicator flashes buy signals at swing lows where the market has been moving persistently downward, in persistent bull moves upward out of corrections, or at the top of established rallies. Sell signals tend to occur near market tops where things get choppy, or in areas of congestion where the market is moving sideways within its long-term trend. Since persistency occurs in both up and down moves, the signals are uncorrelated or even negatively correlated to a trend-following strategy, as highlighted in the original post. This is important to those looking to diversify a trend strategy on a single asset.

**Smoothed Chaos Stability Metric**

One of the challenges I noticed when looking at the charts was that the indicator frequently switched from buy to sell- especially when the value hovered close to zero. Smoothing seemed to be a logical way to reduce some of the whipsaw trades and reduce turnover. To address this issue, I applied John Ehlers’ Super Smoother method (http://traders.com/Documentation/FEEDbk_docs/2014/01/TradersTips.html) to the Chaos Stability measure. Notice the indicator below. This reduced the number of trades by 11% while the Profit Factor went up by 6%.
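For those who want to replicate the smoothing, here is a minimal Python sketch of Ehlers’ two-pole Super Smoother as described in the linked Traders’ Tips article (the default 10-bar period is just an illustrative choice):

```python
import math

def super_smoother(x, period=10):
    """John Ehlers' two-pole Super Smoother filter, as described in the
    Traders' Tips article linked above."""
    a1 = math.exp(-1.414 * math.pi / period)
    b1 = 2.0 * a1 * math.cos(1.414 * math.pi / period)
    c2, c3 = b1, -a1 * a1
    c1 = 1.0 - c2 - c3
    out = list(x[:2])   # seed the recursion with the raw series
    for i in range(2, len(x)):
        out.append(c1 * (x[i] + x[i - 1]) / 2.0 + c2 * out[-1] + c3 * out[-2])
    return out

# sanity check: a constant series passes through the filter unchanged
flat = super_smoother([1.0] * 50)
```

Because the filter coefficients sum to one, a constant input is passed through unchanged- the filter only attenuates the high-frequency wiggles that cause the whipsaw trades.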

**Walk Forward Analysis**

One of the challenges with new indicators is that they tend to promise a lot but fail to deliver in the real world. Often this happens because the examples presented by the developer are tuned to a specific set of parameters- in other words, the indicator is overfit and not robust. So is DV’s innovative new indicator just lucky, or is it stable? One of the best ways to evaluate whether an indicator has any predictive power is to perform a walk-forward test; an indicator with no predictive power will tend to fail such a test. For this evaluation, I ran a walk-forward test in TradeStation. This module continuously re-optimizes the parameters so that at each period of time we are using out-of-sample results. We can have greater confidence in the strategy if it performs well walk-forward. The results are below:

Based on these criteria for evaluation, the DV Chaos/Stability indicator passes with flying colors. In addition to passing the walk-forward test, the logic of the indicator is also sound- an often overlooked but important qualitative assessment. In our quantitative research methodology we always apply both a walk-forward analysis and a qualitative assessment. The hypothetical equity curve from the walk-forward results, showing each out-of-sample period over time, is presented below.

TradeStation Walk-Forward Analyzer performance graph. The results are hypothetical, are NOT an indicator of future results, and do NOT represent returns that any investor actually attained.

Good quantitative research is a combination of different but stable ideas which either confirm each other or add diversity to the overall model. I agree with Ray Dalio that 15 uncorrelated return streams are the holy grail of investing (http://www.businessinsider.com/heres-the-most-genius-thing-ray-dalio-said-2011-9). DV’s chaos stability regime model could be a viable uncorrelated candidate.

Disclosure

The research discussion presented above is intended for discussion purposes only and is not intended as investment advice, recommendation of any particular investment strategy including any of the depicted models. There are inherent limitations of showing portfolio performance based on hypothetical & back-tested results. Unlike an actual record, hypothetical results cannot accurately reflect the effect of material economic or market factors on the price of the securities, and therefore, results may be over or under-stated due to the impact of these factors. Since hypothetical results do not represent actual trading and may not accurately reflect the impact of material economic and market factors, it is unknown what effect these factors might have had on the model depicted above. Past performance, whether based on hypothetical models or actual investment results, is not indicative of future performance.


The images above are the famous Sierpinski Triangle and the Koch Snowflake. These objects are “self-similar”, meaning that examination at finer levels of resolution will reveal the same shape. Both are examples of “fractal” geometry, and are characteristic of many phenomena in the natural world such as mountains, crystals, and gases. Self-similar objects are associated with simplicity, redundancy, and hence robustness; self-dissimilar objects are associated with complexity and chaos. Several mathematicians (including Mandelbrot) have observed that markets are clearly non-Gaussian, or non-normal. Markets exhibit “fat tails” and have a distribution that shares more in common with a Lévy distribution than with the normal distribution used so frequently in quantitative finance. But the market does not have a constant distribution- at times market behavior is fairly normal in character, while at other times it is wild and unpredictable. The question is how we can effectively determine which regime the market is in, so that we can apply the appropriate trading strategies to mitigate risk.

The essence of self-similarity and complexity is to compare the whole to its component parts. For example, let’s take a square that is divided into four separate squares of equal size. The area of the larger square is equivalent to the sum of the areas of its component squares. The same, of course, is true of a one-dimensional line, which is equivalent to the sum of its parts. One method of identifying self-similarity in the stock market is to look at the range, or the difference between the highs and the lows. We would expect that in a perfectly self-similar market, the longer range would be equivalent to the sum of the ranges measured over smaller intervals. The more chaotic the market is, the greater the difference between these two measures. Such market conditions would be characterized by a large ratio between the sum of the smaller ranges and the longer measure of range. This relationship is essentially fractal dimension, a measure of complexity. There are many ways to measure this, including the Hurst exponent, but the problem I have always found in my own humble research is that the suggested thresholds defined by specific absolute values do not seem to reflect information consistent with theory. I have often found that relative measures tend to be more robust and consistent- much the same way that the magnitude of past returns has less predictive value than the relative rank of past returns. Relative measures tend to be more stationary than absolute values. To compute this measure of self-similarity, I use the intraday range (high minus low) versus a longer range window. Here is how it is calculated:

1) find the high minus the low for each day going back 10 days

2) take the sum of these values (sum of the pieces)

3) find the 10-day range by taking the 10-day maximum of the highs and subtracting the 10-day minimum of the lows (the whole range)

4) divide the sum of the pieces by the whole range- this is a basic measure of fractal dimension/complexity

5) take the 60-day average of the 10-day series of the complexity values- this is the quarterly “chaos/stability” metric

6) use either the 252-day normsdist of the z-score or the percentile ranking of the chaos/stability metric

7) values above .5 indicate that the market is in a “chaos” regime and is much less predictable and non-stationary; values below .5 indicate that the market is stable and much more predictable.
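The steps above can be sketched in Python/pandas as follows (the synthetic high/low series is purely for illustration; step 6 is shown with the percentile-rank variant):

```python
import numpy as np
import pandas as pd

def chaos_stability(high, low, n=10, avg=60, rank=252):
    """Steps 1-6 above: sum of the n daily ranges divided by the n-day
    total range, averaged over `avg` days, then percentile-ranked over
    a `rank`-day window."""
    piece_sum = (high - low).rolling(n).sum()              # steps 1-2
    whole = high.rolling(n).max() - low.rolling(n).min()   # step 3
    complexity = piece_sum / whole                         # step 4
    metric = complexity.rolling(avg).mean()                # step 5
    # step 6: rolling percentile rank of the latest value
    return metric.rolling(rank).apply(lambda w: (w <= w[-1]).mean(), raw=True)

# synthetic highs/lows for illustration only
rng = np.random.default_rng(3)
close = pd.Series(rng.normal(0, 0.01, 800)).cumsum() + 100
high, low = close + 0.5, close - 0.5

cs = chaos_stability(high, low)
stable = cs < 0.5   # step 7: below .5 = "stable" regime
```

Note that the ratio in step 4 is bounded between 1 (a perfectly trending, self-similar stretch) and n (maximally overlapping daily ranges), which is why the relative percentile ranking in step 6 is the part that makes the signal comparable across eras.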

When the market is “stable” it is easier to apply effective quantitative trading systems. When the market is in “chaos” mode it is not necessarily volatile- rather, it is too complex for the standard measurement and calibration of basic linear prediction. Let’s look at how this measure performs over a long time period using the S&P500 as a test set. The recorded high and low values are generally the same until about 1963, which is when we will begin this test. Here is how the market performed in both regimes over the last 50+ years:

The market performs quite poorly in “chaos” conditions, and seems to make all of its long-term returns in the “stable” regime. Note, however, that the volatility is not materially different between the two regimes- this means we are capturing something different from just high- and low-volatility market conditions. Furthermore, the correlation between the chaos indicator signals and, for example, the basic trend signal of a 200-day moving average is -.116. This means we are capturing something different from just the market trend as well. The indicator is meant to define regimes rather than serve as a trading signal to go long or short, but clearly there are some interesting attributes worthy of further exploration and refinement.


In quantitative finance there is the concept of “Conditional Value at Risk” (CVaR), a calculation frequently used in risk management. The general idea is that you are trying to capture the expectation beyond a certain tail of the distribution. CVaR is preferred to value at risk because it is more comprehensive than looking at just one value. Likewise, Percentile Channels- like the traditional Donchian Channels that look at only one reference price- are similar to value at risk in that context. Perhaps a logical improvement, in the spirit of CVaR, would be to use the average of the prices beyond a certain percentile threshold. This is more like calculating the **expected** upper or lower bound for prices. Furthermore, to account for the fact that recent data is progressively more important than older data, we can weight such prices accordingly. In theory, the most important prices are at the extremes and should also be weighted as such. So Conditional Percentile Channels are simply a twist on Percentile Channels incorporating these two ideas. Here is how it would be calculated:

Basically you select thresholds like .75 and .25, and then you weight the prices beyond those thresholds according to both position in time (like a weighted moving average) and distance to the max or min. This gives you a more accurate expected upper or lower bound for support and resistance (at least in theory). I know I am going to regret this, but using the same strategy- the Percentile Channel Tactical Strategy from the last few posts- I substituted in the Conditional Percentile Channels using the same thresholds of .75 and .25. All other parameters are identical. Here is how that looks:

It looks like a slight improvement over the original strategy in both returns and risk-adjusted returns. In general, I just like the concept better, since it condenses more information about support/resistance than either Donchian Channels or Percentile Channels. It also represents a good complement to moving averages, which capture central tendency rather than price movement at the extremes. So there you have it- yet another twist on using channels.
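For the curious, here is a simple Python sketch of one way to implement the channel calculation described above. The particular weighting scheme (recency multiplied by distance to the extreme, plus one so no tail price is zero-weighted) is illustrative rather than the exact production calculation:

```python
import numpy as np

def conditional_percentile_channel(prices, upper_q=0.75, lower_q=0.25):
    """Average the prices beyond each percentile threshold, weighted by
    recency (like a weighted moving average) and by distance to the
    window extreme, to get expected upper/lower channel bounds."""
    p = np.asarray(prices, dtype=float)
    t = np.arange(1, len(p) + 1)   # recency weights: newer bars count more

    hi_mask = p >= np.quantile(p, upper_q)
    lo_mask = p <= np.quantile(p, lower_q)
    hi, hi_t = p[hi_mask], t[hi_mask]
    lo, lo_t = p[lo_mask], t[lo_mask]

    # weight = recency * closeness to the relevant extreme (+1 so no
    # price in the tail is zero-weighted)
    upper = np.average(hi, weights=hi_t * (hi - hi.min() + 1))
    lower = np.average(lo, weights=lo_t * (lo.max() - lo + 1))
    return upper, lower

u, l = conditional_percentile_channel(np.linspace(100, 110, 60))
```

For a rolling channel, the function would simply be applied over a moving lookback window of prices, just as with a standard Percentile or Donchian Channel.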


The table below compares the original strategy (channel rp) to other benchmarks, including: 1) ew- equal weighting of the assets in the portfolio; 2) rp- risk parity using the assets in the portfolio; 3) channel ew- the percentile channel TAA strategy using equal weighting; and 4) QATAA- the application of Mebane Faber’s trend-following strategy cited in his now-famous paper, A Quantitative Approach to Tactical Asset Allocation (in this case QATAA uses the same underlying assets and cash allocation as the percentile TAA strategy). Of course, QATAA is one of the inspirations for the strategy framework, and Meb always manages to publish interesting ideas on his World Beta blog. To avoid issues with different sources of extended data, **Systematic Investor begins the test in 2010 using the underlying ETF data** to show how the strategies have performed in the current bull market. If you are getting results in line with this test, then you can feel comfortable that you have the details correct- if not, you can use R and the code provided by Systematic Investor in the post.

After comparing results, Michael and I found a near-identical match (I also get a Sharpe of 1.42 and a CAGR of 8.93%)- a relief after all the commotion caused by the initial post (which was addressed in my now-amusing rant over here). The original strategy is the best performer of the bunch, since it applies multiple time frames as well as normalized bet sizing via risk parity (common for most trend-followers). As I have stated before, one of the reasons I like the Percentile Channel approach is that the signals are likely to be slightly different from what most asset managers and investors are using.
