# Forecast-Free Algorithms: A New Benchmark For Tactical Strategies

We all spend most of our time creating strategies with the promise of “alpha”: excess returns, adjusted for risk, relative to some benchmark. For many traders and investors, the most desirable strategies are tactical asset allocation models, because they are easy to implement and tend to be more reliable than capitalizing on short-term effects that are constantly in flux. One of the “founding fathers” of tactical asset allocation is [Mebane Faber](http://www.mebanefaber.com/2009/02/19/a-quantitative-approach-to-tactical-asset-allocation-updated/). He showed the utility of using long-term moving averages to trade various asset classes. This simple approach worked very well both in and out of sample, and also managed to preserve capital. Some of his other papers validated academic work showing the utility of momentum/relative strength for choosing between asset classes as well. Jeff Pietsch of [ETF Prophet](http://etfprophet.com/) has written about several different types of relative-strength models for broad asset classes, many of them worth reading, and the site also offers some basic models to follow. Michael Stokes at [MarketSci](http://marketsci.wordpress.com/) has also published research and runs a monthly model for investors to follow. All of the models above use return or price inputs to predict 1) whether an asset is likely to have a positive or negative return and 2) which assets will perform best in the future. In a sense, both approaches depend on forecasting absolute or relative returns and/or prices. If the past doesn’t predict the future, then such models fail to produce alpha.
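Faber's published timing rule is simple enough to sketch. Below is a generic, minimal illustration of the 10-month simple moving average rule (the function name and interface are my own; this is an illustration of the well-known rule, not Faber's actual code):

```python
import numpy as np

def sma_timing_signal(monthly_prices, window=10):
    """Faber-style timing rule: hold the asset when its month-end price is
    above its trailing simple moving average, otherwise hold cash.
    Returns 1 (invested) or 0 (cash) for each month once enough history exists."""
    prices = np.asarray(monthly_prices, dtype=float)
    signal = np.zeros(len(prices), dtype=int)
    for t in range(window - 1, len(prices)):
        sma = prices[t - window + 1 : t + 1].mean()
        signal[t] = 1 if prices[t] > sma else 0
    return signal
```

In a steady uptrend the price sits above its moving average and the rule stays invested; in a sustained decline it moves to cash, which is the capital-preservation behavior the post describes.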

But what should be the benchmark for such strategies? I would argue that a fair benchmark would be a model that is “forecast-free,” in the sense that it does not extrapolate returns or prices. Such a model would also be risk-neutral and seek to maximize diversification. In the absence of opinions about relative returns, we would treat each asset class equally and reduce the dependency, or correlation, between them. This portfolio would be agnostic to relative returns, and theoretically should be optimal if the market were truly efficient. As it turns out, such a portfolio does a lot better than you might expect, implying that 1) assets are much more efficiently priced than we think, and 2) tactical asset allocation models offer varying “beta” payoffs: a) they reduce downside risk (unfavorable beta) while preserving the upside (favorable beta), similar to trend-following models using moving averages and cash holdings (for a good overview of the literature, read [Automated Trading Systems](http://www.automated-trading-system.com/)), or b) they create beta through relative asset selection, dynamically increasing leverage in up markets by selecting the most volatile assets, and subsequently reducing leverage in down markets by selecting the least volatile assets.

I have conducted exhaustive testing on such a “forecast-free” model and related variants, which I like to term the “Minimum Correlation Algorithm.” It is perhaps the most robust model or “system” that I have ever tested, in the sense that it is largely invariant to the selection of assets or parameter values, and furthermore it performs very well on a risk-adjusted basis. Below is a simple test using seven major asset classes/indices: 1) S&P 500, 2) Nasdaq 100, 3) Russell 2000, 4) MSCI EAFE (Europe, Australasia, and Far East), 5) long-term Treasury bonds, 6) real estate, and 7) gold. Note that rebalancing was done on a weekly basis, and a trailing quarter of data was used within the algorithm to estimate correlations.

Looks pretty good, doesn’t it? Notice that this method often performs very well in difficult times (like this month!). It is hard to believe that this strategy is always invested in the market and does not care whether assets are going up or down. Considering that it is not a “system” of any sort, nor does it rely on parameters or multiple assumptions about market inefficiencies, it is truly an impressive result. It turns out that Markowitz was right: diversification is where it is at. Consider these results all the more impressive given that correlations between assets have been increasing since 2000. To me this is the true benchmark for a tactical asset allocation strategy: the Sharpe ratio of such a strategy should exceed that of the “forecast-free” approach in order to justify the risk, not to mention the fact that such strategies rely substantially on specific parameters that are constantly in flux (e.g., a 10-month SMA or 12-month ROC).
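The post does not disclose the algorithm itself, so the following is only a minimal sketch of one forecast-free weighting consistent with the description above: estimate correlations from trailing returns and weight each asset inversely to its average correlation with the rest. The function name and the inverse-average-correlation heuristic are my assumptions, not the actual Minimum Correlation Algorithm:

```python
import numpy as np

def inverse_avg_corr_weights(returns):
    """Hypothetical forecast-free weighting: estimate the correlation matrix
    from a window of trailing returns (rows = periods, columns = assets) and
    overweight assets with low average correlation to the others.
    No return or volatility forecasts enter the calculation."""
    corr = np.corrcoef(returns, rowvar=False)
    n = corr.shape[0]
    # average correlation of each asset with all *other* assets
    avg_corr = (corr.sum(axis=0) - 1.0) / (n - 1)
    # shift so scores stay non-negative (avg_corr is bounded by 1),
    # then normalize to fully invested long-only weights
    score = 1.0 - avg_corr
    return score / score.sum()
```

Rebalancing weekly with roughly a quarter of trailing data, as the test describes, would mean recomputing these weights each week on the latest window.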

I am not advocating against active management or tactical asset allocation. In fact, I believe there are ways to dramatically improve upon such models using heuristic portfolio algorithms, a topic I will cover at some point in the future, perhaps in a white paper. I am merely suggesting that the backtests of tactical models are not quite as impressive in light of the fact that you can probably do better, both in-sample and out-of-sample, by sticking with the more robust approach of simply diversifying intelligently. There are many further applications of this approach; one of the most obvious is to diversify among different trading systems or investment managers. A less obvious application would be to use the algorithm to create “learning ensembles” that blend different system signals into one composite voting mechanism or trading signal.

### Trackbacks

- Wednesday links: indifferent markets | Abnormal Returns
- Trading and Forecasting « STROM Macro
- ETF Prophet | Follow up FAQ: “Forecast-Free” Algorithms and Minimum Correlation Algorithm
- The Most Diversified or The Least Correlated Efficient Frontier « Systematic Investor
- Backtesting Minimum Variance portfolios « Systematic Investor
- Risk Decomposed: Marginal Versus Risk Contributions « CSS Analytics
- ETF Prophet | Risk Decomposition: Marginal vs Risk Contributions

Hi David,

This is a very interesting finding indeed. Have you written about the ‘Minimum Correlation Algorithm’ before?

Hi David,

1) When you say that Mebane Faber’s model tries to predict whether an asset is likely to have a positive or negative return and which assets will perform best in the future, I don’t agree. The model tries to get out of an asset when it carries too much risk. Indeed, when the model switches from an asset to cash, the average return of the asset is not negative. The benefit of the model is that it reduces all measures of risk while keeping the same level of return. Hence comes the alpha.

2) I do not understand how you compute your equity curve for the “Minimum Correlation Algorithm”. Can you explain it to us? Do you invert the correlation matrix to get the weights for each asset? Are you doing a mean-variance optimization (without the means, actually)?

3) Why is your algorithm risk-neutral? Do you mean the beta relative to each asset is zero? But you say “that this strategy is always invested in the market”. I am confused.

Thanks for the post!

4 remarks:

1) Mebane Faber is not a founding father of GTAA, which has been around since the beginning of the 90s. Nothing against the guy. He just published things that were very well known before. But he was the first to publish.

2) All GTAA strategies that move to cash at some point share a common problem regarding the estimation of risk. You cannot simply calculate a Sharpe ratio for the strategy using realized risk. You need to assume that the risk of the strategy is that of the riskiest asset class considered, since you may be fully invested in it at any moment in time (or a percentage of that risk if you limit the weight on a given asset class).

3) Performance is highly dependent on the choice of asset classes. Not that many people would have considered emerging markets in a GTAA system back in 91 for example. At that time you would have seen systems playing between US/UK/France/Germany/Japan. Hindsight is a real problem.

4) It is so easy to do data snooping in this field. Beware!

Thanks for the mention David.

This “forecast-free” model sounds very interesting (some people would argue with you that trend following / TAA does _not_ predict or forecast but merely reacts to price action ;-), but this is beside the point, and probably just semantics).

This post has me begging for a bit more details though (like Yaba Qi above). Do you simply adjust the weights for each portfolio constituent based on their past correlation numbers aiming to minimise the average correlation in the portfolio?

Do you think that the negative-slope yellow trendlines are the result of increasing correlations?

Thanks for the post anyway, and appreciate any more insights that you can share on this blog.

~Jez

Yes, you are such a tease to build a 1.61 Sharpe “forecast-free” model and set that as just the benchmark to beat. If it is just a benchmark, then you might not mind disclosing the full code, so that doubters like Thierry can understand how it is truly “forecast-free”.

Do not use me as an excuse to get more from him ;-)

Just giving my insight. I personally do not really care about the strategy itself. You can find hundreds published everywhere, all with sharpe > 0.7. In the end the result is always the same: you cannot stand the drawdowns.

Any action taken in a portfolio has a reason behind it (at least one would hope!).

Whatever that reason may be is based on assumptions (implicit or explicit, rational or irrational).

Those assumptions are forecasts.

It may not look like an equity analyst’s “MSFT is going to 50” forecast, but every single person forecasts.

Even doing nothing is a forecast. I would argue even the most naive buy and hold concepts have implicit assumptions about the world that drive why one would decide to follow it.

The real issues to consider are:

1) What are you truly forecasting? (http://bit.ly/oMaB3V) There are countless things that can be forecast. Direction? Magnitude? Benefit to a portfolio (i.e. correlation)?

2) How are you doing it? There are countless ways to forecast.

3) And most importantly…. What if my forecast is wrong?

-Ström

P.S. – Great post David. Very thought provoking.

I agree with Strom_trader’s points, but would add a little.

1) To estimate a correlation matrix and use it to construct a portfolio is a forecast of the future correlations.

2) Risk neutral is a term normally reserved for options pricing, not portfolio construction. Do you mean risk parity?

3) If you want people to use this as a benchmark, you’ll obviously need to provide more details of the algorithm.

4) Minimum correlation or some sort of diversification maximization strategies generally do not have Sharpe ratios that high.

5) You use gold, but not oil. I would think including oil would lead to worse risk statistics, since you have such good performance during the crisis in 2008.

6) Getting to that crisis, it appears that there’s a significant weight on TLT and (perhaps to a lesser extent) GLD. This might be indicative of a strategy more akin to risk parity (which historically has a high weight on bonds).

7) All these ETFs could be extended back further than 2003 using total return indices, giving you a longer time period of returns.

I am very interested to learn more about your approach to this system.

I attempted to build a model following your directions. I am not sure how to algorithmically find the minimum correlation portfolio, so I ran a couple thousand random portfolios and picked the one which minimized the trailing quarter’s weighted average correlation.
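For what it’s worth, a sketch of that brute-force search might look like the following. The function names, the trial count, and the pairwise-weighted definition of average correlation are my assumptions, since the commenter doesn’t spell them out:

```python
import numpy as np

def avg_portfolio_correlation(w, corr):
    """Weighted average pairwise correlation: each off-diagonal entry is
    weighted by the product of the two assets' portfolio weights."""
    n = len(w)
    off = ~np.eye(n, dtype=bool)          # mask out self-correlations
    pair_w = np.outer(w, w)[off]
    return np.sum(pair_w * corr[off]) / np.sum(pair_w)

def random_search_min_corr(corr, n_trials=2000, seed=0):
    """Monte Carlo search: sample random long-only, fully invested weight
    vectors and keep the one with the lowest weighted average correlation."""
    rng = np.random.default_rng(seed)
    n = corr.shape[0]
    best_w, best_c = None, np.inf
    for _ in range(n_trials):
        w = rng.random(n)
        w /= w.sum()                      # normalize to sum to 1
        c = avg_portfolio_correlation(w, corr)
        if c < best_c:
            best_w, best_c = w, c
    return best_w, best_c
```

A few thousand trials is crude but serviceable for a handful of assets; with more assets, an actual optimizer (as discussed further down the thread) scales much better than random sampling.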

I did find that the model benefited from the huge performance of asset classes which had low correlation to others, e.g. GLD and TLT. It would be interesting to see how the model performed when these asset classes acted more typically.

Thanks for this great post and the thoughtful reader comments.

One approach would be to use an optimizer and give it a correlation matrix rather than a variance matrix.

Pat,

I have a mean variance optimization package which takes as inputs prospective return, volatility, correlations, and minimum / maximum allocations. Do you have a suggestion about how to use this optimizer since we are not forecasting returns / volatility? Or does this process implicitly forecast both?

Thanks,

Michael

Yes, when you give inputs to an optimizer, you are implicitly saying that those are your forecasts (for some indefinite period). To get a minimum correlation portfolio you would set the expected returns to zero and the volatilities to 1. Depending on the flexibility of your optimizer, that may need to be “trick” rather than “set”.
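As a concrete sketch of that suggestion: with expected returns set to zero and all volatilities set to one, mean-variance optimization collapses to minimum variance computed on the correlation matrix. Under the simplifying assumption of an unconstrained optimizer (no long-only bounds), this even has a closed form; the function name is mine:

```python
import numpy as np

def min_corr_portfolio(corr):
    """Unconstrained minimum-variance weights computed on the correlation
    matrix, i.e., expected returns = 0 and all volatilities = 1.
    Without long-only bounds, some weights may come out negative (short)."""
    ones = np.ones(corr.shape[0])
    x = np.linalg.solve(corr, ones)   # solves C x = 1, i.e., x = C^-1 1
    return x / x.sum()                # normalize so weights sum to 1
```

With minimum/maximum allocation constraints like Michael’s package supports, there is no closed form and the optimizer does the same minimization numerically, which is where the “trick” of feeding it unit volatilities comes in.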

Hi Pat, are you sure setting the volatilities to 1 will work? It seems like that would underweight the low volatility assets and vice versa. Say bonds have the correlation direction that you need for the portfolio, but you would need to buy a lot of them to make any dent in diversifying EEM.

Correct that it would underweight low volatility assets relative to a low (or minimum) variance portfolio. But you are changing the criterion when you say you want minimum correlation.

If you look at some of the literature on low volatility investing, then you might come to the conclusion that a minimum correlation portfolio would make sense once you ruled out the 20% to 40% most volatile assets.

Hello,

Thanks for this very interesting article! Any update concerning your white paper?

Thanks, Marc

How is correlation calculated over a matrix with weighted variables? I could minimize it if I knew how to define it correctly.
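One common definition (an assumption on my part, since the post never pins this down) weights each off-diagonal pair of assets by the product of their portfolio weights, giving a single scalar to minimize:

```latex
\bar{\rho}_w \;=\; \frac{\sum_{i \neq j} w_i \, w_j \, \rho_{ij}}{\sum_{i \neq j} w_i \, w_j}
```

Here $\rho_{ij}$ is the correlation between assets $i$ and $j$ and $w_i$ are the portfolio weights; the diagonal terms ($\rho_{ii} = 1$) are excluded so that concentrating in one asset does not artificially inflate the average.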