## “2D Asset Allocation” Using PCA (Part 1)

Asset allocation is a complex problem that can be tackled with endless variations of approaches, ranging from theoretical (Mean-Variance) to heuristic (Minimum Correlation) or even “tactical” strategies. Another challenge is defining an appropriate asset class universe, which can introduce insidious biases that even experienced practitioners fail to grasp or appreciate. Reducing dimensionality and the number of assumptions is the ultimate goal. The simplest way to manage a portfolio is to revert to a CAPM world where there is a single market portfolio, and you leverage it or hold cash to meet your risk tolerance requirements. But this method requires defining a “market portfolio,” which in theory is the market-cap-weighted mix of investable asset classes but in practice is elusive to define, let alone determine on a real-time basis. What we really want is a sense of what drives systematic risk across a range of asset classes, along with a portfolio that best represents that systematic risk (offense) and a portfolio that is inversely correlated to it (defense). A parsimonious way to make that determination is Principal Component Analysis (PCA): isolate the first principal component (PC1) portfolio that explains most of the variation across a broad set of asset classes. In most cases, the first principal component will explain between 60-70% of the variation across asset classes and represents a core systematic risk factor. Taking a large basket of core asset classes, we can use PCA to identify this PC1 portfolio over the period from 1995-2018 using ETFs with index extensions. In this case we used the R code provided in Jim Picerno’s excellent new book, Quantitative Investment Portfolio Analytics in R.
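A minimal sketch of the PC1 extraction is shown below. This is not the book’s R code; the tickers are hypothetical proxies and the returns are simulated with one common factor purely for illustration:

```python
import numpy as np

def pc1_portfolio(returns, tickers):
    """Extract the first principal component (PC1) loadings from a
    (T x N) matrix of asset returns, plus the share of total variance
    that PC1 explains."""
    X = returns - returns.mean(axis=0)      # center each asset's returns
    cov = np.cov(X, rowvar=False)           # N x N covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    pc1 = vecs[:, -1]                       # eigenvector of largest eigenvalue
    if pc1.sum() < 0:                       # PCA sign is arbitrary; orient so
        pc1 = -pc1                          # risk-on assets load positively
    explained = vals[-1] / vals.sum()
    return dict(zip(tickers, pc1)), explained

# Toy example: simulated returns driven by one systematic factor,
# with the bond proxy loading negatively on it
rng = np.random.default_rng(0)
tickers = ["EEM", "QQQ", "IWM", "SPY", "TLT"]
market = rng.normal(0.0, 0.01, 500)
betas = np.array([1.4, 1.2, 1.1, 1.0, -0.5])
rets = np.outer(market, betas) + rng.normal(0.0, 0.004, (500, 5))
weights, explained = pc1_portfolio(rets, tickers)
```

In this simulated setup the aggressive proxies load positively on PC1 and the bond proxy loads negatively, mirroring the offense/defense split described above.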

We can see that this PC1 portfolio makes a lot of intuitive sense: the highest weights are in Emerging Markets, Nasdaq/Technology, and Small Cap (Offense). Asset classes with negative weights have an inverse relationship to this core systematic risk factor; the lowest are Long-Term Treasurys, followed by Intermediate Treasurys, Inflation-Protected Treasurys, the Aggregate Bond Index and Short-Term Treasurys (Defense). Effectively, the “Offense” portfolio is tilted toward the most aggressive asset classes, which tend to perform best during a bull market, while the “Defense” portfolio is tilted toward the most defensive asset classes, which tend to perform best in a bear market. With one calculation we have mathematically separated the asset classes into two broad groups/dimensions that can be used to create a wide variety of simple asset allocation schemes. In a subsequent post we will show some examples of how this can be done.

## Adaptive Volatility: A Robustness Test Using Global Risk Parity

In the last post we introduced the concept of using adaptive volatility in order to have a flexible lookback as a function of market conditions. We used the R-squared of price versus time as a proxy for the strength of the trend in the underlying market in order to vary the half-life in an exponential moving average framework. The transition function used an exponential formula to translate R-squared into a smoothing constant. There are many reasons why this approach might be desirable, from supporting a regime- or state-dependent volatility framework to improving the mitigation of tail risk by being more responsive to market feedback loops, as mentioned in this article from Alpha Architect. In the latter case, by shortening the volatility lookback when the market appears to be forming a bubble in either direction (as measured by trend measures such as the Hurst Exponent or R-squared), we can more rapidly adjust volatility to changes in market conditions.

In order to test the robustness of the adaptive volatility measure we decided to follow the approach of forming risk parity portfolios, which was inspired by this article by Newfound Research. Our simple Global Risk Parity portfolio uses five major asset classes: Domestic Equities (VTI), Commodities (DBC), International Equity (EFA), Bonds (IEF), and Real Estate (ICF). The choice of VTI was deliberate since we already did the first test using SPY: VTI contains the full spectrum of domestic equities including large, mid and small cap, whereas SPY is strictly large cap. We created simple risk parity portfolios (each position is sized by 1/vol, scaled by the sum of inverse vols across assets) with weekly rebalancing and a 1-day delay in execution. For realized volatility portfolios we ran each lookback individually, including 20-day, 60-day, 252-day and all history. To test adaptive volatility we ran 27 different portfolios that varied the maximum smoothing constant and the R-squared lookback: the maximum smoothing constant was varied among .1, .5 and .9, and the R-squared lookback among 10, 12, 15, 20, 25, 30, 40, 50 and 60 days. We chose to keep the multiplier (-10 in the original post) the same since it was, by design, a proxy for the longest possible lookback (all history). The testing period was from 1995 to present, and we used index extensions for the ETFs when necessary to go back in time. In the graph below we chart the return versus risk for each portfolio.
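The inverse-volatility sizing described above can be sketched in a few lines. This is an illustration of the weighting rule, not the actual backtest code; the vol figures are hypothetical placeholders:

```python
import numpy as np

def inverse_vol_weights(vols):
    """Simple risk parity sizing: each position is proportional to 1/vol,
    normalized by the sum of inverse vols so weights sum to 1."""
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()

# Hypothetical annualized vols for VTI, DBC, EFA, IEF, ICF
w = inverse_vol_weights([0.16, 0.18, 0.17, 0.06, 0.20])
```

As expected, the lowest-volatility asset (the bond sleeve here) receives the largest weight.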

We used a line to separate the performance of the realized volatility portfolios to better illustrate the superior performance of the adaptive volatility portfolios. All parameter combinations outperformed the realized volatility portfolios on a return basis. In terms of risk-adjusted return or Sharpe ratio, the realized volatility portfolios fell at the 0, 3.3, 13.3 and 33 percentiles of the distribution of all portfolios; in other words, nearly all the adaptive portfolios also outperformed on a risk-adjusted basis. Was there any systematic advantage to using certain parameters for the adaptive volatility portfolios? As it turns out, the maximum smoothing constant was less important than the choice of R-squared lookback. We stated in the previous post that shorter r-squared parameters were on average more desirable than long parameters, as long as they weren’t so short as to capture noise. Shorter lookbacks should allow the adaptive volatility to more rapidly adjust to current market conditions and therefore reduce drawdowns and improve returns. This pattern is validated when we average across smoothing constant values (hold them constant) and look at the return relative to maximum drawdown (MAR) as a function of R-squared lookback.
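For clarity, the 27 parameter combinations referenced above are just the cross-product of the two settings (the values come from the test description; this is a bookkeeping sketch, not the backtest itself):

```python
from itertools import product

# 3 maximum smoothing constants x 9 R-squared lookbacks = 27 portfolios
sc_max_values = [0.1, 0.5, 0.9]
r2_lookbacks = [10, 12, 15, 20, 25, 30, 40, 50, 60]
param_grid = list(product(sc_max_values, r2_lookbacks))
```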

Clearly the shorter-term R-squared values improved the return relative to maximum drawdown. While not shown, the drawdowns were much lower and drove this effect, while the returns showed a more modest improvement. The drawback to shorter lookbacks is increased turnover, which can be reduced by longer rebalancing windows or through improved smoothing measures or rules that dampen allocation changes without materially affecting results. Another alternative is to average all possible r-squared and smoothing constant portfolios with a tilt toward shorter r-squared parameters, striking a good balance between responsiveness and smoothness while mitigating the risk of a poor parameter choice.

In conclusion, this simple robustness test suggests that adaptive volatility is relatively robust and may have practical value as a substitute for, or complement to, realized volatility. We will do some single-stock tests in order to further investigate this effect and possibly compare to traditional forecasting methods such as GARCH. Additional exploration of this concept could vary the transition formula or the choice of trend indicator. Finally, it may be valuable to test these methods in a more formal volatility forecasting model rather than just a backtest, calibrating the parameters daily according to which are most effective.

Information on this website is provided by David Varadi, CFA, with all rights reserved. It has been prepared for informational purposes only and is not an offer to buy or sell any security, product or other financial instrument. All investments and strategies have risk, including loss of principal, and one cannot use graphs or charts alone in making investment decisions. The author(s) of any blogs or articles are principally responsible for their preparation and are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the views or opinions of others affiliated with our firm or its affiliates. Any conclusions or forward-looking statements presented are speculative and are not intended to predict the future performance of any specific investment strategy. Any reprinted material is used with the permission of the owner.

## Adaptive Volatility

One of the inherent challenges in designing strategies is the need to specify certain parameters. Volatility parameters tend to work fairly well regardless of lookback, but there are inherent trade-offs to using short-term versus longer-term volatility: the former is more responsive to current market conditions, while the latter is more stable. One approach is to use a range of lookbacks, which reduces the variance of the estimator or strategy; i.e., you have less risk of being wrong. The flip side is that you have not increased accuracy or reduced bias. Ultimately you don’t want to underfit relevant features any more than you want to overfit random noise in the data. Forecasting volatility can be beneficial toward achieving a solution but is more complicated to implement and exchanges lookback parameters for a new set of parameters. Using market-based measures such as the options market involves the fewest parameters and inherent assumptions and can theoretically improve accuracy, but the data is not easily accessible, and it is more useful for individual equities than for macro markets.
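As a sketch of the multiple-lookback idea, averaging realized volatility across several windows is a simple way to reduce estimator variance (an illustration of the general approach, not a specific method from the post; the lookbacks and simulated data are placeholders):

```python
import numpy as np

def composite_realized_vol(returns, lookbacks=(20, 60, 252)):
    """Average annualized realized vol across several lookbacks. This
    lowers the variance of the estimate but does not reduce its bias."""
    r = np.asarray(returns, dtype=float)
    vols = [r[-n:].std(ddof=1) * np.sqrt(252) for n in lookbacks]
    return float(np.mean(vols))

# Simulated daily returns with ~16% annualized volatility
rng = np.random.default_rng(1)
daily = rng.normal(0.0, 0.01, 300)
vol_est = composite_realized_vol(daily)
```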

An alternative approach is to create an “adaptive” volatility measure that varies its lookback as a function of market conditions. Using an exponential moving average framework, we can apply a transition function based on some variable that helps us decide which conditions call for shorter or longer lookbacks. More specifically, we vary the smoothing constant or alpha of the EMA using a mathematical transform of a chosen predictor variable. The benefit of this approach is that it can potentially improve outcomes by switching to shorter or longer volatility lookbacks as a function of market conditions, and it can be superior to picking a single parameter or a basket of multiple parameters. Furthermore, it can achieve a better trade-off between responsiveness and smoothness, which leads to better outcomes when transaction costs become an issue.

How do we choose this predictor variable? There are two observations about volatility that can help us determine what to use:

- Volatility can be mean-reverting within a particular market regime; this favors longer lookbacks for volatility to avoid making inefficient and counterproductive changes in position size
- Volatility can trend higher or lower during a transition to a new market regime; this favors shorter lookbacks for volatility to rapidly respond by increasing or decreasing position size

We can’t necessarily predict what regime we are in, so the simplest way to address these issues is to look at whether the market is trending or mean-reverting. The simplest method is to use the R-squared of the underlying price of a chosen market regressed against time. A high r-squared indicates a strong linear fit, or a high degree of trend, while the opposite indicates a rangebound or sideways market. If the market is trending (r-squared is high), then we want to shorten our lookbacks in order to capture any sudden or abrupt changes in volatility. If the market is trendless or mean-reverting (r-squared is low), then we want to lengthen our lookbacks, since we would expect volatility to revert to its historical long-term mean.
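The R-squared of price against time reduces to the squared correlation between price and a time index. A minimal sketch (the window lengths and data here are illustrative):

```python
import numpy as np

def r_squared_vs_time(prices):
    """R-squared of a linear fit of price against time. High values
    indicate a trending market; low values a rangebound one."""
    y = np.asarray(prices, dtype=float)
    t = np.arange(len(y), dtype=float)
    # For simple linear regression, R-squared = squared correlation
    r = np.corrcoef(t, y)[0, 1]
    return r * r

# A strongly trending series scores near 1; a sideways one scores low
trend = np.linspace(100, 120, 20)
r2_trend = r_squared_vs_time(trend)
rng = np.random.default_rng(1)
sideways = 100 + rng.normal(0, 1, 20)
r2_side = r_squared_vs_time(sideways)
```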

**Transition Function:**

In order to translate the R-squared value into a smoothing constant (SC) or alpha for an exponential moving average we need a mathematical transform. Since markets are lognormally distributed, an exponential function makes the most sense.

**SC = EXP(-10 × (1 − R-squared(price/time, length)))**

**SC = MIN(SC, .5)**

To get a more stable measure of R-squared we use a lookback of 20 days, but values between 15 and 60 days are all reasonable (shorter is noisier, longer has greater lag). By choosing -10 in the above formula, the measure defaults to an almost anchored or all-history lookback for the underlying’s volatility, which we expect to serve as an indication of “fair value” during periods in which volatility is mean-reverting. (Technically speaking, if the r-squared is zero, then substituting into (2-SC)/SC gives an effective lookback of 44052 days.) By capping SC at a maximum of .5 we limit the smoothing to a minimum effective lookback of 3 days (since 2/(n+1) = SC). Therefore the adaptive volatility metric can vary its effective lookback window between 3 days and all history.
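The effective-lookback figures above follow from the standard EMA relation between the smoothing constant and the lookback length, $SC = 2/(n+1)$:

$$
n \;=\; \frac{2}{SC} - 1 \;=\; \frac{2 - SC}{SC},
\qquad SC = e^{-10} \;\Rightarrow\; n = 2e^{10} - 1 \approx 44052,
\qquad SC = 0.5 \;\Rightarrow\; n = 3.
$$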

This formula is applied to take the exponential moving average of squared returns. To translate this into annualized volatility, take the square root of the final value and multiply by the square root of 252 trading days. We can compare this to the often-used 20-day realized volatility on the S&P 500 (SPY) to visualize the differences:
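Putting the pieces together, a sketch of the full calculation is below. This is my Python translation of the steps described above; the function name, the EMA seeding choice, and the simulated data are my own assumptions, not from the original post:

```python
import numpy as np

def adaptive_volatility(returns, prices, r2_len=20, sc_max=0.5):
    """Adaptive volatility: an EMA of squared returns whose smoothing
    constant is SC = min(exp(-10 * (1 - R^2)), sc_max), where R^2 is
    the trailing fit of price against time. Annualized vol is
    sqrt(EMA * 252)."""
    r = np.asarray(returns, dtype=float)
    p = np.asarray(prices, dtype=float)
    t = np.arange(r2_len, dtype=float)
    ema = r[0] ** 2                      # seed the EMA (a modeling choice)
    vols = np.full(len(r), np.nan)
    for i in range(r2_len, len(r)):
        window = p[i - r2_len + 1 : i + 1]
        r2 = np.corrcoef(t, window)[0, 1] ** 2
        sc = min(np.exp(-10.0 * (1.0 - r2)), sc_max)   # transition function
        ema = sc * r[i] ** 2 + (1.0 - sc) * ema        # EMA of squared returns
        vols[i] = np.sqrt(ema * 252.0)                 # annualize
    return vols

# Usage on simulated data (illustration only)
rng = np.random.default_rng(2)
rets = rng.normal(0.0, 0.01, 300)
prices = 100.0 * np.cumprod(1.0 + rets)
vols = adaptive_volatility(rets, prices)
```

Note how the cap `sc_max=0.5` enforces the 3-day minimum effective lookback, while a near-zero R-squared collapses SC toward `exp(-10)` and the measure barely updates, approximating the all-history anchor.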

Considering that adaptive volatility uses a much longer average lookback than 20 days, we can see that it has comparable responsiveness during periods of trending volatility and flat or unchanging readings during periods of mean-reverting volatility. This leads to an ideal combination of greater accuracy and lower turnover. Even without considering transaction costs the results are impressive (note that leverage in the example below has not been constrained in order to isolate the pure differences):

The results show that adaptive volatility outperforms realized volatility, and while not shown, this is true across all realized lookback windows. Relative to 20-day realized volatility, adaptive volatility outperforms by 3% annually with the same standard deviation. Factoring in transaction costs would widen this gap in returns significantly. Risk-adjusted returns are higher, and more impressively this comes with lower drawdowns even at the same level of volatility, thanks to the better combination of responsiveness and smoothness. In either case, I believe that adaptive volatility is an area worth considering as an alternative tool to research. One can come up with a variety of different predictors and transition formulas that may be superior; the appeal of r-squared, along with the exponential transition function, is that it is straightforward and intuitive.

## Risk Management and Dynamic Beta Podcast

I had the honor of speaking with **Mebane Faber** of Cambria Investment Management recently, where I discussed the topic of risk management and applying a dynamic beta approach on his widely popular podcast, **“The Mebane Faber Show”**. The interview is almost an hour long and covers a wide range of topics, whether you are a quant geek like myself or an investor.

Here is the link to the podcast.

**Episode #64: David Varadi, “Managing Risk Is Absolutely Critical”**

## Welcome QuantX!

I am very proud to announce that readers can finally have access to products based on many of the quantitative ideas used in the blogosphere and published in academic research. Yesterday we launched five new ETFs through the QuantX brand (linked to Blue Sky Asset Management). They provide the building blocks to design customized portfolios with downside protection, as well as ETFs focused on enhanced stock selection. The funds follow familiar quantitative strategies in a tax-efficient and transparent ETF wrapper. You can check out our new QuantX website: http://www.quantxfunds.com/ and our recent press release: http://www.marketwired.com/press-release/blue-sky-asset-management-launches-the-quantx-family-of-etfs-2191355.htm

Now that we have gone through the long and arduous launch process, I will have more time to write about quantitative ideas and also some of the cool new concepts behind the funds!

## Tracking the Performance of Tactical Strategies

There is a cool new website that tracks the performance of well-known tactical strategies. AllocateSmartly has collected an extensive list of strategies from well-known hedge fund managers like Ray Dalio along with several other portfolio managers and financial bloggers. The backtests for these strategies use a very detailed and comprehensive method that is both conservative and realistic. Where possible, the author uses tradeable assets rather than indices and factors in transaction costs along with careful treatment of dividends. The current allocations and performance are tracked in real time, which allows investors to realistically trade these portfolios.

Curiously, the best performing model tracked on the website this year is the Minimum Correlation Algorithm from CSSA, which says a lot about the importance of diversification in 2016 versus momentum and managing risk via trend-following/time-series momentum. In fact, if you dig deeper you will notice that most of the best performers have a structural or dynamic diversification element, while the worst performers have been the most concentrated and oriented toward identifying the best performers. As the website correctly points out, the diversification-oriented strategies tend to do well during normal market conditions, but the more dynamic and tactical strategies ultimately outperform during bear markets. Over longer backtest periods, the more truly tactical performers had better long-term performance. Different market regimes will reward different approaches depending on how predictable and interrelated the markets happen to be that year. An umbrella is great for a rain storm but less than ideal for a sunny day. That is why it is important to understand the strategies you are following and why you are investing in them rather than blindly chase performance.
While many quant developers and investors chase the best-looking equity curves, it is important to consider two primary factors: 1) the utility curve that works best for any one individual is a very personal choice (i.e., risk/reward and tracking error), and 2) you need to choose a set of capital markets assumptions, either going forward or over the long term: will returns, correlations or volatility be predictable, and if so, which will be the most predictable and why?

On a side note, I was informed that the very popular “A Simple Tactical Asset Allocation Strategy with Percentile Channels” by CSSA is also being added to the AllocateSmartly website very soon. This is a tactical and structural diversification hybrid that provides balanced factor risk with the ability to de-risk during market downturns. While it lacks the higher returns of more momentum-oriented or equity-centric strategies it provides a steady and low-risk profile across market conditions.

**Disclosure:** The author(s) principally responsible for the preparation of this material are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the view or opinions of others at BSAM or its affiliates. Any conclusions presented are speculative and are not intended to predict the future of any specific investment strategy. This material is based on publicly available data as of the publication date and largely dependent on third party research and information which we do not independently verify. We make no representation or warranty with respect to the accuracy or completeness of this material. One cannot use any graphs or charts, by themselves, to make an informed investment decision. Estimates of future performance are based on assumptions that may not be realized and actual events may differ from events assumed. BSAM is not acting as a fiduciary in presenting this material. Benchmark indices are presented or discussed for illustrative purposes only and do not account for deduction of fees and expenses incurred by investors.

The strategies discussed in this material may not be suitable for all investors. We urge you to talk with your investment adviser prior to making any investment decisions. Information taken from Minimum Correlation Algorithm strategy article is publicly available and used by a third party to generate the strategies and signals provided on AllocateSmartly.com. We have not reviewed and do not represent this information as accurately interpreted or utilized.

## Book Review: Adaptive Asset Allocation

I recently read “**Adaptive Asset Allocation**” (link to the book) by Butler, Philbrick and Gordillo of **ReSolve Asset Management**. The book is the culmination of research developed over the years by the ReSolve team toward a next-generation approach to dynamic asset allocation. The core principles of this approach are the ability to “go anywhere” and adapt to changes in the economic environment in the quest for greater risk-adjusted returns. (CSSA readers may recall a post we did a while back on adaptive asset allocation; if not, it is worth a refresher along with one of the original whitepapers on AAA.) The book is extremely well-written, and the chapters are easy to read, developing the story persuasively from cover to cover.

This book is not a dense quantitative tome, but rather a summary of a coherent and rigorously developed investment philosophy that is carefully built around academic research and concepts. To that extent, Adaptive Asset Allocation is a true “tour de force” and a key contribution to the field of asset allocation theory. Without this background, it is impossible to frame ideas properly within any trading system or tactical asset allocation model. It is far too easy to get confused by the wide range of possible approaches to portfolio management: should you use momentum? should you seek to minimize risk? should you use long-term or short-term estimates? should you include or exclude certain asset classes? what time frame should you trade on? Ultimately the answers to these questions are driven by having a framework that neatly incorporates which input assumptions you are confident in making versus those you know nothing about. The book really helps to address these key issues in the development of trading models/systems. Adaptive Asset Allocation also neatly ties in the natural link between an active asset allocation approach and financial planning. Much of this is both theoretical and based on the authors’ experience working as financial advisors with wealthy clients. The authors show that managing “volatility gremlins” with a portfolio management approach specifically designed to manage volatility itself is critical for investors in retirement. Adaptive Asset Allocation is not just an investment philosophy or a quantitative approach; the book shows that it is a coherent and comprehensive solution for wealth management.

**Who should read the book?:** If you are a short-term trader that is looking for trading system ideas this probably isn’t for you. But if you are an investor, a portfolio manager, or a trader interested in longer frequency models, this is an essential book that will help to develop and crystallize your thinking towards asset allocation.