
“2D Asset Allocation” using PCA (Part 2)

August 21, 2018

In the last post we showed how to use PCA to create Offense and Defense portfolios by focusing on the first principal component, or "PC1." Once the principal components have been extracted, it is possible to derive weights or portfolios for each one. Another good primer on using PCA for asset allocation was written by a reader of the blog, Dr. Rufus Rankin; the link for his book is here. We can separate the PC1 portfolio, which represents broad systematic risk, into two dimensions- Offense (Risk On) and Defense (Risk Off)- by isolating positive versus negative weights. To form each portfolio, take the absolute value of each weight and divide it by the sum of the absolute values of the weights within that portfolio, so that the Offense and Defense weights each sum to one. In this example we will use 8 core asset classes for the sake of simplicity: Domestic Equity, Emerging Market Equity, International Equity, Commodities, High Yield Bonds, Gold, Intermediate Treasurys, and Long-Term Treasurys. Here is the PC1 Offense Portfolio using the in-sample period from 1995-2018 on various ETFs with index extensions:
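As a quick aside before the chart, here is a minimal Python sketch of the weight construction just described. The post itself leans on the R code from Picerno's book, so treat this as a generic stand-in; the function name and the PC1 loadings shown are hypothetical placeholders, not the fitted values behind the charts.

```python
import pandas as pd

def offense_defense_weights(pc1_loadings: pd.Series):
    """Split PC1 loadings into Offense (positive) and Defense (negative)
    portfolios, normalizing absolute weights to sum to 1 within each group."""
    offense_raw = pc1_loadings[pc1_loadings > 0]
    defense_raw = pc1_loadings[pc1_loadings < 0]
    offense = offense_raw.abs() / offense_raw.abs().sum()
    defense = defense_raw.abs() / defense_raw.abs().sum()
    return offense, defense

# Illustrative PC1 loadings only -- placeholders, not the actual values from the post
pc1 = pd.Series({
    "DomesticEquity": 0.45, "EmergingEquity": 0.55, "IntlEquity": 0.44,
    "Commodities": 0.30, "HighYield": 0.20, "Gold": 0.10,
    "IntermediateTsy": -0.15, "LongTermTsy": -0.35,
})
offense_w, defense_w = offense_defense_weights(pc1)
```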

This portfolio shows that some of the more aggressive asset classes, such as emerging markets, have the highest weighting, while international and domestic equity have nearly equal weightings. Equity overall has the highest weighting in the offense portfolio, which is logical. Commodities take the second spot, while assets such as high yield bonds and gold have smaller weightings. In general this portfolio makes sense: for the most part, when the market goes down and systematic risk is very high, all of these asset classes have a tendency to fall, while during a bull market they tend to do very well. In contrast, the PC1 Defense Portfolio looks predictably like the opposite of the offense portfolio:

The PC1 Defense Portfolio is a high-duration portfolio tilted toward long-term Treasurys, which has historically performed quite well during recessionary periods or other periods when systematic risk is high. The performance of both the PC1 Offense and Defense Portfolios over time is plotted in the graph below.

In the graph we can clearly see the inverse correlation between the PC1 Offense and Defense Portfolios. As we would expect, each performs well at different times. A simple tactical model would be to hold the PC1 Offense portfolio when systematic risk is low and to hold the PC1 Defense portfolio when risk is high. To do this we can apply a 200-day simple moving average rule to the PC1 Offense portfolio on a daily basis (generating an equity curve from the PC1 weights, rebalanced monthly): hold the PC1 Offense portfolio when risk is on (the equity curve is above its 200-day SMA) and hold the PC1 Defense portfolio when risk is off (the equity curve of PC1 Offense is below its 200-day SMA). We can give this simple strategy a name- "2D Asset Allocation"- which represents the two dimensions into which we have separated the asset class universe: Offense and Defense. The performance of this strategy is shown below:
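A minimal sketch of the switching rule, continuing the hypothetical names from the earlier snippet (offense_w, defense_w). It simplifies the post's monthly rebalancing by holding fixed weights daily, and it acts on the prior day's signal to avoid look-ahead.

```python
import pandas as pd

def two_d_strategy(asset_returns: pd.DataFrame,
                   offense_w: pd.Series, defense_w: pd.Series,
                   sma_window: int = 200) -> pd.Series:
    """Daily returns of the 2D switch: hold Offense while its equity curve is
    above its 200-day SMA, otherwise hold Defense."""
    offense_ret = (asset_returns[offense_w.index] * offense_w).sum(axis=1)
    defense_ret = (asset_returns[defense_w.index] * defense_w).sum(axis=1)
    equity = (1 + offense_ret).cumprod()
    risk_on = (equity > equity.rolling(sma_window).mean())
    risk_on = risk_on.shift(1).fillna(False).astype(bool)  # trade on the prior day's signal
    return offense_ret.where(risk_on, defense_ret)
```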

This simple strategy performs quite well, even during the 2015 period, which was difficult for traditional momentum/trend-following strategies. Below is a table showing the summary statistics. A good tactical strategy will ideally perform better than a buy-and-hold version of its underlying offense/defense components over a full market cycle. Clearly the 2D Asset Allocation strategy does substantially better than either component in isolation.

The best part about this strategy is that it was by no means "curve-fit," since the 200-day SMA is a well-established rule and is not the optimal strategy on the PC1 Offense portfolio. Using PCA to reduce dimensionality and derive this portfolio is a well-established statistical practice. The only caveat is that the portfolio was derived "in sample," which is less than ideal but no different from the starting place from which traditional system developers create trading strategies via backtests. A better approach would be to use a rolling or anchored PCA to derive the two portfolios on a walk-forward basis. The choice of asset class universe in this case was designed to capture the major asset classes, but the good thing about PCA is that you can use just about any asset class universe you want without introducing undue bias by choosing an arbitrary subset. In either case, this is a good example of how tactical asset allocation can be greatly simplified. Refinements to the strategy could include holding a minimum allocation to PC1 Defense for diversification purposes, or using momentum within the PC1 Offense and Defense portfolios to overweight/underweight different holdings. The possibilities are endless.

 

This material is for informational purposes only. It is not intended to serve as a substitute for personalized investment advice or as a recommendation or solicitation of any particular security, strategy or investment product. Opinions expressed are based on economic or market conditions at the time this material was written. Economies and markets fluctuate. Actual economic or market events may turn out differently than anticipated. Facts presented have been obtained from sources believed to be reliable; however, we cannot guarantee the accuracy or completeness of such information, and certain information presented here may have been condensed or summarized from its original source.


“2D Asset Allocation” Using PCA (Part 1)

July 23, 2018

Asset allocation is a complex problem that can be solved using endless variations of different approaches, ranging from theoretical (like Mean-Variance) to heuristic (like Minimum Correlation) or even "tactical strategies." Another challenge is defining an appropriate asset class universe, which can lead to insidious biases that even experienced practitioners can fail to appreciate. Reducing dimensionality and the number of assumptions is the ultimate goal. The simplest way to manage a portfolio is to revert to a CAPM world where there is a market portfolio and you can leverage or hold cash to meet your risk tolerance requirements. But this method also requires one to define a "market portfolio," which in theory is the market-cap-weighted mix of investable asset classes, but in practice is elusive to define and to track on a real-time basis. What we really want is a sense of what drives systematic risk across a range of asset classes, a portfolio that best represents that systematic risk (offense), and a portfolio that is inversely correlated to that systematic risk (defense). A parsimonious way to make that determination is to use Principal Component Analysis (PCA), isolating the PC1 or first principal component portfolio that explains most of the variation across a broad set of asset classes. In most cases, the first principal component will explain between 60% and 70% of the variation across asset classes and represents a core systematic risk factor. If we take a large basket of core asset classes we can use PCA to identify this PC1 portfolio over the period from 1995-2018 using ETFs with index extensions. In this case we used the R code provided by Jim Picerno's excellent new book Quantitative Investment Portfolio Analytics in R.
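For readers who prefer a quick sketch over prose, here is a minimal Python version of the PC1 extraction. The post itself used the R code from Picerno's book, so this is a stand-in rather than the author's implementation; it assumes a pandas DataFrame of asset returns, one column per asset class.

```python
import numpy as np
import pandas as pd

def first_principal_component(returns: pd.DataFrame):
    """Extract PC1 from the return covariance matrix and report the share
    of total variance it explains."""
    cov = returns.cov()
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    pc1 = pd.Series(eigvecs[:, -1], index=returns.columns)
    if pc1.mean() < 0:                            # eigenvector sign is arbitrary;
        pc1 = -pc1                                # orient so risk assets load positively
    explained = eigvals[-1] / eigvals.sum()
    return pc1, explained
```

The positive and negative loadings of this PC1 vector are exactly what gets split into the Offense and Defense portfolios in Part 2.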

We can see that this PC1 portfolio makes a lot of intuitive sense: the highest weights are in Emerging Markets, Nasdaq/Technology, and Small Cap (Offense). Asset classes with negative weights have an inverse relationship to this core systematic risk factor, and the lowest are Long-Term Treasurys, followed by Intermediate Treasurys, Inflation-Protected Treasurys, the Aggregate Bond Index, and Short-Term Treasurys (Defense). Effectively, the "Offense" portfolio is tilted toward the most aggressive asset classes that are likely to perform best during a bull market, while the "Defense" portfolio is tilted toward the most defensive asset classes that are likely to perform best in a bear market. With one calculation we have mathematically separated the asset classes into two broad groups/dimensions, which can be used to create a wide variety of simple asset allocation schemes. In a subsequent post we will show some examples of how this can be done.

Adaptive Volatility: A Robustness Test Using Global Risk Parity

November 29, 2017

In the last post we introduced the concept of using adaptive volatility in order to have a flexible lookback as a function of market conditions. We used the R-squared of price as a proxy for the strength of the trend in the underlying market in order to vary the half-life in an exponential moving average framework; the transition function used an exponential formula to translate R-squared into a smoothing constant. There are many reasons why this approach might be desirable, ranging from a regime- or state-dependent view of volatility to better mitigation of tail risk by being more responsive to market feedback loops, as mentioned in this article from Alpha Architect. In the latter case, by shortening the volatility lookback when the market seems to be forming a bubble in either direction (as measured by trend measures such as the Hurst Exponent or R-squared), we can more rapidly adjust volatility to changes in market conditions.

In order to test the robustness of the adaptive volatility measure, we decided to follow the approach of forming risk parity portfolios, inspired by this article by Newfound Research. Our simple Global Risk Parity portfolio uses five major asset classes: Domestic Equities (VTI), Commodities (DBC), International Equity (EFA), Bonds (IEF), and Real Estate (ICF). The choice of VTI was deliberate since we already did the first test using SPY; VTI contains the full spectrum of domestic equities including large, mid and small cap, whereas SPY is strictly large cap. We created simple risk parity portfolios (each position is sized at 1/vol, scaled by the sum of inverse vol across assets) with weekly rebalancing and a 1-day delay in execution. For the realized volatility portfolios we ran each lookback individually, including 20-day, 60-day, 252-day and all history. To test adaptive volatility we ran 27 different portfolios that varied the maximum smoothing constant and the R-squared lookback: the maximum smoothing constant took values of 0.1, 0.5 and 0.9, and the R-squared lookback was varied using 10, 12, 15, 20, 25, 30, 40, 50 and 60 days. We chose to keep the multiplier (-10 in the original post) the same since it was, by design, a proxy for the longest possible lookback (all history). The testing period was from 1995 to present, and we used index extensions for the ETFs when necessary to go back in time. In the graph below we chart the return versus risk for each portfolio.
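A minimal sketch of the portfolio construction just described, assuming a pandas DataFrame of daily ETF returns. This shows the realized volatility version; for the adaptive portfolios, the adaptive measure from the previous post would be substituted for the rolling standard deviation. Function names and the Friday rebalance day are my own assumptions.

```python
import pandas as pd

def inverse_vol_weights(returns: pd.DataFrame, vol_window: int = 60) -> pd.DataFrame:
    """Risk parity weights: each asset is sized at 1/vol, scaled so the
    weights sum to 1 across assets each day."""
    vol = returns.rolling(vol_window).std()
    inv_vol = 1.0 / vol
    return inv_vol.div(inv_vol.sum(axis=1), axis=0)

def backtest_weekly(returns: pd.DataFrame, weights: pd.DataFrame) -> pd.Series:
    """Rebalance to the target weights once a week, then hold, with a
    one-day execution delay."""
    fridays = weights.index.weekday == 4
    held = weights[fridays].reindex(weights.index).ffill().shift(1)
    return (held * returns).sum(axis=1)
```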

 

We used a line to separate the performance of the realized volatility portfolios to better illustrate the superiority of the adaptive volatility portfolios. All parameter combinations outperformed the realized volatility portfolios on a return basis. In terms of risk-adjusted return, or Sharpe ratio, the realized volatility portfolios fell at the 0%, 3.3%, 13.3% and 33% percentiles of the distribution of all portfolios; in other words, nearly all the adaptive portfolios also outperformed on a risk-adjusted basis. Was there any systematic advantage to using certain parameters for the adaptive volatility portfolios? As it turns out, the maximum smoothing constant was less important than the choice of R-squared lookback. We stated in the previous post that shorter R-squared parameters were, on average, more desirable than longer parameters, as long as they weren't so short that they captured noise. Shorter lookbacks should allow the adaptive volatility to more rapidly adjust to current market conditions and therefore reduce drawdowns and improve returns. This pattern is validated when we average across smoothing constant values (hold them constant) and look at the return relative to maximum drawdown (MAR) as a function of R-squared lookback.

Clearly, the shorter-term R-squared lookbacks improved the return relative to maximum drawdown. While not shown, the drawdowns were much lower and drove this effect, while the returns showed a more modest improvement. The drawback to shorter lookbacks is increased turnover, which can be reduced through longer rebalancing windows, better smoothing, or rules that dampen allocation changes without materially affecting results. Another alternative is to average all possible R-squared and smoothing constant portfolios with a tilt toward shorter R-squared parameters, striking a good balance between responsiveness and smoothness while mitigating the risk of a poor parameter choice.

In conclusion, this simple robustness test suggests that adaptive volatility is relatively robust and may have practical value as a substitute or complement to realized volatility. We will do some single-stock tests to further investigate this effect and possibly compare it to traditional forecasting methods such as GARCH. Additional exploration could vary the transition formula or the choice of trend indicator. Finally, it may be valuable to test these methods in a more formal volatility forecasting framework rather than just a backtest, calibrating the parameters daily according to which are most effective.

 

Information on this website is provided by David Varadi, CFA, with all rights reserved. It has been prepared for informational purposes only and is not an offer to buy or sell any security, product or other financial instrument. All investments and strategies have risk, including loss of principal, and one cannot use graphs or charts alone in making investment decisions. The author(s) of any blogs or articles are principally responsible for their preparation and are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the view or opinions of others affiliated with our firm or its affiliates. Any conclusions or forward-looking statements presented are speculative and are not intended to predict the future or performance of any specific investment strategy. Any reprinted material is done with permission of the owner.

 

Adaptive Volatility

November 15, 2017

One of the inherent challenges in designing strategies is the need to specify certain parameters. Volatility parameters tend to work fairly well regardless of lookback, but there are inherent trade-offs to using short-term versus longer-term volatility: the former is more responsive to current market conditions while the latter is more stable. One approach is to use a range of lookbacks, which reduces the variance of the estimator or strategy- i.e., you have less risk of being wrong. The flip side is that you have not increased accuracy or reduced bias. Ultimately you want to avoid underfitting relevant features as much as you want to avoid overfitting random noise in the data. Forecasting volatility can help but is more complicated to implement and exchanges lookback parameters for a new set of parameters. Using market-based measures such as the options market has the fewest parameters and inherent assumptions, and can theoretically improve accuracy, but the data is not easily accessible and is more useful for individual equities than for macro markets.

 

An alternative approach is to create an "adaptive" volatility measure that varies its lookback as a function of market conditions. Using an exponential moving average framework, we can apply a transition function to some variable that helps us decide which conditions call for shorter or longer lookbacks. More specifically, we vary the smoothing constant or alpha of the EMA using a mathematical transform of a chosen predictor variable. The benefit of this approach is that it can potentially improve outcomes by switching between shorter and longer lookbacks as a function of market conditions, and it can be superior to picking a single parameter or a basket of multiple parameters. Furthermore, it can achieve a better trade-off between responsiveness and smoothness, which can lead to better outcomes when transaction costs become an issue.

 

 

How do we choose this predictor variable? There are two observations about volatility that can help us determine what to use:

 

  1. Volatility can be mean-reverting within a particular market regime- this favors longer lookbacks for volatility to avoid making inefficient and counterproductive changes in position size
  2. Volatility can trend higher or lower during a transition to a new market regime- this favors shorter lookbacks for volatility to rapidly respond by increasing or decreasing position size

We can't necessarily predict which regime we are in, so the simplest way to address these issues is to look at whether the market is trending or mean-reverting. One straightforward measure is the R-squared of the underlying price of a chosen market regressed against time. A high R-squared indicates a strong linear fit, or a high degree of trend, while the opposite indicates a rangebound or sideways market. If the market is trending (R-squared is high), then we want to shorten our lookbacks to ensure we can capture any sudden or abrupt changes in volatility. If the market is trendless or mean-reverting (R-squared is low), then we want to lengthen our lookbacks, since we would expect volatility to revert to its historical long-term mean.
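A minimal sketch of the rolling R-squared measure, assuming daily prices in a pandas Series. For a simple regression of price on time, R-squared is just the squared correlation between price and a time index over the window.

```python
import numpy as np
import pandas as pd

def rolling_r_squared(price: pd.Series, window: int = 20) -> pd.Series:
    """R-squared of a linear regression of price against time over a rolling
    window, computed as the squared rolling correlation with a time index."""
    time_index = pd.Series(np.arange(len(price)), index=price.index)
    return price.rolling(window).corr(time_index) ** 2
```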

Transition Function:

In order to translate the R-squared value into a smoothing constant (SC), or alpha, for an exponential moving average we need a mathematical transform. Since markets are lognormally distributed, an exponential function makes the most sense.

SC = EXP(-10 x (1 - R-squared(price vs. time, lookback)))

SC = MIN(SC, 0.5)

To get a more stable measure of R-squared we use a lookback of 20 days, but values between 15 and 60 days are all reasonable (shorter is noisier, longer has greater lag). The multiplier of -10 in the above formula means that when the trend is weak the measure defaults to an almost anchored or all-time lookback for historical volatility, which we expect to serve as an indication of "fair value" during periods in which volatility is mean-reverting. (Technically speaking, if the R-squared is zero then (2-SC)/SC gives an effective lookback of roughly 44,052 days.) By capping SC at 0.5 we are limiting the smoothing to a minimum effective lookback of 3 days (since SC = 2/(n+1)). Therefore the adaptive volatility metric can vary its effective lookback window between 3 days and all history.
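As a quick sanity check on those effective lookback numbers (a sketch, not from the post):

```python
import numpy as np

sc_floor = np.exp(-10.0)             # SC when R-squared = 0
print((2 - sc_floor) / sc_floor)     # effective lookback of roughly 44,052 days
print((2 - 0.5) / 0.5)               # effective lookback at the 0.5 cap: 3 days
```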

This smoothing constant is then used to take an exponential moving average of squared returns. To translate this into annualized volatility, take the square root of the final value and multiply by the square root of 252 trading days. We can compare this to the often-used 20-day realized volatility on the S&P 500 (SPY) to visualize the differences:
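Putting the pieces together, here is a minimal sketch of the full calculation under the assumptions above (20-day R-squared, multiplier of -10, SC capped at 0.5). The explicit loop is slow but makes the time-varying alpha easy to follow; function and variable names are my own.

```python
import numpy as np
import pandas as pd

def adaptive_volatility(price: pd.Series, r2_window: int = 20,
                        multiplier: float = -10.0, max_sc: float = 0.5) -> pd.Series:
    """Annualized adaptive volatility: an EMA of squared returns whose
    smoothing constant varies with the rolling R-squared of price vs. time."""
    returns = price.pct_change()
    time_index = pd.Series(np.arange(len(price)), index=price.index)
    r2 = price.rolling(r2_window).corr(time_index) ** 2
    sc = np.exp(multiplier * (1.0 - r2)).clip(upper=max_sc)  # transition function
    sq_ret = returns ** 2

    ema = pd.Series(np.nan, index=price.index)
    prev = sq_ret.iloc[1]                        # seed with the first squared return
    for i in range(1, len(price)):
        if not (np.isnan(sc.iloc[i]) or np.isnan(sq_ret.iloc[i])):
            prev = sc.iloc[i] * sq_ret.iloc[i] + (1 - sc.iloc[i]) * prev
        ema.iloc[i] = prev
    return np.sqrt(ema * 252)                    # annualize
```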

 


 

Considering that adaptive volatility uses a much longer average lookback than 20 days, we can see that it has comparable responsiveness during periods of trending volatility and stays flat or unchanged during periods of mean-reverting volatility. This leads to an ideal combination of greater accuracy and lower turnover. Even without considering transaction costs the results are impressive (note that leverage in the example below has not been constrained in order to isolate the pure differences):

 


The results show that adaptive volatility outperforms realized volatility, and while not shown, this is true across all realized lookback windows. Relative to 20-day realized volatility, adaptive volatility outperforms by 3% annually with the same standard deviation; factoring in transaction costs would widen this gap significantly. Risk-adjusted returns are higher, and more impressively, this comes with lower drawdowns even at the same level of volatility. This is due to the better combination of responsiveness and smoothness. In either case, I believe adaptive volatility is an area worth considering as an alternative research tool. One can come up with a variety of different predictors and transition formulas that may be superior; R-squared and the exponential transition function were chosen because they are straightforward and intuitive.

 

 

 

Risk Management and Dynamic Beta Podcast

August 4, 2017

I recently had the honor of speaking with Mebane Faber of Cambria Investment Management on his widely popular podcast, "The Mebane Faber Show," where I discussed risk management and applying a dynamic beta approach. The interview is almost an hour and covers a wide range of topics, whether you are a quant geek like myself or an investor.

Here is the link to the podcast.

Episode #64: David Varadi, “Managing Risk Is Absolutely Critical”

Welcome QuantX!

January 27, 2017


I am very proud to announce that readers can finally have access to products based on many of the quantitative ideas used in the blogosphere and published in academic research. Yesterday we launched five new ETFs through the QuantX brand (linked to Blue Sky Asset Management). They provide the building blocks to design customized portfolios with downside protection, as well as ETFs focused on enhanced stock selection. The funds follow familiar quantitative strategies in a tax-efficient and transparent ETF wrapper. You can check out our new QuantX website: http://www.quantxfunds.com/ and our recent press release: http://www.marketwired.com/press-release/blue-sky-asset-management-launches-the-quantx-family-of-etfs-2191355.htm

Now that we have gone through the long and arduous launch process, I will have more time to write about quantitative ideas and also some of the cool new concepts behind the funds!

Tracking the Performance of Tactical Strategies

September 8, 2016


There is a cool new website that tracks the performance of well-known tactical strategies. AllocateSmartly has collected an extensive list of strategies from well-known hedge fund managers like Ray Dalio along with several other portfolio managers and financial bloggers. The backtests for these strategies use a very detailed and comprehensive method that is both conservative and realistic: where possible, the author uses tradeable assets rather than indices and factors in transaction costs, along with careful treatment of dividends. The current allocations and performance are tracked in real time, which allows investors to realistically trade these portfolios. Curiously, the best performing model tracked on the website this year is the Minimum Correlation Algorithm from CSSA, which says a lot about the importance of diversification in 2016 relative to momentum and managing risk via trend-following/time-series momentum. In fact, if you dig deeper you will notice that most of the best performers have a structural or dynamic diversification element, while the worst performers have been the most concentrated and oriented toward identifying the best performers. As the website correctly points out, the diversification-oriented strategies tend to do well during normal market conditions, but the dynamic and more tactical strategies ultimately outperform during bear markets; over longer backtest periods, the more truly tactical performers had better long-term performance. Different market regimes will reward different approaches depending on how predictable and interrelated the markets happen to be that year. An umbrella is great for a rain storm but less than ideal for a sunny day. That is why it is important to understand the strategies you are following and why you are investing in them rather than blindly chase performance. While many quant developers and investors chase the best looking equity curves, it is important to consider two primary factors: 1) the utility curve that works best for any one individual is a very personal choice (i.e., risk/reward and tracking error), and 2) you need to choose a set of assumptions for capital markets, either going forward or over the long term: will returns, correlations or volatility be predictable, and if so, which will be the most predictable and why?

On a side note, I was informed that the very popular “A Simple Tactical Asset Allocation Strategy with Percentile Channels” by CSSA is also being added to the AllocateSmartly website very soon. This is a tactical and structural diversification hybrid that provides balanced factor risk with the ability to de-risk during market downturns. While it lacks the higher returns of more momentum-oriented or equity-centric strategies it provides a steady and low-risk profile across market conditions.

 

 

Disclosure: The author(s) principally responsible for the preparation of this material are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the view or opinions of others at BSAM or its affiliates. Any conclusions presented are speculative and are not intended to predict the future of any specific investment strategy. This material is based on publicly available data as of the publication date and largely dependent on third party research and information which we do not independently verify. We make no representation or warranty with respect to the accuracy or completeness of this material. One cannot use any graphs or charts, by themselves, to make an informed investment decision. Estimates of future performance are based on assumptions that may not be realized and actual events may differ from events assumed. BSAM is not acting as a fiduciary in presenting this material. Benchmark indices are presented or discussed for illustrative purposes only and do not account for deduction of fees and expenses incurred by investors.

The strategies discussed in this material may not be suitable for all investors. We urge you to talk with your investment adviser prior to making any investment decisions. Information taken from Minimum Correlation Algorithm strategy article is publicly available and used by a third party to generate the strategies and signals provided on AllocateSmartly.com. We have not reviewed and do not represent this information as accurately interpreted or utilized.