
Interview With a Pioneer: David Aronson

May 12, 2014


David Aronson is considered by many serious quants to be one of the first authors to seriously address the subject of data-mining bias in trading system development. His popular book "Evidence-Based Technical Analysis" is a must-read for system developers. What most people do not know is that David was a pioneer in the use of machine learning in financial markets, going back over four decades. Over that time he has become an expert in developing predictive models using trading indicators.

Recently he released the book Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments as a companion to the TSSB software, which implements all of the applications found in the book. The TSSB software is incredibly powerful, and the book does a good job of explaining the myriad applications that are possible. Reading the book makes it readily apparent that this software is the product of hundreds or even thousands of hours of very meticulous work that could only be shaped by a lifetime of experience working with machine learning. I recently had a chance to speak with David at length about a variety of topics:

 

What was the motivation behind TSSB?

The initial purpose for creating TSSB was as internal software for Hood River Research in our consulting work applying predictive analytics to financial markets. The bulk of our consulting work has been developing performance-boosting signal filters for existing trading systems. There was no software available that dealt successfully with the data-mining bias and overfitting problems associated with applying machine learning to financial market data. We decided to sell a guided tutorial for its use (Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments) to raise funds for its additional development. TSSB itself is made available for free.

What is it that got you interested in predictive modelling versus using traditional trading systems?

I started as a stock broker with Merrill Lynch in the 70's. I wanted to promote the money management services of Netcom, a CTA, but Merrill wouldn't permit that at the time. So I left to go out on my own and began analyzing the performance of all CTAs registered with the CFTC as of 1978. I started reading about statistical pattern recognition (what is now known as predictive analytics) after a prospect of mine from the aerospace industry suggested it might be valuable to apply to financial markets. Only two CTAs in my survey were using such methods, so I thought there would be an edge to trading systems based on this approach. But in the late 1970's affordable computers were not quite up to the task. A precursor to Hood River Research was Raden Research Group, where we developed an early predictive-analytics software platform for financial market data called Prism. The software used a machine learning technique called kernel regression (GRNN), which predated the use of neural networks and the publication of papers on neural nets in the 1980's. However, like neural networks, some of the early methods had the problem of over-fitting the data, and few appreciated the statistical inference issues involved. Later I joined forces with Dr. Timothy Masters, a statistician, and TSSB was developed.

Why do you think conventional technical analysis is flawed from a statistical or theoretical standpoint?

The quality of the indicators used as inputs to a predictive model or a trading system is very important. Even the best conventional technical indicators contain only a small amount of predictive information; the vast majority is noise. The task, therefore, is to model that tiny amount of useful information in each indicator and combine it with the useful information in other indicators. Rules defined by a human analyst often miss potentially useful but subtle information.

Consistency is also an issue: experts are not consistent in their interpretation of multi-variable data, even when presented with the exact same information on separate occasions. Models, however they are developed, are by definition always consistent. I would also highlight that there is ample peer-reviewed research demonstrating that humans lack the configural thinking abilities needed to integrate multiple variables simultaneously, except under the most ideal conditions. In contrast, this is a task that is easily handled by quantitative models.

You wrote the book "Evidence-Based Technical Analysis." What are the challenges of identifying potentially profitable technical trading rules using conventional, or even state-of-the-art, statistical significance tests alone?

Standard statistical significance tests are fine when evaluating a single hypothesis. In the context of developing a trading system, this would be the case when the developer predefines all indicators, parameter values, rules, etc., and never tweaks and retests them. The challenge lies in evaluating trading systems "discovered" after many variants of the system have been tested and the best-performing one selected. This search, often called data mining, renders standard significance tests useless. Data mining is not a bad thing in and of itself; we all do it, either manually or in an automated fashion. The error is in failing to realize that specialized evaluation methods are required.

Another issue worth pointing out is that standard predictive modeling methods are guided by a criterion based on minimizing prediction errors, such as mean squared error, and these are not optimal for predictive models intended to be used for trading financial markets. It is possible for a model to have poor error reduction across the entire range of its forecasts while still being profitable for trading, because when its forecasts are extreme they carry useful information. It is more appropriate to use financial measures such as the profit factor, all of which are included as objective functions within TSSB.
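As a rough illustration of the distinction (this is not TSSB code; the synthetic data and the decile trading thresholds are my own assumptions), the snippet below computes a model's mean squared error alongside a trading-oriented profit factor, defined as gross profits divided by gross losses on the trades the forecasts actually trigger:

import numpy as np

rng = np.random.default_rng(0)
realized = rng.normal(0, 0.01, 1000)                     # realized daily returns
forecasts = 0.1 * realized + rng.normal(0, 0.01, 1000)   # noisy model forecasts

mse = np.mean((forecasts - realized) ** 2)               # error-based criterion

# Trade only on extreme forecasts (top/bottom decile), long or short
hi, lo = np.quantile(forecasts, [0.9, 0.1])
positions = np.where(forecasts >= hi, 1, np.where(forecasts <= lo, -1, 0))
trade_pnl = positions * realized
gross_profit = trade_pnl[trade_pnl > 0].sum()
gross_loss = -trade_pnl[trade_pnl < 0].sum()
profit_factor = gross_profit / gross_loss                # trading-oriented criterion

print(f"MSE: {mse:.6f}  Profit factor: {profit_factor:.2f}")

A model can score poorly on the first number and still look attractive on the second if its extreme forecasts are informative, which is the point David makes above.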

Yet a third issue is the multiple-hypothesis problem encountered when building systems. Typically there is a search for the best indicators from an initial large set of candidates, a search for the best values of various tuning parameters, and perhaps even a search for the best systems to include in a portfolio of trading systems. These searches are typically conducted via guided search, where what is learned at step N is used to guide what is searched at step N+1. Standard approaches to this problem, such as White's Reality Check and the one I discussed in Evidence-Based Technical Analysis (Wiley, 2006), fail for guided search. Genetic algorithms and genetic programming, in fact all forms of machine learning that build multi-indicator trading systems, use guided search. One of the unique features of the TSSB software is its Permutation Training, which does work for guided-search machine learning.
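To make the idea of specialized evaluation concrete, here is a deliberately simplified single-stage permutation test in Python. It is not TSSB's Permutation Training (which handles guided search); it only shows the basic logic of re-running an entire search for the best rule on permuted data to estimate how often noise alone produces an equally good result. The moving-average rule family and all parameter choices are illustrative assumptions.

import numpy as np

def best_rule_return(returns, lookbacks):
    """Search a family of moving-average rules; return the best mean next-day return."""
    prices = np.cumsum(returns)                 # proxy price path from returns
    best = -np.inf
    for lb in lookbacks:
        ma = np.convolve(prices, np.ones(lb) / lb, mode="valid")
        signal = (prices[lb - 1:] > ma).astype(int)   # long when price is above its MA
        perf = np.mean(signal[:-1] * returns[lb:])    # signal applied to the next return
        best = max(best, perf)
    return best

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 2000)             # replace with real market returns
lookbacks = range(10, 210, 10)

observed = best_rule_return(returns, lookbacks)

# Re-run the *entire search* on permuted returns to estimate data-mining bias
n_perm = 200
count = sum(best_rule_return(rng.permutation(returns), lookbacks) >= observed
            for _ in range(n_perm))
p_value = (count + 1) / (n_perm + 1)
print(f"Best in-sample mean return: {observed:.5f}, permutation p-value: {p_value:.3f}")

The key design point is that the search itself, not just the single winning rule, is repeated on each permutation; evaluating only the winner with a standard test is exactly the mistake described above.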

Which methods that most quantitative research analysts use are potentially the most dangerous/least likely to work, based upon your research? Which methods that most technical analysis gurus use are potentially the most dangerous/least likely to work?

Now that statistical tools are so easy to use and there is so much free code (i.e., R, etc.), there is a lot of over-fitting and a lot of backtests that look great but don't generalize on out-of-sample data going forward. Empirical research on financial markets has only one set of historical data, and it is easy to abuse almost any type of methodology, including walk-forward testing. Using software such as TSSB makes it easier to avoid these issues. That said, there is no substitute for common sense or logic in selecting indicators to use or building an intelligent model architecture. In my opinion, the way to differentiate or uncover real opportunities currently lies in the clever engineering of new features, such as better indicators.

Why are the TSSB indicators superior to the conventional indicators that most traders tend to look at? What advantages do the TSSB indicators have that are unique?

Many of the indicators included in the TSSB indicator library, which number over 100, have been transformed or re-scaled for consistency across markets. This is crucial for cross-sectional analysis. Some utilize non-linear fitting methods on the underlying variables to produce unique outputs. We have also included a wide variety of unique indicators such as Morlet wavelets, some proprietary third-party indicators such as FTI (the Follow-Through Index developed by Khalsa), as well as some published indicators, like the financial turbulence indicator by Kritzman, that we found to be unique or valuable.

Thank you, David.

Adaptive Portfolio Allocations

May 6, 2014

I wrote a paper with a colleague, Jason Teed, for the NAAIM competition. The concept was to apply basic machine-learning algorithms to generate adaptive portfolio allocations using traditional inputs such as returns, volatility, and correlations. In contrast to the seminal works on Adaptive Asset Allocation (Butler, Philbrick, Gordillo), which focused on creating allocations that adapted to changing historical inputs over time, our paper on Adaptive Portfolio Allocations (APA) focuses on how to adaptively integrate these changing inputs rather than relying on an established theoretical framework. The paper can be found here: Adaptive Portfolio Allocations Paper. A lot of other interesting papers were submitted to the NAAIM competition, and the rest of them can be found here.

The method APA uses to integrate these portfolio inputs into a final set of portfolio weights is not theory- or model-driven like MPT; instead, it is based upon learning how the inputs interact to produce optimal portfolios from a Sharpe ratio perspective. The results show that a traditional mean-variance/Markowitz/MPT framework under-performs this simple framework in terms of maximizing the Sharpe ratio. The data further implies that traditional MPT makes far too many trades and takes on too many extreme positions as a function of how it generates portfolio weights. This occurs because the inputs, especially the returns, are very noisy and may also demonstrate non-linear or counter-intuitive relationships. In contrast, by learning how the inputs map historically to optimal portfolios at the asset level, the resulting allocations drift in a more stable manner over time. The simple learning framework proposed can be substantially extended into a more elegant framework to produce superior results to those in the paper. The methodology for multi-asset portfolios was limited to an aggregation of pairwise portfolio allocations for the sake of simplicity for readers. A minimal sketch of the general idea appears below.

The paper didn't win (or even place, for that matter), but like many contributions made on this blog it was designed to inspire new ideas rather than sell cookie-cutter solutions or sound too official or academic. At the end of the day there is no simple ABC recipe or trading system that can survive indefinitely in the ever-changing nature of the markets. There is no amount of rigor, simulation, or sophistication that is ever going to change that. As such, the hope was to provide insight into how to harness a truly adaptive approach for the challenging task of making portfolio allocations.
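For the curious, here is a minimal sketch of the general idea, not the paper's actual code: map trailing returns, volatilities, and correlation for a pair of assets to the pairwise weight that would have maximized the realized Sharpe ratio, then learn that mapping with a regressor. The random-forest learner, the synthetic data, and the window lengths are my own simplifying assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days, lookback, horizon = 2500, 60, 20
ret_a = rng.normal(0.0004, 0.010, n_days)    # stand-in daily returns for asset A
ret_b = rng.normal(0.0002, 0.006, n_days)    # stand-in daily returns for asset B

weight_grid = np.linspace(0, 1, 21)          # candidate weights for asset A

X, y = [], []
for t in range(lookback, n_days - horizon, horizon):
    win_a, win_b = ret_a[t - lookback:t], ret_b[t - lookback:t]
    # Features: trailing returns, volatilities, and correlation
    X.append([win_a.mean(), win_b.mean(), win_a.std(), win_b.std(),
              np.corrcoef(win_a, win_b)[0, 1]])
    # Label: the pairwise weight that maximized realized Sharpe over the next horizon
    fut_a, fut_b = ret_a[t:t + horizon], ret_b[t:t + horizon]
    sharpes = [(w * fut_a + (1 - w) * fut_b).mean() /
               ((w * fut_a + (1 - w) * fut_b).std() + 1e-9) for w in weight_grid]
    y.append(weight_grid[int(np.argmax(sharpes))])

X, y = np.array(X), np.array(y)
split = int(0.7 * len(X))                    # train on the first 70%, predict the rest
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])              # learn the inputs-to-weights mapping
adaptive_weights = model.predict(X[split:])  # out-of-sample allocations for asset A
print(adaptive_weights[:5])

The learned weights tend to move gradually as the input features drift, which is the stability property discussed above; multi-asset portfolios in the paper were built by aggregating these pairwise allocations.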

Probabilistic Absolute Momentum (PAM)

March 3, 2014

In the last post on Probabilistic Momentum, we introduced a simple method to transform a standard momentum strategy into a probability distribution to create confidence thresholds for trading. The spreadsheet used to replicate this method can be found here. This framework is intellectually superior to a binary comparison between two assets because the tracking error of choosing one versus the other is not symmetric across momentum opportunities. The opportunity cost of choosing one asset versus another is embedded in this framework, and using a confidence threshold greater than 50% helps to standardize the risk of momentum decisions across different pairings (for example, using momentum with stocks and bonds is riskier than with, say, the S&P 500 and the Nasdaq).

The same concept can be used to create an absolute momentum methodology, a concept introduced by Gary Antonacci of Optimal Momentum in a paper here. The general idea, for those who are not familiar, is that you can use the relative momentum between a target asset, say equities, and some low-risk asset such as T-bills or short-term treasurys (cash) to generate switching decisions between the target and cash. This can be used instead of applying a simple moving average strategy to the underlying asset. Here we apply the same approach used for Probabilistic Momentum, pairing a short-term treasury ETF such as SHY with some target asset, to create a Probabilistic Absolute Momentum strategy (PAM). For this example I used the Nasdaq (QQQ) and 1-3 year treasurys (SHY) over the maximum period of time when both had history available (roughly 2800 bars). I chose 60% as the confidence threshold to switch between QQQ and SHY, and the momentum lookback window chosen was 120 days. We did not assume any trading costs in this case, but that would favor PAM even more. A rough sketch of the switching logic appears below, followed by a chart of the historical transitions using the probabilistic approach (PAM) versus a simple absolute momentum approach:
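This sketch (pandas/scipy) is an approximation of the logic described above, not the original spreadsheet; the data-loading step, the column names, and the scaling of the information ratio into a t-statistic are my assumptions and may differ from the actual implementation in details.

import numpy as np
import pandas as pd
from scipy import stats

def pam_signal(target_ret, cash_ret, lookback=120, confidence=0.60):
    """1 = hold the target asset (e.g. QQQ), 0 = hold the cash proxy (e.g. SHY)."""
    excess = target_ret - cash_ret
    mean_ex = excess.rolling(lookback).mean()
    te = excess.rolling(lookback).std()
    # Information ratio of target vs cash proxy; scaling by sqrt(lookback) to form
    # a t-statistic is an assumption, the original may feed the raw ratio instead
    t_stat = mean_ex / te * np.sqrt(lookback)
    prob = pd.Series(stats.t.cdf(t_stat, df=lookback - 1), index=target_ret.index)

    signal = pd.Series(np.nan, index=target_ret.index)
    signal[prob > confidence] = 1.0        # confident the target beats cash: invested
    signal[prob < 1 - confidence] = 0.0    # confident cash beats the target: in cash
    return signal.ffill().fillna(0.0)      # inside the buffer zone, keep the prior state

# Hypothetical usage with QQQ/SHY adjusted closes loaded into a DataFrame:
# prices = pd.read_csv("qqq_shy.csv", index_col=0, parse_dates=True)
# rets = prices.pct_change().dropna()
# pos = pam_signal(rets["QQQ"], rets["SHY"]).shift(1).fillna(0)   # trade next bar
# pam_returns = pos * rets["QQQ"] + (1 - pos) * rets["SHY"]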

PAM 1

 

Here is the performance breakdown of applying this strategy:

PAM2

 

Here we see that Probabilistic Absolute Momentum reduces the number of trades by over 80%, from 121 to 23. The raw performance is improved by almost 2%, and the Sharpe ratio is improved by roughly 15%. More importantly, from a psychological standpoint, PAM is much easier to stick with as a discretionary trader or even as a quantitative portfolio manager. It eliminates a lot of the costly whipsaws that result from trying to switch between being invested and being in cash. It also makes it easier to overlay an absolute momentum strategy on a standard momentum strategy, since there is less interference from the cash decision.

Spreadsheet Correction- Probabilistic Momentum

February 12, 2014

The spreadsheet below is missing a minus sign in the formula, which is highlighted below in red. The formula in cell F3 should read:

=IF(E7>0,1-TDIST(E7,COUNT(C2:C61),1),TDIST(-E7,COUNT(C2:C61),1))
This error was not built into our code; it was copied incorrectly into Excel. The corrected sheet can be found here:
probabilistic momentum worksheet
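For anyone replicating the sheet outside Excel, here is a rough Python equivalent of the corrected formula. The function name and the assumption that E7 holds the test statistic and C2:C61 the observations are mine, not part of the original worksheet.

from scipy import stats

def excel_probability(t_stat, n):
    """Equivalent of the corrected cell formula:
    =IF(E7>0, 1-TDIST(E7, COUNT(C2:C61), 1), TDIST(-E7, COUNT(C2:C61), 1))
    Excel's one-tailed TDIST(x, df, 1) is the right-tail probability P(T > x),
    so both branches reduce to the cumulative t-distribution evaluated at t_stat.
    The df argument mirrors the spreadsheet's COUNT(C2:C61).
    """
    return stats.t.cdf(t_stat, df=n)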

Probabilistic Momentum Spreadsheet

February 12, 2014

In the last post, I introduced the concept of viewing momentum as the probability that one asset will outperform the other, rather than a binary decision driven by whichever return is greater between a pair of assets. This method incorporates the joint distribution between two assets, factoring in their variance and covariance. The difference in the two mean returns is compared to the tracking error between the two assets to compute the information ratio. This ratio is then converted to a probability via the t-distribution to provide a more intelligent, confidence-based buffer that avoids costly switching. A good article by Corey Hoffstein at Newfound discusses a related concept here. Many readers have inquired about a spreadsheet example for probabilistic momentum, which can be found here: probabilistic momentum worksheet.

Are Simple Momentum Strategies Too Dumb? Introducing Probabilistic Momentum

January 28, 2014

Momentum

Momentum remains the most cherished and frequently used strategy for tactical investors and quantitative systems. Empirical support for momentum as a ubiquitous anomaly across global financial markets is virtually iron-clad, supported by even the most skeptical high priests of academic finance. Simple momentum strategies seek to buy the best performers by comparing the average or compound return between two assets or a group of assets. The beauty of this approach is its inherent simplicity; from a quantitative standpoint, this increases the chances that a strategy will be robust and work in the future. The downside is that it fails to capture some important pieces of information, which can lead to 1) incorrect preferences, 2) greater susceptibility to random noise, and 3) dramatically magnified trading costs.

Consider the picture of the two horses above. If we are watching a horse race and trying to determine which horse will take the lead over some time interval (say the next 10 seconds), our simplest strategy is to pick the horse that is currently winning. For those of you who have watched a horse race, two horses that are close will frequently shift positions in taking the lead. Sometimes they will alternate (negatively correlated) and other times they will accelerate and slow down at the same time (correlated). Certain horses tend to be less consistent and are prone to bursts of speed followed by a more measured pace (high volatility), while others are very steady (low volatility). Depending on how the horses are paired together, it may be difficult to accurately pick which one will lead using simple momentum alone. Intuitively, the human eye can notice that one horse leads the other with consistent performance; despite occasionally shifting positions, these shifts are small and the leading horse is clearly building a significant lead. Ultimately, to determine whether one horse or one stock is outperforming the other, we need to capture the relationship between the two and their relative noise, in addition to a simple measure of distance versus time.

In terms of investing, what we really want to know is the probability or confidence that one asset will outperform the other. If the odds of outperformance are only 51%, for example, this is not much better than flipping a coin, and it is unlikely that the two assets are statistically different from one another in that context. But how do we determine such a probability as it relates to momentum? Suppose we have assets A and B and want to determine the probability that A will outperform B. This implies that B will serve as an index or benchmark for A. From the standard finance curriculum, we know that the Information Ratio is an easy way to capture relative returns in relation to the risk versus some benchmark. It is calculated as:

IR = (Rp - Ri) / Sp-i

 

where Rp = the return on the portfolio or asset in question,

Ri = the return on the index or benchmark, and

Sp-i = the tracking error of the portfolio versus the benchmark.

The next question is how we translate this into a probability. Typically one would use a normal distribution, with the information ratio (IR) as an input. However, the normal distribution is only appropriate for large sample sizes. For the smaller sample sizes that are typical of momentum lookbacks, it is more appropriate to use a t-distribution. Thus:

Probabilistic Momentum (A vs B) = Tconf(IR)

Probabilistic Momentum (B vs A) = 1 - Probabilistic Momentum (A vs B)

This number for A vs B is subtracted from 1 if the information ratio is positive and kept as is if the information ratio is negative. The degrees of freedom are equal to the number of periods in the lookback minus one. In one neat calculation we have compressed the momentum measurement into a probability, one that incorporates the correlation and relative volatility of the two assets as well as their momentum. This allows us to make more intelligent momentum trades while also avoiding excessive transaction costs. The next aspect of probabilistic momentum is to make use of the concept of hysteresis. Since markets are noisy, it is difficult to tell whether one asset is truly outperforming the other. One effective filter is to avoid switching between two boundaries: switch assets only when the confidence that the other asset is better exceeds a certain threshold. For example, if I specify a confidence level of 60%, I will switch only when the other asset has at least a 60% probability of outperforming the one currently held. This leaves a buffer zone of 20% (2 x (60% - 50%)) to avoid noise in making the switch. The result is a smooth transition from one asset to the other. A sketch of the calculation and the switching filter appears below, followed by a look at how probabilistic momentum behaves versus a simple momentum scheme that uses just the relative return to make the switch between assets:
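Here is a minimal sketch of the probability calculation and the hysteresis filter described above. It is not the author's code: the sqrt-of-lookback scaling that turns the information ratio into a t-statistic, and the choice of the starting asset, are my assumptions about details the post does not spell out.

import numpy as np
from scipy import stats

def probabilistic_momentum(ret_a, ret_b, lookback=60, confidence=0.60):
    """Return an array of positions: 1 = hold asset A, 0 = hold asset B."""
    n = len(ret_a)
    positions = np.zeros(n)
    current = 1                                   # start in asset A (arbitrary choice)
    positions[:lookback] = current                # no signal during the initial window
    for t in range(lookback, n):
        diff = ret_a[t - lookback:t] - ret_b[t - lookback:t]
        ir = diff.mean() / (diff.std(ddof=1) + 1e-12)
        prob_a = stats.t.cdf(ir * np.sqrt(lookback), df=lookback - 1)
        # Hysteresis: only switch when the *other* asset clears the confidence threshold
        if current == 1 and (1 - prob_a) > confidence:
            current = 0
        elif current == 0 and prob_a > confidence:
            current = 1
        positions[t] = current
    return positions

# Hypothetical usage with SPY and TLT daily returns as numpy arrays:
# pos = probabilistic_momentum(spy_returns, tlt_returns, lookback=60, confidence=0.60)
# lagged = np.concatenate(([0.0], pos[:-1]))        # trade on the next bar
# strat_returns = np.where(lagged == 1, spy_returns, tlt_returns)

The stateful loop is what creates the buffer zone: inside the band between confidence and 1 - confidence, the position simply stays where it is, which is what suppresses the whipsaw trades of the simple momentum comparison.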

Probabilistic Momentum 1

 

Notice that the switches between trading SPY and TLT (S&P 500 and Treasurys) using probabilistic momentum are much smoother than with simple momentum. The timing of the trades also appears superior in many cases. Now let's look at a backtest of probabilistic momentum with a 60-day lookback and a 60% confidence level versus a simple momentum system, both on SPY and TLT.

Probabilistic Momentum 2

 

As you can see, using probabilistic momentum manages to 1) increase return, 2) dramatically reduce turnover, and 3) increase the Sharpe ratio of return to risk. This is accomplished gross of trading costs; comparisons net of a reasonable trading cost are even more compelling. From a client standpoint, there is no question that fewer trades (especially avoiding insignificant trades that fail to capture the right direction) are also highly appealing, putting aside the obvious tax implications of more frequent trading. Is this concept robust? On average, across a wide range of pairs and time frames, the answer is yes. For example, here is a broad sample of lookbacks for SPY vs TLT:

Probabilistic Momentum

 

In this example, probabilistic momentum outperforms simple momentum over virtually all lookbacks, with an incredible edge of over 2% CAGR. Turnover is reduced by an average of almost 70%. The Sharpe ratio is on average roughly 0.13 higher for probabilistic versus simple. While this comparison is by no means conclusive, it shows the power of the approach. There are a few caveats: 1) The confidence threshold is a parameter that needs to be determined, although most values work well. Larger thresholds create greater lag and fewer trades, and vice versa, and this tradeoff needs to be managed; as a guide, for shorter lookbacks under 30 days a larger threshold (75%, or as high as 95% for very short time frames) is more appropriate, while for longer lookbacks a confidence level between 55% and 75% tends to work better. 2) The trendier one asset is versus the other, the smaller the advantage of using a large confidence level; this makes sense, since perfect prediction would imply no need for a filter to switch. 3) Distribution assumptions: this is a long and boring story for another day.

This method of probabilistic momentum has a lot of potential extensions and applications. It also requires some additional complexity to integrate into a multi-asset context. But it is conceptually and theoretically appealing, and preliminary testing shows that even in its raw form there is substantial added value, especially when transaction costs are factored in.

NAAIM Wagner Award 2014

January 16, 2014

To all readers: I would encourage you to consider entering this year's NAAIM Wagner Award. This prestigious prize has been awarded to several well-recognized industry practitioners in the past and has served to boost the careers of many new entrants in the field. Details for submission can be found below:

The National Association of Active Investment Managers, or NAAIM, is holding its sixth annual Wagner Award competition for advancements in active investment management. The competition is open to readers like you who are academic faculty, graduate students, investment advisors, analysts, and other financial professionals.

The first place winner will receive a $10,000 prize, plus an invitation to present the winning paper at NAAIM's national conference in May (free conference attendance, domestic air travel, and hotel accommodations will also be provided). Second and third place winners may also be eligible for monetary prizes of $3,000 and $1,000, respectively.

To find out more about NAAIM or to apply for the Wagner Award competition, visit NAAIM's website, http://www.naaim.org, and look for the Wagner Award page in the Resources section.

 

For more information:

http://www.naaim.org/resources/wagner-award-papers/

 

To download the application:

http://www.naaim.org/wp-content/uploads/2013/10/Call-for-Papers-2014_FNL.pdf

 

 

All the best,

Greg Morris, NAAIM 2014 Wagner Award Chairman

Tel. 888-261-0787

info@naaim.org

http://www.naaim.org

 
