
Interview With a Pioneer: David Aronson

May 12, 2014


David Aronson is considered by many serious quants to be one of the first authors to rigorously address the subject of data-mining bias in trading system development. His popular book "Evidence-Based Technical Analysis" is a must-read for system developers. What most people do not know is that David was a pioneer in the use of machine learning in financial markets going back over four decades. Over that time he has become an expert in developing predictive models using trading indicators.

Recently he released the book Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments as a companion to the TSSB software, which implements all of the applications found in the book. The TSSB software is incredibly powerful, and the book does a good job of explaining the myriad applications that are possible. After reading the book, it became readily apparent that this software was the product of hundreds or even thousands of hours of very meticulous work that could only be shaped by a lifetime of experience working with machine learning. I recently had the chance to speak with David at length about a variety of topics:

 

What was the motivation behind TSSB?

The initial purpose for creating TSSB was as internal software for Hood River Research in our consulting work applying predictive analytics to financial markets. The bulk of our consulting work has been developing performance-boosting signal filters for existing trading systems. There was no software available that dealt successfully with the data-mining bias and overfitting problems associated with applying machine learning to financial market data. We decided to sell a guided tutorial for its use (Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments) to raise funds for its additional development. TSSB is made available for free.

What is it that got you interested in predictive modelling versus using traditional trading systems?

I started as a stock broker with Merrill Lynch in the 70's. I wanted to promote the money management services of Netcom, a CTA, but Merrill wouldn't permit that at the time. So I left to go out on my own and began analyzing the performance of all CTAs registered with the CFTC as of 1978. I started reading about statistical pattern recognition (what is now known as predictive analytics) after a prospect of mine from the aerospace industry suggested it might be valuable to apply to financial markets. Only two CTAs in my survey were using such methods, and so I thought there would be an edge to trading systems based on this approach. But in the late 1970's affordable computers were not quite up to the task. A precursor to Hood River Research was Raden Research Group, where we developed an early predictive-analytics software platform for financial market data called Prism. The software used a machine learning technique called kernel regression (GRNN), which predated the use of neural networks and the publication of papers on neural nets in the 1980's. However, like neural networks, some of the early methods had the problem of over-fitting the data, and few appreciated the statistical inference issues involved. Later I joined forces with Dr. Timothy Masters, a statistician, and TSSB was developed.

Why do you think conventional technical analysis is flawed from a statistical or theoretical standpoint?

The quality of the indicators used as inputs to a predictive model or a trading system is very important. Even the best conventional technical indicators contain only a small amount of predictive information; the vast majority is noise. Thus the task is to model that tiny amount of useful information in each indicator and combine it with the useful information in other indicators. Rules defined by a human analyst often miss potentially useful but subtle information.

Consistency is also an issue: experts are not consistent in their interpretation of multi-variable data, even when presented with the exact same information on separate occasions. Models, however they are developed, are by definition always consistent. I would also highlight that there is ample peer-reviewed research demonstrating that humans lack the configural thinking abilities needed to integrate multiple variables simultaneously, except under the most ideal conditions. In contrast, this is a task that is easily handled by quantitative models.
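As a simple illustration of that last point (my own sketch, not from the interview), a basic quantitative model applies the same weights to every observation when combining several weak, noisy indicators, so its reading of the evidence never drifts the way a discretionary read of the same charts can. The indicator construction below is entirely hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: a hidden driver moves next-bar direction, and each
# indicator captures only a small, noisy slice of that driver.
driver = rng.normal(size=n)
up_next_bar = (driver + rng.normal(size=n)) > 0      # prediction target

indicators = np.column_stack([
    0.2 * driver + rng.normal(size=n),   # weak indicator 1
    0.2 * driver + rng.normal(size=n),   # weak indicator 2
    0.2 * driver + rng.normal(size=n),   # weak indicator 3
])

# The fitted model applies the same weights to every observation, so its
# interpretation of the three indicators is identical every time it sees them.
model = LogisticRegression().fit(indicators[:1000], up_next_bar[:1000])

# Accuracy on the held-out half should come out modestly above 50%.
print("out-of-sample accuracy:", model.score(indicators[1000:], up_next_bar[1000:]))
```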

You wrote the book "Evidence-Based Technical Analysis." What are the challenges of identifying potentially profitable technical trading rules using conventional, or even state-of-the-art, statistical significance tests alone?

Standard statistical significance tests are fine when evaluating a single hypothesis. In the context of developing a trading system, this would be the case when the developer predefines all indicators, parameter values, rules, etc., and the system is never tweaked and retested. The challenge lies in trying to evaluate trading systems "discovered" after many variants of the system have been tested and the best-performing one is selected. This search, often called data mining, renders standard significance tests useless. Data mining is not a bad thing in and of itself. We all do it, either manually or in an automated fashion. The error is in failing to realize that specialized evaluation methods are required.
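To make that concrete, here is a small Monte Carlo sketch (my own illustration, not taken from the book or TSSB) of why a test that is valid for one pre-specified rule becomes misleading after a search: even on pure noise, the best of many tested variants looks highly "significant." It assumes only numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_variants = 1250, 500   # ~5 years of daily returns, 500 rule variants

# Pure-noise daily returns for each candidate rule variant: no real edge exists.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_variants, n_days))

# t-statistic of the mean return for each variant.
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_days))

print("t-stat of one pre-specified variant:", round(t_stats[0], 2))
print("t-stat of the best variant after the search:", round(t_stats.max(), 2))

# The first number behaves the way the textbook test assumes; the second is
# routinely above 3, which an unadjusted test would call highly significant
# even though every variant is pure noise.
```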

Another issue worth pointing out is that standard predictive modeling methods are guided by a criterion based on minimizing prediction errors, such as mean squared error, and these are not optimal for predictive models intended to be used for trading financial markets. It is possible for a model to have poor error reduction across the entire range of its forecasts and yet be profitable for trading, because when its forecasts are extreme they carry useful information. It is more appropriate to use financial measures such as the profit factor, which are all included as objective functions within TSSB.
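For reference, the profit factor is gross profit divided by gross loss over the trades taken. The toy sketch below (my own made-up numbers, not from TSSB) shows how a model with mediocre mean squared error can still trade well when only its extreme forecasts are acted on:

```python
import numpy as np

def profit_factor(trade_pnl):
    """Gross profit divided by gross loss over a set of trade P&Ls."""
    gross_profit = trade_pnl[trade_pnl > 0].sum()
    gross_loss = -trade_pnl[trade_pnl < 0].sum()
    return np.inf if gross_loss == 0 else gross_profit / gross_loss

# Made-up forecasts and realized returns: overall error is mediocre, but the
# extreme forecasts (the only ones traded) are mostly directionally right.
actual   = np.array([ 0.2, -0.1,  0.1, -0.3,  1.5, -1.2,  0.4])
forecast = np.array([-0.4,  0.5, -0.3,  0.4,  0.9, -0.8, -0.9])

mse_model = np.mean((forecast - actual) ** 2)
mse_zero  = np.mean(actual ** 2)             # baseline: always forecast zero

traded = np.abs(forecast) > 0.7              # act only on extreme forecasts
pnl = np.sign(forecast[traded]) * actual[traded]

print("model MSE:", round(mse_model, 3), "vs zero-forecast MSE:", round(mse_zero, 3))
print("profit factor of the traded subset:", round(profit_factor(pnl), 2))
```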

Yet a 3rd issue is the multiple hypothesis problem is encountered when building systems. Typically there is a search for the best indicators from an initial large set of candidates, a search for the best values of various tuning parameters, perhaps even a search for the best systems to include in a portfolio of trading systems. These searches are typically conducted via guided search where what is learned at step N is used to guide what is searched at step N+1.  Standard approaches to this problem  such as  White’s Reality Check and the one I discussed in Evidence Based Technical Analysis (Wiley 2006) fail for guided search.  Genetic algorithms and genetic programming, in fact all forms of machine learning that build multi-indicator trading systems use guided search.  One of the unique features of the TSSB software is that we have Permutation Training that does work for guided search machine learning.
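The details of TSSB's Permutation Training are beyond the scope of this post, but the basic idea behind permutation methods is easy to sketch. Below is a deliberately simple permutation test for a single, pre-specified rule; note that it does not address the guided-search case that TSSB's method is designed for:

```python
import numpy as np

rng = np.random.default_rng(2)

def rule_pnl(signal, returns):
    """Total P&L of a long/flat rule: hold the next bar whenever the signal is positive."""
    return np.where(signal[:-1] > 0, returns[1:], 0.0).sum()

# Hypothetical inputs: one pre-specified signal series and the market's bar returns.
signal = rng.normal(size=1000)
returns = rng.normal(scale=0.01, size=1000)

observed = rule_pnl(signal, returns)

# Null distribution: shuffling the returns destroys any real signal/return link,
# so re-scoring the same rule on shuffled data shows what luck alone produces.
perm_scores = np.array([rule_pnl(signal, rng.permutation(returns))
                        for _ in range(1000)])
p_value = (perm_scores >= observed).mean()

print("observed P&L:", round(observed, 3), "| permutation p-value:", round(p_value, 3))
```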

Which methods that most quantitative research analysts use are potentially the most dangerous/least likely to work, based upon your research? Which methods that most technical analysis gurus use are potentially the most dangerous/least likely to work?

Now that the statistical tools are so easy to use and there is so much free code (i.e., R, etc.), there is a lot of over-fitting and a lot of backtests that look great but don't generalize on out-of-sample data going forward. Empirical research on financial markets has only one set of historical data, and it is easy to abuse almost any type of methodology, including walk-forward testing. Using software such as TSSB makes it easier to avoid these issues. That said, there is no substitute for common sense or logic in selecting indicators to use or in building an intelligent model architecture. In my opinion, the way to differentiate or uncover real opportunities currently lies in the clever engineering of new features, such as better indicators.
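For readers unfamiliar with walk-forward testing, the basic mechanics are easy to sketch (this is a generic illustration, not TSSB's implementation). The abuse alluded to above usually comes from tweaking the system after peeking at the walk-forward results, which quietly turns the supposedly out-of-sample windows into part of the search:

```python
import numpy as np

def walk_forward_splits(n_bars, train_len, test_len):
    """Yield (train_indices, test_indices) for successive walk-forward windows."""
    start = 0
    while start + train_len + test_len <= n_bars:
        train = np.arange(start, start + train_len)
        test = np.arange(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len            # slide forward by one test window

# Example: 2500 bars, re-fit on the prior 1000, trade the next 250, then roll forward.
for train_idx, test_idx in walk_forward_splits(2500, 1000, 250):
    print(f"fit on bars {train_idx[0]}-{train_idx[-1]}, "
          f"trade bars {test_idx[0]}-{test_idx[-1]}")
```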

Why are the TSSB indicators superior to the conventional indicators that most traders tend to look at? What advantages do the TSSB indicators have that are unique?

Many of the indicators included in the TSSB indicator library, which number over 100, have been transformed or re-scaled for consistency across markets. This is crucial for cross-sectional analysis. Some utilize non-linear fitting methods on the underlying variables to produce unique outputs. We have also included a wide variety of unique indicators such as Morlet wavelets, some proprietary third-party indicators such as FTI (the Follow-Through Index developed by Khalsa), as well as some published indicators, such as the financial turbulence indicator by Kritzman, that we found to be unique or valuable.
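As one example of the kind of re-scaling being described (my guess at a common approach, not necessarily what TSSB does internally), a raw indicator can be converted to a rolling percentile rank so its values are comparable across instruments that trade at very different price levels and volatilities:

```python
import numpy as np

def rolling_percentile_rank(x, window=100):
    """Replace each indicator value with its percentile (0-100) within a trailing window."""
    out = np.full(len(x), np.nan)
    for i in range(window, len(x)):
        out[i] = 100.0 * (x[i - window:i] < x[i]).mean()
    return out

rng = np.random.default_rng(3)
# Two hypothetical markets whose raw indicator values live on very different scales.
raw_a = rng.normal(loc=2000.0, scale=50.0, size=500)   # e.g., an equity index level
raw_b = rng.normal(loc=4.5, scale=0.3, size=500)       # e.g., a grain future price

# After ranking, both series live on the same 0-100 scale and can be pooled
# for cross-sectional analysis.
print(np.nanmean(rolling_percentile_rank(raw_a)),
      np.nanmean(rolling_percentile_rank(raw_b)))
```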

Thank you, David.

Comments
  1. May 13, 2014, 5:42 am

    Great interview with a true pioneer on the issues of data mining of financial time series. This issue of data mining and more specifically “insidious over-fitting” is, from my observations, rampant in many otherwise credible data analysis problems – especially in finance but also in many other areas of business, often leading to bad predictions and bad business (or investment) decisions.

    Thanks for posting! 🙂

