CAGR and Performance Measurement
CAGR, or the compound annualized growth rate, often gets a bad rap in performance evaluation. In truth, for systems without leverage, CAGR is one of the best-performing optimization objective functions in terms of out-of-sample performance, better than the Sharpe ratio based on our own testing across multiple markets. This observation is consistent with several backtests showing that a ROC-based (i.e., CAGR) relative strength strategy tends to outperform more complex risk-adjusted measures. Furthermore, high-CAGR systems often carry a built-in bias toward simplicity: it is very difficult to achieve a high CAGR with multiple rules that winnow down the observation set.
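For reference, here is a minimal sketch of CAGR as an objective function over an equity curve. The function name, the 252-trading-day annualization convention, and the sample curve are illustrative assumptions, not anything from the testing described above:

```python
import numpy as np

def cagr(equity, periods_per_year=252):
    """Compound annualized growth rate of an equity curve.

    equity: array of portfolio values sampled once per period (e.g. daily).
    Hypothetical helper for illustration; 252 trading days/year assumed.
    """
    n_years = (len(equity) - 1) / periods_per_year
    return (equity[-1] / equity[0]) ** (1.0 / n_years) - 1.0

# A synthetic curve compounding at exactly 10% per year recovers CAGR = 0.10
daily_growth = 1.10 ** (1 / 252)
equity = daily_growth ** np.arange(253)
```

An optimizer would simply rank candidate parameter sets by `cagr(equity)` of their backtest curves.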
Putting aside concerns about the lack of a risk-adjusted component, the CAGR measure itself is fraught with instability. Any experienced system developer who has tried Monte Carlo testing will recognize that CAGR can vary widely depending on the window of measurement. Shift the start of a 10-year backtest even 10-20 days forward or backward and the CAGR can change by as much as 3-5%, depending on the system. This is a consequence of daily measurement intervals and compounding. The implication of this noise embedded in CAGR measurement is that qualitative comparisons between strategies can draw incorrect conclusions, and walk-forward testing will not produce the desired results. A more robust way to measure CAGR is to use an average of smaller compounded return samples, such as the monthly CAGR. This measurement is considerably smoother and will produce similar results regardless of your start/end dates. While this method introduces lag, utilizing a measure of the delta or acceleration in a moving average of CAGR can produce the best of both worlds. I will leave these suggestions for researchers to investigate.
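The window-shift instability and the monthly-averaging alternative can both be sketched on simulated data. The random return stream, the 21-trading-day month approximation, and all variable names below are assumptions for demonstration only, not a definitive implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated ~10-year daily return stream (illustrative, not real market data)
daily_returns = rng.normal(0.0004, 0.01, 2520)
equity = np.cumprod(1.0 + daily_returns)

def cagr(eq, periods_per_year=252):
    """Point-to-point compound annualized growth rate."""
    n_years = (len(eq) - 1) / periods_per_year
    return (eq[-1] / eq[0]) ** (1.0 / n_years) - 1.0

# Point-to-point CAGR moves when the window start shifts by ~20 trading days
full_window = cagr(equity)
shifted_window = cagr(equity[20:])

# Smoother alternative: annualize the average compounded monthly return
# (~21 trading days per month is a rough assumption)
month_ends = equity[::21]
monthly_returns = month_ends[1:] / month_ends[:-1] - 1.0
avg_monthly_cagr = (1.0 + monthly_returns.mean()) ** 12 - 1.0
```

Because dropping or adding a handful of months barely moves the mean of ~120 monthly samples, `avg_monthly_cagr` is far less sensitive to the choice of start/end dates than the point-to-point figure; a rolling version of it, and its first difference, would approximate the delta/acceleration measure suggested above.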