# Introducing Composite Mean Reversion and Trend Following Measures: The Aggregate “M” Indicator

**Note:** *Apparently there have been some questions about how to calculate Aggregate M; please email us at **dvindicators@gmail.com** to receive the spreadsheet later this weekend. Note that we use a cleaner data set than Yahoo Finance, adjusted for splits and dividends with verified closing prices. I have been made aware that backtest results using Yahoo data differ from ours (we calculated the same values as our readers when using Yahoo Finance), and that is because their data is not entirely accurate.*

The Aggregate M indicator is based on the concept that in the long term the market trends, while in the short term the market is noisy and has a tendency to mean-revert. Why not combine the two concepts to keep life simple? The Aggregate M is meant to reflect an adjusted median that is filtered for short-term noise. The median is a far more accurate measure of central tendency than a simple average, especially with noisy data. Taking a superior measure of trend and filtering out some of the noise by adjusting for short-term mean reversion creates an even better median. The Aggregate M is thus both trend and mean reversion rolled into one. In the example below, the Aggregate M is simply the average of 1) the 252-day PERCENTRANK of the High, Low and Close values and 2) the 10-day (1 - PERCENTRANK) of the High, Low and Close values. This average is smoothed using a 0.6 weight on today and a 0.4 weight on yesterday. Is this robust? The S&P 500 results over the last 4000 bars speak for themselves: high accuracy, good gains per trade and a nice equity curve. In a separate multi-market test with 20 markets going back to 1984, the Aggregate M did 27% CAGR through 2009. Is it the best method? Probably not; I certainly didn't spend any time optimizing or digging, as I came up with it just yesterday. Furthermore, the structure can be substantially improved. But pretty damn good for two common-sense parameters!
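As a concrete sketch of the construction described above, here is a minimal NumPy implementation. It is an approximation under stated assumptions: Excel's PERCENTRANK interpolates and divides by n-1, whereas this version simply takes the fraction of pooled High/Low/Close values in the trailing window that sit below today's close.

```python
import numpy as np

def percent_rank_hlc(high, low, close, window):
    # Simplified PERCENTRANK: fraction of the pooled H/L/C values in the
    # trailing window that today's close exceeds (no interpolation).
    n = len(close)
    out = np.full(n, np.nan)
    for t in range(window - 1, n):
        pool = np.concatenate([high[t - window + 1:t + 1],
                               low[t - window + 1:t + 1],
                               close[t - window + 1:t + 1]])
        out[t] = np.mean(pool < close[t])
    return out

def aggregate_m(high, low, close, long_window=252, short_window=10):
    long_rank = percent_rank_hlc(high, low, close, long_window)        # trend component
    short_rank = 1 - percent_rank_hlc(high, low, close, short_window)  # mean-reversion component
    value = (long_rank + short_rank) / 2
    agg = value.copy()
    agg[1:] = 0.6 * value[1:] + 0.4 * value[:-1]  # smooth: 0.6 weight today, 0.4 yesterday
    return agg
```

The result lives on a 0-1 scale, so readings above 0.5 correspond to the bullish side discussed in the comments.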

looks too good to be true! ponzi?

Are the mathematical (programming) details available anywhere? (C#, Java, MetaStock, …) The ability to study more detail would help in obtaining a fuller comprehension.

hi peter… I don't have the programming details in those platforms. I simply did this in Microsoft Excel. The Aggregate M and other indicators will be available for free on my site http://www.dvindicators.com, coded for TradeStation, when I launch (shortly).

cheers

dv

Thanks David. In comment #21 you offer a spreadsheet. I would appreciate a copy: petergum at world.oberlin dot edu

Thank you. Nice system. -Pete

Hello David,

the result looks great. I of course wanted to reproduce this in my own environment. However, I get significantly more trades (+101). Would you please be so kind as to look at these few lines of code for the “Aggregate M” indicator… I can also send you an XLS if you want.

Thank you so much!

Frank

————————

```
h_rank_long  = PERCENTRANK(High, 252);
l_rank_long  = PERCENTRANK(Low, 252);
c_rank_long  = PERCENTRANK(Close, 252);

h_rank_short = 1 - PERCENTRANK(High, 10);
l_rank_short = 1 - PERCENTRANK(Low, 10);
c_rank_short = 1 - PERCENTRANK(Close, 10);

value = ((h_rank_long + l_rank_long + c_rank_long) + (h_rank_short + l_rank_short + c_rank_short)) / 6;

AggregateM = (value[1] * 0.4) + (value * 0.6);
```

hi frank, thanks a lot… please see Ramon's response, which is correct. You are taking the PERCENTRANK of each individual column, while I take the PERCENTRANK of the entire array including the High, Low and Close values.

cheers

dv

Questions:

– I assume the percentrank of the high,low,open,close is just that – a percentrank across all four columns?

– The weighting confuses me – “This average is smoothed using a .6 weight on today and .4 weight on yesterday.” – so it’s not a straight average?

hi Damian, it is the percentrank of the 3 columns of the H, L, C array of price. I think the wording is a little confusing: the aggregate measure of central tendency of the two percentiles is averaged (smoothed) with this weighting. Practically speaking, it was already “averaged” by taking 0.5 times one percentile and 0.5 times the other; that is why I call it a smoothing.

just semantics!

cheers

dv

David – this looks like a rather interesting and novel (to me at least) concept…

A question on your charts: how do you generate them – is it with a special software/platform? It looks really neat (especially with the 4 plots on one chart).

Thanks

Jez

Hi Jez, Corey, my partner in crime, generates these charts using Visual Basic code in Excel… don't ask me how he does it, because I certainly can't work this kind of wizardry myself!

thanks

cheers

dv

Frank, this is my understanding:

```
rank_Long  = PERCENTRANK(HLC 252-period array, Close);
rank_Short = PERCENTRANK(HLC 10-period array, Close);

value = (rank_Long + rank_Short) / 2;

AggregateM = (value[1] * 0.4) + (value * 0.6);
```

***Note: the HLC array is simply three columns of data instead of one, i.e. =PERCENTRANK(D9:F260,F260)***

Hope this helps

Cheers Ramon
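The difference between the two readings can be made concrete with a toy example (made-up prices, and a simplified strictly-below PERCENTRANK with no interpolation): Frank's version ranks each series within its own column, while Ramon's/David's version ranks today's close against the pooled H/L/C array.

```python
import numpy as np

# Hypothetical 4-bar window; prices are made up purely for illustration
high  = np.array([12.0, 13.0, 14.0, 13.5])
low   = np.array([11.0, 12.0, 13.0, 12.5])
close = np.array([11.5, 12.5, 13.5, 13.4])

def frac_below(arr, x):
    # Simplified PERCENTRANK: fraction of values strictly below x
    return np.mean(arr < x)

# Frank's reading: rank each series within its own column, then average
franks = (frac_below(high, high[-1]) + frac_below(low, low[-1])
          + frac_below(close, close[-1])) / 3

# David's reading: rank today's close against the pooled H/L/C array
davids = frac_below(np.concatenate([high, low, close]), close[-1])
```

Here the per-column average comes out to 0.5 while the pooled rank is 8/12 ≈ 0.67, so the two readings genuinely diverge.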

thanks ramon as always!

cheers

dv

Hey Ramon – I was looking at doing this in AB – the issue is that the PercentRank function you so nicely created can't handle multiple arrays… any thoughts on that?

Hi Damian, yes you are right – that PercentRank function needs a little tweaking. I will post some AmiBroker code I baked earlier here shortly…

Can’t wait to see it Ramon. Thanks!

Excellent, as usual. Jeff

I am using NDO (next-day open) on SPY (actually MOC, market-on-close, is not a whole lot different) with Yahoo data (Date, O, H, L, C in columns A-E). Formulas (shown for row 272) are:

```
H272: =PERCENTRANK(C21:E272,E272)       (hard-coded to 252 for the time being)
I272: =1-PERCENTRANK(C263:E272,E272)    (hard-coded to 10 for the time being)
J272: =(H272+I272)/2
      =(J272*$K$19)+(J271*$K$18)        where K19=0.6 and K18=0.4
```

I get a CAGR of ~12%.

hi john, did you also go short? Did you go back to 1993? I can assure you the results I have are correct.

cheers

dv

David, I am shorting and my data does go back to 1993, but due to the 252-day lead-in my first trade is a short on 3/17/94. I can send you my spreadsheet if you want (if you don't, I understand!)

I am sure I am doing what Ramon is doing…

hi, if you send me an email I can forward you a spreadsheet to match up with Corey's

cheers

dv

Hi David,

Can u send me a copy of your spreadsheet as well, it looks very interesting.

my email is lazyboy1628 a t yahoo d ot com

Hi David,

i received the XLS, thanks. Backtesting the strategy/indicator on my own platform results in a different performance, largely due to the fact that I've got different data.

What’s YOUR data source / provider?

Regards,

Frank

Ok guys, the appropriate High-Low-Close PercentRank function for AmiBroker is below. Please note that this function ranks whatever is provided in the ‘Data3’ argument, so an example call that ranks the close would be:

```
Value1 = PercentRankHLC(High,Low,Close,252);
```

Enjoy!

cheers

Ramon

```
function PercentRankHLC( Data1, Data2, Data3, Periods )
{
    Count = 0;
    for ( i = 0; i < Periods + 1; i++ )
    {
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data1, -i ), 1, 0 );
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data2, -i ), 1, 0 );
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data3, -i ), 1, 0 );
    }
    return 100 * Count / ( Periods * 3 - 1 );
}
```

Hi Ramon,

Thanks for your function! The comment form seems to swallow everything after a ‘<’ when pasting (it mangled my own reposts too), so in case it got cut off for anyone else, here it is again:

```
function PercentRankHLC( Data1, Data2, Data3, Periods )
{
    Count = 0;
    for ( i = 0; i < Periods + 1; i++ )
    {
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data1, -i ), 1, 0 );
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data2, -i ), 1, 0 );
        Count = Count + IIf( Ref( Data3, 0 ) > Ref( Data3, -i ), 1, 0 );
    }
    return 100 * Count / ( Periods * 3 - 1 );
}
```

Regards

js
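For readers outside AmiBroker, here is a line-for-line Python sketch of the function above (a hypothetical port using plain lists instead of AFL arrays, evaluated at a single bar t). One quirk is carried over faithfully: the loop visits Periods + 1 bars (3*(Periods+1) comparisons) while the divisor is Periods*3 - 1, so readings slightly above 100 are possible.

```python
def percent_rank_hlc(high, low, close, periods, t):
    # Count how many High/Low/Close values over the trailing
    # (periods + 1) bars today's close exceeds, then scale by the
    # same 100 / (periods*3 - 1) factor the AFL code uses.
    count = 0
    for i in range(periods + 1):
        count += close[t] > high[t - i]
        count += close[t] > low[t - i]
        count += close[t] > close[t - i]
    return 100 * count / (periods * 3 - 1)
```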

hmm, looks like the paste of the code didn’t work properly, if you want the code email me at ramon a t minkcapital d ot com

U da man Ramon! Thanks…email on its way…

Hello Ramon,

can you please repost the full formula !

Oh god… I am too late…

thank you

*sorry for the trouble

Hi David,

I note that the results are critically dependent on the data quality, and also on whether the instrument is tradeable. I once found a simple system with incredible results on the ^DJI index, before being disappointed to find that it did not work on the ETF. This was because the highs and lows on the index are calculated as the mean of the component highs and lows, without accounting for the time they were made.

Anyway, could you give a hint on what you mean by improving the structure? Is it to do with taking more trend and countertrend components or something to do with the averaging?

Thanks

Kevin

hi kevin, improving the indicator is something that can be done at many levels… first, the choice of periods is somewhat arbitrary, as is the number of different periods to take into account. The weighting placed on each period is also arbitrary and not per se optimal. Furthermore, the assumption of mean reversion on a short-term window may or may not hold true for a given instrument at a given time.

i could go on, but i can’t give away all the secrets! :o)

cheers

dv

I have run this strategy over my data (I use Norgate), and I get approximately 12% CAGR. As best as I can tell, there are several reasons for the discrepancy between DV's results and mine.

1. DV is using dividend adjusted data. When dividends are re-invested and compounded, it is going to make a noticeable difference in returns.

2. The start date of testing. Unless you have specifically programmed your backtester to use 4000 bars, your results may be different. Also, several market days have passed since this article was published, so you may now need to use slightly more than 4000 bars to replicate the results.

3. DV hasn’t covered this, but it is worth noting. For these types of setups, I require a cross of the AggM above 50. What this means is that when the testing starts, without requiring a cross, your platform may see that the AggM value is over 50, and therefore initiate a long trade. In real-time, you wouldn’t have taken the trade on that day. If you are not requiring a cross, this will also affect the beginning of the backtest and may cause your results to be slightly different from others.

4. Some platforms will make you wait until there are enough bars to calculate the averages, while other platforms, if not specifically told not to, will look back further than 4,000 bars so that the averages will be calculated. This too will cause discrepancies.
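Point 3 (requiring a cross rather than a level) can be sketched as follows, using made-up AggM readings on a hypothetical 0-100 scale:

```python
def crossed_above(series, level):
    # True at bar i only when the series moves up through `level`:
    # at or below it on the previous bar, above it on the current bar.
    return [i > 0 and series[i - 1] <= level and series[i] > level
            for i in range(len(series))]

agg_m = [55, 60, 48, 52, 58]  # made-up indicator readings

level_rule = [v > 50 for v in agg_m]   # fires immediately on bar 0
cross_rule = crossed_above(agg_m, 50)  # waits for an actual up-cross (bar 3)
```

With a pure level rule the backtest goes long on the very first bar it sees above 50; the cross rule waits until the indicator actually moves up through 50, which changes the opening trades of the test.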

Whoops, messed up #2.

What I meant to say was that since some market days have passed, you’ll have to set your backtester to test from 12/20/1993-11/04/09, or else new data will be included.

Hi David,

Thanks, that’s very useful. I will do some walk forward optimization and see if I get anything worth sending over. FWIW I get 11% CAGR also using Yahoo data on the SPY.

Kevin

In the algorithm, the treatment of the short-term “noise”/mean reversion is `=1-PERCENTRANK(C2:E11,E2)`. David, why the “1-” formulation?

Hi DV – a question regarding your testing: did you model the buys to take place at the open or the close of the day AFTER the signal switch? (Hopefully NOT on the same day at the close – right?)

Can you list the 20 markets that you used in the separate test ?

Hi Jack, The markets traded in the separate test were:

(all Futures) S&P 500, Nasdaq 100, Gold, Silver, Copper, Oil, Heating Oil, Natural Gas, Gasoline, Corn, Wheat, Soybeans, Cotton, Coffee, Cocoa, Live Cattle, Lean Hogs, Sugar, Australian Dollar, Pound, Yen, Euro, Swiss Franc.

Hey all

Looks like I'm late to the party on this one. I'm fascinated by this and would really like to play around with it. I've attempted to implement it in Excel, but it's not working for me. I was wondering if a kind soul had an Excel sheet with the formula working that I could use as a base. My email is do d*t evans at yahoo d*t co d*t uk

Thanks in advance

Dave

hi dave, go to my site http://www.dvindicators.com and I'm sure there are a few people who can help you.

cheers

dv

David/Corey

I am not anywhere near you folks on math/computers etc. Simple question from a simple guy: I have been manually updating Agg M daily in Excel since I got it (about 7 weeks now), and I have had consistent readings >0.5.

Is this correct? Does that jibe with your readings?

Just checking.

Many, many thanks

Joe Marc

Could I get a copy of the XLS for the Aggregate “M” Indicator?

Thanks

Scott

David:

Could you send me your Agg M excel spreadsheet as well?

Thanks,

Kevin

David or Ramon, I need clarification on how PercentRank works across 3 columns. The Excel help file only illustrates its use on one column of data.

It seems to me that if each ROW is looking for one value to rank, then the Close is superfluous, as it will always fall within the values of the High and Low.

Is PERCENTRANK averaging the three columns, i.e. (H+L+C)/3, and then ranking that row as compared to other rows?

Thanks, keep up the great posts!

Hi Larry,

When you percentrank across three columns you are actually ranking all the data as if it were in one column or array – therefore the close will have an impact as it will increase the number of data points across which you are ranking.

Hope this makes sense!

Ramon

Hi David,

I discovered this website through Dr. Steenbarger’s Traderfeed. I found the works here very interesting and would like to receive a copy of the XLS spreadsheet if possible. My email is ymanyen@gmail.com

Thank you,

Yo-Man

Hi David,

I’m in the process of my thesis study which tests a number of publicly available trading strategies/indicators in Tradestation against the forex market (daily and intraday) and would love to be able to try this indicator out. I have a few questions/comments if I may:

1. Will Aggregate M work on other markets besides the S&P500?

2. Will it work intraday as well as it does daily?

3. If the spreadsheet is still available, I would greatly appreciate a copy. You can reach me at BrianLeip at G m-ail.

4. It would be preferable to understand some of the methodology behind the indicator (as with all the other strategies/indicators I am using) rather than just relegate it to a “black box”. Are there any blog posts/write-ups re: methodology that I could review?

Thanks in advance for your time. It looks quite impressive by the way

– Brian

hi brian, i will get back to you on this. please send me an email personally.

best

dv

Hi, thanks for the post. If the spreadsheet is still available, I would enjoy a copy to evaluate the methods for use with pairs and some other arbitrage techniques. If available, please send to o.westerberg@gmail.com

Thanks, Omar
