The most common method of position sizing divides a fixed percentage risk target by volatility to determine the fraction of the account to invest. In generic terms this is:

P = F / V

where:
F = risk target (typically 1%)
V = daily volatility (non-annualized)
P = portfolio position size

Example: if V = 2% and F = 1%, then P = 50%.
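The formula above can be sketched as a small Python function; the function name, the trailing-window default, and the use of the population standard deviation are my assumptions for illustration, not part of the original post:

```python
import numpy as np

def position_size(returns, target_risk=0.01, lookback=20):
    """Volatility-targeted sizing: P = F / V, where F is the daily risk
    target and V is the trailing (non-annualized) daily volatility."""
    v = np.std(np.asarray(returns)[-lookback:])  # daily volatility, ddof=0
    return target_risk / v                       # fraction of account to invest

# The example from the text: V = 2%, F = 1% gives P = 50%
```

With a constant-magnitude return series of 2% daily swings, the function reproduces the worked example in the text.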

Standard deviation represents a generic measurement of dispersion that is bi-directional and does not favor any portion of the distribution over the other. One standard deviation contains 68% of the distribution of returns, which in percentile terms (assuming normality) would equate to the 16th and 84th percentiles. But what if certain percentiles contained more information than others for position sizing? In a past post (D-Var Position Sizing) I used the 5th percentile for position sizing to capture tail risk. The logical notion was that to compound wealth, we want to avoid large losses and thus should size based on our empirical observation of extreme losses. But the 5th percentile is rather arbitrary: why not the worst loss (the 0th percentile) or the 2nd percentile? Perhaps the 25th or even the 65th percentile contains significant value. How do we capture the portions of the distribution that contain the most value, or, even more importantly, how do we combine this information?
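The percentile-based variant (the D-Var idea referenced above) can be sketched by swapping the standard deviation for the magnitude of a chosen empirical return percentile; the function name and defaults here are illustrative assumptions:

```python
import numpy as np

def percentile_position_size(returns, target_risk=0.01, pct=5, lookback=60):
    """Size positions by the magnitude of a chosen trailing return
    percentile instead of the standard deviation. pct=5 uses the
    empirical 5th-percentile (tail-loss) return of the window."""
    window = np.asarray(returns)[-lookback:]
    tail = abs(np.percentile(window, pct))  # magnitude of the percentile return
    return target_risk / tail
```

Choosing `pct=0` sizes off the single worst observed loss, while larger values of `pct` size off progressively milder portions of the distribution, so the resulting P differs systematically by percentile.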

What we need is an adaptive approach that simultaneously considers the value of using different portions of the distribution, which may also have systematic differences in magnitude; for example, the 2nd percentile would imply a lower P than the 25th percentile. Here is a method that addresses these issues in a fairly compact manner:

1) find the percentile values in 5% increments from 0 to 100% using a trailing lookback of, say, 60 days, and compute a column array of these values for some index (say the S&P 500/SPY) for at least 1000 bars
2) compute P using 1% as a default target and create a column array of P for each percentile interval
3) re-leverage the percentile values to a fixed target (say 100%) so that they have the same scale
4) each day, compute the average P over some lookback (60-252 days), take 100%/average P, and re-scale the current value so that all percentiles have the same average P
5) compute equity curves that position size on the underlying index using P for each percentile interval
6) using a solver or mean-variance optimization, create a set of weights for each percentile interval that best maximizes portfolio Sharpe or MAR (Calmar ratio, or return/max drawdown), using a lookback of, say, 2-3 years
7) the weights times the P for each percentile interval create a weighted P, which is the adaptive percentile position size
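The procedure above can be sketched end-to-end in Python. This is a hedged illustration only: the parameter names, the random-search stand-in for the solver/mean-variance step, and the divide-by-zero guards are my assumptions, not the author's exact implementation:

```python
import numpy as np

def adaptive_percentile_sizing(index_returns, target_risk=0.01,
                               pct_lookback=60, lev_lookback=120,
                               n_trials=2000, seed=0):
    """Sketch of the adaptive percentile position-sizing procedure.
    Returns the weighted position size P for the latest bar."""
    r = np.asarray(index_returns)
    pcts = np.arange(0, 101, 5)          # percentiles in 5% increments
    n = len(r)

    # Trailing percentile magnitudes and the raw P = F/V per percentile
    P = np.full((n, len(pcts)), np.nan)
    for t in range(pct_lookback, n):
        vals = np.abs(np.percentile(r[t - pct_lookback:t], pcts))
        vals[vals == 0] = np.nan         # guard against divide-by-zero
        P[t] = target_risk / vals

    # Re-scale each percentile's P so its trailing average leverage is 100%
    scaled = np.full_like(P, np.nan)
    for t in range(pct_lookback + lev_lookback, n):
        avg = np.nanmean(P[t - lev_lookback:t], axis=0)
        scaled[t] = P[t] * (1.0 / avg)

    # Equity-curve returns from sizing the index with each percentile's P
    start = pct_lookback + lev_lookback
    strat = scaled[start:-1] * r[start + 1:, None]  # prior size * next return

    # Solver stand-in: random search over weights maximizing Sharpe
    rng = np.random.default_rng(seed)
    best_w, best_sharpe = None, -np.inf
    for _ in range(n_trials):
        w = rng.random(len(pcts))
        w /= w.sum()
        port = np.nan_to_num(strat) @ w
        sharpe = port.mean() / (port.std() + 1e-12)
        if sharpe > best_sharpe:
            best_sharpe, best_w = sharpe, w

    # Weighted P = the adaptive percentile position size, latest bar
    return float(np.nansum(best_w * scaled[-1]))
```

A proper implementation would replace the random search with a constrained optimizer (e.g. mean-variance or a Sharpe/MAR solver over a 2-3 year lookback, as the post suggests) and would handle the leverage re-scaling on a rolling basis per percentile.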

1. March 8, 2012 9:15 pm

Thanks for sharing, David. I very much enjoy the logical/practical thinking behind the techniques mentioned in this blog.

I’m going to be busy this weekend trying to build this technique, as well as the one from the last blog entry, into a spreadsheet. I may end up ditching the suggested Sharpe ratio as a KPI, and opt for other measurements mentioned in the past on this blog (DVCFE and/or DVR; a DV cocktail of sorts). The main aspect I’d like to dive into is seeing to what extent upward volatility is punished using the above methodology.

Do you accept general inquiry via email from average schlep blog-goers?
