In the last post we introduced the Gini Coefficient as a measure of inequality and statistical dispersion. The primary benefit of using the Gini over the standard deviation is its proper handling of abnormally large values in the cumulative distribution. There are many applications of the Gini within quantitative finance. One example is the Mean-Gini framework, which has been proposed as an alternative to classic Mean-Variance optimization. So what are the advantages of a Mean-Gini framework? Cheung et al. (2007) provide a very compelling argument:

“From a theoretical perspective, the mean-variance approach is appropriate only when investment returns are normally distributed or investors’ preferences can be characterized by quadratic functions. As the assumption of quadratic utility is known to be problematic on theoretical grounds, normality of investment returns becomes necessary for the mean-variance approach to hold. The validity of the assumption of normality or even near normality, however, is questionable when applied to financial assets such as derivatives (which include various forms of options on stocks and other assets), stocks from emerging markets, and hedge funds.”

Given that we know financial data has a tendency to “misbehave” (the one-in-ten-trillion events under normal-distribution assumptions that seem to happen every ten years), the Mean-Variance framework is clearly more fragile than a Mean-Gini framework. That is the good news; unfortunately, the bad news is that optimization with the Gini Coefficient is complex, and there is no efficient closed-form solution as there is for the Markowitz Mean-Variance framework. In fact, the calculation and interpretation of the Gini Coefficient here differ from the original statistic often quoted by economists. In the typical economic context, the Gini Coefficient ranges between 0 and 1. In the case of portfolio management, we wish to compare a return stream to its cumulative distribution. While the mathematics of the Gini are beyond the scope of this article, the calculation is analogous to, though more complex than, a measurement of absolute error. In general terms, the absolute error is a measure of dispersion that averages the absolute differences of return values from their mean, whereas the standard deviation squares those errors. This makes the Gini more resistant to the outliers that plague variance-covariance matrix estimation in a Mean-Variance setting.
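To make the contrast concrete, here is a minimal Python sketch comparing the sample standard deviation with the Gini mean difference, i.e. the average absolute difference between all pairs of return observations. The function name and the simulated return series are illustrative assumptions, not part of any formal Mean-Gini implementation:

```python
import numpy as np

def gini_mean_difference(returns):
    """Gini mean difference: the average absolute difference
    between all distinct pairs of return observations."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    # Pairwise absolute differences via broadcasting
    diffs = np.abs(r[:, None] - r[None, :])
    # The diagonal (i == j) contributes zero; divide by the
    # number of ordered pairs with i != j
    return diffs.sum() / (n * (n - 1))

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 250)   # one "year" of daily returns
shocked = np.append(returns, -0.20)       # add a single crash day

for label, r in [("normal", returns), ("with outlier", shocked)]:
    print(f"{label}: std={r.std(ddof=1):.4f}  "
          f"GMD={gini_mean_difference(r):.4f}")
```

Because the standard deviation squares deviations while the Gini mean difference does not, the single crash day inflates the standard deviation proportionally more than the GMD, which is the robustness property described above.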

1. February 25, 2012 8:54 am

Rather than solve for fat tails and the “normal” distribution as one problem, why not solve them as two problems? A set of hedges/exposures optimized for the tails (meltdowns and melt-ups), and a set of exposures optimized for the normally distributed middle.

• February 25, 2012 11:25 am

That is an interesting idea, and somewhat related to a combined mean-variance/modified-VaR framework, though of course without explicitly considering the “melt-ups.” Is there any formal framework that you can suggest that addresses this in the manner you are describing?
best
david

• February 25, 2012 12:18 pm

The closest to a formalized framework that I have seen is by Jeff McGinn in “Tail Risk Killers” on page 326 where he has a simple graphic with coarse suggestions on what to hold across the bell curve.

I think the key is to find practical cutoffs on the tails… With simple counting of events (e.g. days with moves of more than x%, or weeks with total losses of more than x%, etc.) and identifying likelihoods of events (e.g. the volatility issues occur most frequently when the market is below its 200-day SMA), we can have an “observed” bell curve (OBC), not a theoretical one dependent on the Central Limit Theorem… and by regime.

I don’t have much more than this notion. But it feels right. Kind of like why auto engineers use seat cushions nearer the passenger (normal curve?) and shocks, struts, and bumpers closer to the exterior (fat tails?). Each is optimized for its part of the environment.
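The event-counting idea in the comment above could be sketched as follows. The 2% move cutoff, the 200-day SMA regime split, and the function name are all illustrative assumptions, not anything from a published framework:

```python
import numpy as np

def tail_rates_by_regime(prices, threshold=0.02, window=200):
    """Frequency of large daily moves (|return| > threshold),
    split by whether the prior close was below its trailing SMA."""
    prices = np.asarray(prices, dtype=float)
    rets = np.diff(prices) / prices[:-1]
    # Trailing SMA ending at day t, defined for t >= window - 1
    sma = np.convolve(prices, np.ones(window) / window, mode="valid")
    # Regime on day t (price vs. its own SMA), paired with the
    # next day's return so the regime is known in advance
    regime_below = prices[window - 1:-1] < sma[:-1]
    next_rets = rets[window - 1:]
    tails = np.abs(next_rets) > threshold
    rate_below = tails[regime_below].mean() if regime_below.any() else np.nan
    rate_above = tails[~regime_below].mean() if (~regime_below).any() else np.nan
    return rate_below, rate_above
```

Comparing the two rates on historical data would give a crude “observed” estimate of how tail frequency differs by regime, without appealing to any theoretical distribution.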

• 2. 