By Marc Odo, Swan Global Investments
Historically, investors quantified risk in terms of standard deviation, more commonly referred to as volatility. Standard deviation is the most widely used measure of risk in the investing world. The omnipresent Sharpe ratio, which quantifies the risk-vs.-return trade-off, uses standard deviation as its measure of risk.
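To make the definition concrete, here is a minimal sketch of a Sharpe ratio calculation; the function name and the sample monthly returns are illustrative, and no annualization is applied:

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Sharpe ratio from a series of periodic returns.

    `returns` and `risk_free_rate` are assumed to share the same
    periodicity (e.g., monthly). Risk is standard deviation of
    excess returns -- the convention this article questions.
    """
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

monthly = [0.02, -0.01, 0.03, 0.015, -0.04, 0.01]  # hypothetical data
print(round(sharpe_ratio(monthly), 3))  # -> 0.164
```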
Using volatility as the sole definition of risk, however, has several flaws; it can be misleading and is often at odds with investors’ understanding of risk. If capital preservation is the primary concern of an investor, other metrics, like the pain index, provide a better measure of risk than standard deviation.
Standard Deviation Has Three Main Flaws
It fails to distinguish between upside and downside risk.
By definition, standard deviation measures the volatility of individual returns around a mean return. Unfortunately, standard deviation makes no distinction between the “good” observations that fall above the mean and the “bad” returns that fall below the mean. Most investors would not punish a manager with a high standard deviation if a good portion of the volatility was upside volatility.
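This asymmetry can be shown numerically by comparing standard deviation with a downside-only variant that ignores returns above a minimum acceptable return. The function below and the sample return series are illustrative assumptions, not part of the original article:

```python
import math
import statistics

def downside_deviation(returns, mar=0.0):
    """Deviation of returns below a minimum acceptable return (MAR).
    Only observations under the MAR contribute; upside moves are ignored."""
    shortfalls = [min(0.0, r - mar) ** 2 for r in returns]
    return math.sqrt(sum(shortfalls) / len(returns))

steady = [0.01, 0.01, 0.01, 0.01]    # quiet manager, no big wins
lumpy = [0.10, -0.01, 0.12, -0.01]   # volatile, but mostly upside volatility

print(statistics.pstdev(steady), statistics.pstdev(lumpy))    # lumpy looks far "riskier"
print(downside_deviation(steady), downside_deviation(lumpy))  # downside view is modest
```

Standard deviation flags the second manager as roughly 0.06 in volatility terms, yet almost all of that movement was above the mean; the downside deviation is under 0.01.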
The observations are viewed as independent when they clearly are not.
The more significant failing of standard deviation is that it does not account for the timing of the negative returns. If, for example, a decade has half a dozen exceptionally bad months, standard deviation cannot distinguish whether those bad observations were randomly scattered throughout the decade or concentrated within a narrow time frame. Should the investor care about this flaw in standard deviation? Yes.
Between 1989 and 2013, seven of the worst months in the entire 25-year range of the S&P 500 occurred between July 2007 and February 2009, a span of less than two years. A further twelve of the worst months of the last 25 years occurred during the dot-com bubble and the subsequent bear market at the start of the new millennium.
Logically, this makes sense. In the midst of a crisis, the markets don’t hit a “reset button” and start afresh just because everyone flips the calendar ahead to a new month. A crisis will play out independent of a calendar, taking however long it will take. In the case of the S&P 500, compounding month after month of epic losses resulted in a maximum drawdown of over 50% between August 2007 and February 2009. And yet standard deviation treats those months as independent observations, each one distinct from the next.
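The arithmetic of compounding losses can be sketched in a few lines. The function below is an illustrative maximum-drawdown calculation, not the article's own methodology, and the example series is hypothetical:

```python
def max_drawdown(returns):
    """Largest peak-to-trough loss implied by a series of periodic returns.
    Compounds returns into a wealth curve and tracks the running high-water mark."""
    wealth = 1.0
    peak = 1.0
    worst = 0.0
    for r in returns:
        wealth *= 1.0 + r
        peak = max(peak, wealth)
        worst = min(worst, wealth / peak - 1.0)
    return worst

# Six consecutive -10% months compound to a loss far deeper than any single month:
print(round(max_drawdown([-0.10] * 6), 4))  # -> -0.4686
```

Standard deviation, by contrast, would score the same six months identically whether they arrived back-to-back or years apart.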
Investors don’t think of risk in terms of standard deviation.
Most investors think of risk in terms of capital preservation—how much money they could potentially lose.
It’s unlikely that many financial advisors field calls from angry clients asking, “What was my volatility last month?” It’s more likely most angry calls are phrased, “How much money did I lose?”
Standard deviation is a classroom concept; capital preservation is a real-world issue.
This is where the pain index comes in.
Pain Index as a Better Measure of Risk
Developed by Dr. Thomas Becker and Aaron Moore of Zephyr Associates, the Pain Index is similar to other measures of risk like standard deviation, beta, tracking error, etc. Where it differs, however, is in its definition of risk.
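The Pain Index is commonly described as the average depth of drawdown over the measurement period. Under that assumption, here is a minimal sketch (Zephyr's exact implementation may differ, and the sample return series are hypothetical):

```python
def pain_index(returns):
    """Average drawdown over the period: at each observation, measure how far
    compounded wealth sits below its running high-water mark, then average.
    Sketch of the commonly cited definition, not Zephyr's official code."""
    wealth, peak = 1.0, 1.0
    drawdowns = []
    for r in returns:
        wealth *= 1.0 + r
        peak = max(peak, wealth)
        drawdowns.append(1.0 - wealth / peak)
    return sum(drawdowns) / len(drawdowns)

# Same six returns, different ordering: scattered losses vs clustered losses.
scattered = [-0.05, 0.05, -0.05, 0.05, -0.05, 0.05]
clustered = [-0.05, -0.05, -0.05, 0.05, 0.05, 0.05]
print(pain_index(scattered), pain_index(clustered))
```

Standard deviation is identical for both orderings, but the pain index for the clustered series is roughly 2.5 times higher, capturing exactly the timing sensitivity that standard deviation misses.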