One of the biggest challenges for managing tail-risk expectations is the limitation on clarity imposed by history.
For most markets, the post-World War II era provides the primary if not the only dataset. But expanding the opportunity set into some asset classes further reduces the available track record. Think junk bonds and emerging markets, for instance. How can we solve this challenge? Simulations are the first choice in the toolkit.
Artificially generating hypothetical returns provides an unlimited supply of synthetic history. The glitch is that modeling can take any number of paths, and so not all simulations are created equal. In practice, simulating with several models to develop an average estimate has strong appeal, since it's never clear which model offers the best proxy for the real world. There are dozens of possibilities, but there's an obvious place to start: resampling the historical data.
Resampling's main attribute is simplicity: take the existing return series and reshuffle the order. For an extra layer of randomness it's advisable to sample with replacement, so that any one return can be drawn multiple times – or not at all.
Since there's no model here, there are no parameters to choose and therefore no chance of selecting the wrong distribution. As a simple example, let's resample US stock market history and estimate maximum drawdown, which can serve as a proxy for tail risk.
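The mechanics are easy to sketch in code. The analysis later in this article was run in R; the snippet below is a minimal Python equivalent, with a synthetic normally distributed return series standing in for real fund data (the `max_drawdown` helper and the sample returns are illustrative assumptions, not the article's actual code):

```python
import numpy as np

def max_drawdown(returns):
    """Deepest peak-to-trough decline of a cumulative-wealth path."""
    # Prepend starting wealth of 1 so a loss on day one counts as a drawdown.
    wealth = np.concatenate(([1.0], np.cumprod(1 + np.asarray(returns))))
    peaks = np.maximum.accumulate(wealth)   # running high-water mark
    return (wealth / peaks - 1).min()       # most negative gap below the peak

rng = np.random.default_rng(42)
# Hypothetical daily returns: ~8 years of trading days, roughly equity-like.
hist_returns = rng.normal(0.0004, 0.012, size=2000)

# One bootstrap path: draw the same number of returns, with replacement.
resampled = rng.choice(hist_returns, size=hist_returns.size, replace=True)
print(max_drawdown(resampled))
```

Because each draw is made with replacement, every bootstrap path is a slightly different reshuffling of the same empirical distribution, which is what lets the simulated drawdowns range beyond anything in the observed record.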
For an illustration, we'll use Vanguard Total US Stock Market (NYSEARCA:VTI) to represent equities and we'll intentionally limit the ETF's history to a start date of 2012. If this is all the data we had, we could look to the historical record and find that the maximum drawdown for VTI was a steep 35% haircut in March 2020, during the coronavirus crash. It's tempting to use that real-world tumble as an estimate of the worst-case scenario, but eight years of history is pretty thin.
As a first step in modeling a worst-case scenario for VTI drawdown we can turn to resampling. Using R to crunch the data, the first chart below shows the results of 1,000 simulations of maximum drawdown for the fund. The main takeaway: there’s a substantial possibility that peak-to-trough declines can get much deeper than the 35% haircut observed earlier this year (marked by the blue line).
The median drawdown estimate is -40.6% and the bulk of the sims fall within an interquartile range of -49% to -33.5%. The deepest estimate is a monster 74.6% crash – highly unlikely but not beyond the pale. For additional perspective, the chart above also shows how the simulated drawdown distribution would behave if it were normally distributed, which it clearly is not.
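A 1,000-run bootstrap of this kind can be sketched as follows (again in Python rather than the R used for the article, and with a synthetic return series standing in for VTI's history, so the summary statistics will not match the figures quoted above):

```python
import numpy as np

def max_drawdown(returns):
    """Deepest peak-to-trough decline of a cumulative-wealth path."""
    wealth = np.concatenate(([1.0], np.cumprod(1 + np.asarray(returns))))
    return (wealth / np.maximum.accumulate(wealth) - 1).min()

rng = np.random.default_rng(0)
hist_returns = rng.normal(0.0004, 0.012, size=2000)  # stand-in for fund history

# 1,000 bootstrap replicates of maximum drawdown.
sims = np.array([
    max_drawdown(rng.choice(hist_returns, size=hist_returns.size, replace=True))
    for _ in range(1000)
])

median = np.median(sims)
q1, q3 = np.percentile(sims, [25, 75])
print(f"median {median:.1%}, IQR [{q1:.1%}, {q3:.1%}], worst {sims.min():.1%}")
```

The median, interquartile range, and worst case reported in the article are exactly these summary statistics, computed over the 1,000 simulated drawdowns.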
While we’re swimming in drawdown sims, there are other aspects of this tail-risk characteristic to consider, such as the total length of the drawdown episode. The longest stretch for VTI in the sample period under review is 242 trading days during the 2015 downturn. But as the next chart below advises, expecting even longer drawdown episodes is prudent.
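Drawdown length can be measured from the same bootstrap paths: count the longest run of consecutive trading days the cumulative-wealth path spends below its prior peak. A minimal sketch, with an illustrative helper rather than the article's R code:

```python
import numpy as np

def longest_drawdown(returns):
    """Longest run of consecutive days spent below the prior peak."""
    wealth = np.concatenate(([1.0], np.cumprod(1 + np.asarray(returns))))
    underwater = wealth < np.maximum.accumulate(wealth)  # True while below peak
    longest = current = 0
    for flag in underwater:
        current = current + 1 if flag else 0  # run resets at each new high
        longest = max(longest, current)
    return longest
```

Applying this helper across the resampled paths, instead of `max_drawdown`, yields a simulated distribution of drawdown durations analogous to the depth distribution above.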
Perhaps the two charts above are obvious, given the broad and deep analysis that’s been directed at the US equity market in recent decades. But running this type of analysis on other asset classes, and using other sim modeling applications — particularly for markets with relatively short histories — is crucial for managing expectations and understanding how assets can behave. History is a guide, of course, but it’s only one cut of results. Fortunately, there’s no reason to rely on this partial data set. Indeed, looking to history in isolation can be more than slightly misleading.
Previous articles in this series: