An Overview of Value-at-Risk

How much a portfolio could lose over a given time horizon at a given level of confidence

Overview of VaR and market risk management

Popularized in the 1990s by JPMorgan's RiskMetrics system, and rooted in the financial crises of the 1980s and the regulatory responses that set capital requirements for banks based on the risk characteristics of various asset classes, Value-at-Risk (VaR) quickly became one of the most widely used risk management tools in finance. According to Jorion (2001), VaR is defined as the worst expected loss over a given horizon under normal market conditions at a given level of confidence. In other words, VaR is a quantile of the asset return loss distribution. The key inputs into estimating VaR are therefore: (1) the initial investment/portfolio value, (2) the time horizon, and (3) the confidence level. The sections below describe the main approaches by which these inputs are combined into an overall VaR measure.
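
Formally, writing L for the portfolio loss over the chosen horizon and α for the confidence level (notation assumed here for exposition, consistent with the quantile definition above), VaR is

$$\mathrm{VaR}_{\alpha}(L) \;=\; \inf\{\, l \in \mathbb{R} : P(L > l) \le 1 - \alpha \,\},$$

so, for example, the one-day VaR at α = 0.99 is the loss threshold exceeded on only about 1% of days.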

VaR is most often used in financial risk management (e.g., under the Basel standards), but similar calculations can be applied in other areas. Typical confidence levels are 0.95 and 0.99; the Basel II standards, for example, require a 10-day holding period at a confidence level of 0.99.

Approaches to VaR

At a high level, there are three main approaches to calculating VaR. Over the years, variations on each approach have emerged to handle different applications and to remedy shortcomings of the original three. The approaches are:

1. Historical Simulation

2. Parametric Approach

3. Monte Carlo Simulation

Historical Simulation

The historical simulation approach uses daily market changes from historical data over a certain lookback period (e.g., 500 days) to generate scenarios for the relevant risk factors (explanatory variables), revalues the portfolio under each scenario, and estimates VaR from the resulting hypothetical loss distribution. Examples of common risk factors used in VaR modeling include Delta, Gamma, and Vega ("the Greeks") as well as key rate durations, swap rates, and swaption volatilities.
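
To make the mechanics concrete, here is a minimal single-position sketch in Python. The function name, the synthetic return series, and the $1m position are illustrative assumptions, not taken from the references; the point is only that VaR falls out as an empirical quantile of scenario P&L.

```python
import numpy as np

def historical_var(pnl_scenarios, confidence=0.99):
    """VaR as an empirical quantile of hypothetical portfolio losses,
    one P&L scenario per day in the lookback window."""
    losses = -np.asarray(pnl_scenarios)      # sign convention: losses positive
    return np.quantile(losses, confidence)   # e.g., the 99th percentile loss

# Illustrative only: 500 synthetic daily returns standing in for market data.
rng = np.random.default_rng(0)
hist_returns = rng.normal(0.0, 0.01, size=500)
portfolio_value = 1_000_000
pnl = portfolio_value * hist_returns         # revalue the position per scenario
print(f"1-day 99% VaR: ${historical_var(pnl):,.0f}")
```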

Given its relative simplicity, historical simulation is one of the more commonly used methods. That simplicity comes with shortcomings, however, one of which is the reliance on past data to inform the future. Because the approach imposes no distributional assumptions and instead depends on the actual distribution of the historical data, it may fail to capture new risk factors or trends. The approach is also sensitive to the choice of lookback window, in both length and scope: the window must be long enough to be statistically meaningful, yet not so long that the approach becomes unresponsive to sudden shocks or to drastic changes across economic regimes and cycles. Modifications to the historical approach include age-weighted probabilities (Boudoukh, Richardson, and Whitelaw 1998), volatility updating (Hull and White 1998), and the Filtered Historical Simulation approach (Barone-Adesi et al. 1999), as well as the parametric approaches discussed in the next section.

Parametric Approach

As mentioned above, some alternatives to the historical simulation approach are parametric (and semi-parametric) approaches, in which VaR is estimated directly from the standard deviation of portfolio returns. The canonical parametric method is the variance-covariance approach pioneered by RiskMetrics (JPMorgan 1996). In this method, the underlying market variables are assumed to be normally distributed, and as such the risk factor returns are also normally distributed.

Since portfolio returns are linear combinations of the risk factor returns, the portfolio returns themselves can be assumed to be normally distributed as well. Additionally, in using EWMA to model the variances, the model implicitly assumes the variances are nonstationary, in what is referred to as the IGARCH model (Nelson 1990, Lumsdaine 1995). These assumptions make it particularly easy to calculate any risk measure of interest; however, the approach breaks down in the presence of nonlinearities, fat-tailed or skewed distributions, or when modeling the persistence of volatility.
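
As a sketch of the variance-covariance calculation under these normality assumptions (the weights and covariance matrix below are made-up illustrative numbers), VaR reduces to a normal quantile times the portfolio volatility, scaled by the square root of the horizon:

```python
import numpy as np
from scipy.stats import norm

def parametric_var(weights, cov, portfolio_value,
                   confidence=0.99, horizon_days=1):
    """Variance-covariance VaR: portfolio volatility from the factor
    covariance matrix, times the normal quantile, times sqrt(time)."""
    sigma_daily = np.sqrt(weights @ cov @ weights)   # daily portfolio std. dev.
    z = norm.ppf(confidence)                         # ~2.33 at the 99% level
    return portfolio_value * z * sigma_daily * np.sqrt(horizon_days)

# Hypothetical two-asset portfolio with an assumed daily covariance matrix.
w = np.array([0.6, 0.4])
cov = np.array([[1.0e-4, 2.0e-5],
                [2.0e-5, 4.0e-4]])
print(f"10-day 99% VaR: ${parametric_var(w, cov, 1_000_000, horizon_days=10):,.0f}")
```

The square-root-of-time scaling is itself a consequence of the i.i.d. normal assumption, which is one reason the approach degrades when volatility is persistent.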

Monte Carlo Simulation

Another approach to modeling VaR is Monte Carlo simulation. It is similar to the historical simulation approach, but instead of using historical data directly, it specifies a distributional assumption for the risk factor changes and simulates portfolio losses from it. In other words, the Monte Carlo approach is essentially a hybrid: it uses historical data "more intelligently" than historical simulation does while also relying on a parametric model of the loss distribution. According to Jorion (2007), the Monte Carlo approach is by far the most powerful method for computing VaR, with the flexibility to capture volatility in returns, fat tails, and extreme scenarios, as well as nonlinearities, options, Vega risk, complex pricing models, and other settings where normality and/or stationarity assumptions are questionable. That said, the approach has drawbacks: the simulations are limited by the underlying data and distributional assumptions, so the model is only as good as its inputs, and as the number of simulations increases, the approach can become computationally intensive.
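
A minimal sketch of the simulation loop follows, using a multivariate normal purely for brevity (the inputs are the same hypothetical numbers as in the parametric example); in practice, the flexibility Jorion describes comes from swapping in fat-tailed distributions or full repricing models.

```python
import numpy as np

def monte_carlo_var(mu, cov, weights, portfolio_value,
                    confidence=0.99, n_sims=100_000, seed=42):
    """Simulate risk factor returns from an assumed distribution,
    revalue the portfolio on each path, and take the loss quantile."""
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(mu, cov, size=n_sims)  # one row per path
    pnl = portfolio_value * (sims @ weights)              # linear revaluation here
    return np.quantile(-pnl, confidence)

# Hypothetical two-asset inputs, as in the parametric example.
mu = np.zeros(2)
cov = np.array([[1.0e-4, 2.0e-5],
                [2.0e-5, 4.0e-4]])
w = np.array([0.6, 0.4])
print(f"1-day 99% VaR: ${monte_carlo_var(mu, cov, w, 1_000_000):,.0f}")
```

With a full pricing model, the linear revaluation line would be replaced by repricing every instrument on each path, which is exactly where the computational cost mentioned above comes from.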

VaR using filtered historical simulation

As mentioned above, one shortcoming of the historical/nonparametric approach is its limited responsiveness to sudden shocks in the market. Because regular historical simulation weights every scenario in the lookback window equally when revaluing the portfolio, past risk factor changes may not reflect the conditions that currently prevail. To address this shortcoming, one can scale the historical returns by an estimate of current conditions so as to weight certain periods (i.e., more recent ones) more heavily than others. Common filtering ("weighting") schemes include a simple scaling by volatilities (Hull and White 1998) and an exponentially weighted moving average (EWMA) approach.
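
Here is a sketch of the EWMA-based variant, in the spirit of the Hull and White (1998) volatility scaling: each historical return is rescaled by the ratio of today's volatility to the volatility prevailing when it occurred. The λ = 0.94 decay is the classic RiskMetrics daily choice; the seeding of the recursion and the synthetic return history are assumptions for illustration.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA volatility: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2."""
    var = np.empty_like(returns)
    var[0] = np.var(returns)                 # seed the recursion (a simple choice)
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def filtered_historical_var(returns, portfolio_value, confidence=0.99, lam=0.94):
    """Filtered historical simulation: volatility-scale past returns so that
    recent market conditions drive the loss distribution."""
    sigma = ewma_volatility(returns, lam)
    scaled = returns * (sigma[-1] / sigma)   # rescale each scenario to current vol
    return np.quantile(-portfolio_value * scaled, confidence)

# Illustrative synthetic return history in place of real market data.
rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=500)
print(f"1-day 99% VaR: ${filtered_historical_var(r, 1_000_000):,.0f}")
```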


References

[1]     Abad, Pilar, Benito, Sonia, López, Carmen. A comprehensive review of Value at Risk methodologies. Spanish Review of Financial Economics. Vol. 12, Issue 1, pp. 15-32. June 2013. doi: 10.1016

[2]     Barone-Adesi, Giovanni, Giannopoulos, Kostas, Vosper, Les. VaR without correlations for portfolios of derivative securities. Journal of Futures Markets. Vol. 19, Issue 5, pp. 583-602. January 1999.

[3]     Boudoukh, Jacob, Richardson, Matthew, Whitelaw, Robert F. The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk. 1998.

[4]     Damodaran, Aswath. Value at Risk (VaR). https://pages.stern.nyu.edu/~adamodar/pdfiles/papers/VAR.pdf

[5]     Duffie, Darrell, Pan, Jun. An Overview of Value at Risk. January 1997.

[6]     Embrechts, Paul, Furrer, Hansjörg, Kaufmann, Roger. Different Kinds of Risk. Handbook of Financial Time Series. pp. 734-751. 2009.

[7]     Haugh, Martin. IEOR E4602: Quantitative Risk Management. “Basic Concepts and Techniques of Risk Management.” Spring 2016. Columbia University.

[8]     Hull, John, White, Alan. Incorporating volatility updating into the historical simulation method for VaR. Journal of Risk. September 1998.

[9]     Jorion, Philippe. Value at Risk: The New Benchmark for Managing Financial Risk. 2nd edition. McGraw-Hill. 2001.

[10]  Jorion, Philippe. Value at Risk: The New Benchmark for Managing Financial Risk. 3rd edition. McGraw-Hill. 2007.

[11]  RiskMetrics Technical Document. 4th edition. JPMorgan. December 1996.

[12]  RiskMetrics – VaR Statistics. JP Morgan. https://help.riskmetrics.com/RiskManager3/Content/Statistics_Reference/VaR.htm