
VALUE AT RISK (VAR)

“What is the most I can lose on this investment?” This is a question that almost every investor who has invested or is considering investing in a risky asset asks at some point in time. Value at Risk tries to provide an answer, at least within a reasonable bound.

In fact, it is misleading to consider Value at Risk, or VaR as it is widely known, to be an alternative to risk adjusted value and probabilistic approaches. After all, it borrows liberally from both. However, the wide use of VaR as a tool for risk assessment, especially in financial service firms, and the extensive literature that has developed around it make it worth examining in its own right.

What is Value at Risk?

In its most general form, the Value at Risk measures the potential loss in value of a risky asset or portfolio over a defined period for a given confidence interval. Thus, if the VaR on an asset is $ 100 million at a one-week, 95% confidence level, there is only a 5% chance that the value of the asset will drop more than $ 100 million over any given week. In its adapted form, the measure is sometimes defined more narrowly as the possible loss in value from “normal market risk” as opposed to all risk, requiring that we draw distinctions between normal and abnormal risk as well as between market and nonmarket risk.
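
To make the example above concrete, the following minimal sketch (in Python) reads a one-week 95% VaR off the fifth percentile of simulated weekly value changes. The portfolio size, the normal return assumption and the volatility are purely illustrative, not figures from the text:

    import numpy as np

    rng = np.random.default_rng(seed=1)
    portfolio_value = 2_000e6                         # hypothetical $2 billion portfolio
    weekly_returns = rng.normal(0.0, 0.025, 100_000)  # assumed weekly return distribution

    value_changes = portfolio_value * weekly_returns
    var_95 = -np.percentile(value_changes, 5)         # loss exceeded in only 5% of weeks

    print(f"One-week 95% VaR: ${var_95 / 1e6:.0f} million")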

While Value at Risk can be used by any entity to measure its risk exposure, it is used most often by commercial and investment banks to capture the potential loss in value of their traded portfolios from adverse market movements over a specified period; this can then be compared to their available capital and cash reserves to ensure that the losses can be covered without putting the firms at risk.

Taking a closer look at Value at Risk, four key aspects stand out:

1. To estimate the probability of the loss, with a confidence interval, we need to define the probability distributions of individual risks, the correlation across these risks and the effect of such risks on value. In fact, simulations are widely used to measure the VaR for asset portfolios; a sketch of this simulation approach follows the list.

2. The focus in VaR is clearly on downside risk and potential losses. Its use in banks reflects their fear of a liquidity crisis, where a low-probability catastrophic occurrence creates a loss that wipes out the capital and creates a client exodus. The demise of Long Term Capital Management, the investment fund with top pedigree Wall Street traders and Nobel Prize winners, was a trigger for the widespread acceptance of VaR.

3. There are three key elements of VaR – a specified level of loss in value, a fixed time period over which risk is assessed and a confidence interval. The VaR can be specified for an individual asset, a portfolio of assets or for an entire firm.

4. While the VaR at investment banks is specified in terms of market risks – interest rate changes, equity market volatility and economic growth – there is no reason why the risks cannot be defined more broadly or narrowly in specific contexts. Thus, we could compute the VaR for a large investment project for a firm in terms of competitive and firm-specific risks and the VaR for a gold mining company in terms of gold price risk.
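
The simulation approach mentioned in point 1 might look like the following sketch, where two risks with an assumed correlation are propagated to portfolio profit and loss; the volatilities, correlation and dollar exposures are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(seed=7)
    n_sims = 100_000
    vols = np.array([0.02, 0.04])              # assumed weekly volatilities of two risks
    corr = 0.3                                 # assumed correlation across the risks
    cov = np.array([[vols[0]**2, corr * vols[0] * vols[1]],
                    [corr * vols[0] * vols[1], vols[1]**2]])

    exposures = np.array([600e6, 400e6])       # hypothetical dollar exposures
    returns = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
    pnl = returns @ exposures                  # simulated one-week profit and loss

    var_95 = -np.percentile(pnl, 5)
    print(f"Simulated one-week 95% VaR: ${var_95 / 1e6:.1f} million")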

In the sections that follow, we will begin by looking at the history of the development of this measure, ways in which the VaR can be computed, limitations of and variations on the basic measures and how VaR fits into the broader spectrum of risk assessment approaches.

A Short History of VaR

While the term “Value at Risk” was not widely used prior to the mid 1990s, the origins of the measure lie further back in time. The mathematics that underlie VaR were largely developed in the context of portfolio theory by Harry Markowitz and others, though their efforts were directed towards a different end – devising optimal portfolios for equity investors. In particular, the focus on market risks and the effects of the comovements in these risks are central to how VaR is computed.

The impetus for the use of VaR measures, though, came from the crises that beset financial service firms over time and the regulatory responses to these crises. The first regulatory capital requirements for banks were enacted in the aftermath of the Great Depression and the bank failures of the era, when the Securities Exchange Act established the Securities and Exchange Commission (SEC) and required banks to keep their borrowings below 2000% of their equity capital. In the decades thereafter, banks devised risk measures and control devices to ensure that they met these capital requirements. With the increased risk created by the advent of derivative markets and floating exchange rates in the early 1970s, capital requirements were refined and expanded in the SEC’s Uniform Net Capital Rule (UNCR), promulgated in 1975, which categorized the financial assets that banks held into twelve classes based upon risk and imposed a different capital requirement on each, ranging from 0% for short-term treasuries to 30% for equities. Banks were required to report on their capital calculations in quarterly statements titled Financial and Operating Combined Uniform Single (FOCUS) reports.

The first regulatory measures that evoke Value at Risk, though, were initiated in 1980, when the SEC tied the capital requirements of financial service firms to the losses that would be incurred, with 95% confidence over a thirty-day interval, in different security classes; historical returns were used to compute these potential losses. Although the measures were described as haircuts and not as Value or Capital at Risk, it was clear that the SEC was requiring financial service firms to embark on the process of estimating one-month 95% VaRs and to hold enough capital to cover the potential losses. At about the same time, the trading portfolios of investment and commercial banks were becoming larger and more volatile, creating a need for more sophisticated and timely risk control measures. Ken Garbade at Banker’s Trust, in internal documents, presented sophisticated measures of Value at Risk in 1986 for the firm’s fixed income portfolios, based upon the covariance in yields on bonds of different maturities. By the early 1990s, many financial service firms had developed rudimentary measures of Value at Risk, with wide variations on how it was measured.

In the aftermath of numerous disastrous losses associated with the use of derivatives and leverage between 1993 and 1995, culminating with the failure of Barings, the British investment bank, as a result of unauthorized trading in Nikkei futures and options by Nick Leeson, a young trader in Singapore, firms were ready for more comprehensive risk measures. In 1995, JPMorgan provided public access to data on the variances of and covariances across various security and asset classes that it had used internally for almost a decade to manage risk, and allowed software makers to develop software to measure risk. It titled the service “RiskMetrics” and used the term Value at Risk to describe the risk measure that emerged from the data. The measure found a ready audience with commercial and investment banks, and with the regulatory authorities overseeing them, who warmed to its intuitive appeal. In the last decade, VaR has become the established measure of risk exposure in financial service firms and has even begun to find acceptance in non-financial firms.

The problem of risk measurement is an old one in statistics, economics and finance. Financial risk management has been a concern of regulators and financial executives for a long time as well. Retrospective analysis has found some VaR-like concepts in this history. But VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which a lot of academically-trained quants were in high enough positions to worry about firm-wide survival.


The crash was so unlikely given standard statistical models that it called the entire basis of quant finance into question. A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, which overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing. These crises affected many markets at once, including ones that were usually not correlated, and seldom had a discernible economic cause or warning (although after-the-fact explanations were plentiful). Much later, they were named "Black Swans" by Nassim Taleb, and the concept extended far beyond finance. If these events were included in quantitative analysis, they dominated results and led to strategies that did not work day to day. If they were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crises. Institutions could fail as a result.

VaR was developed as a systematic way to segregate extreme events, which are studied qualitatively over long-term history and broad market events, from everyday price movements, which are studied quantitatively using short-term data in specific markets. It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven to be true is controversial.

Abnormal markets and trading were excluded from the VaR estimate in order to make it observable. It is not always possible to define loss if, for example, markets are closed, as after 9/11, or severely illiquid, as happened several times in 2008. Losses can also be hard to define if the risk-bearing institution fails or breaks up. A measure that depends on traders taking certain actions, and avoiding other actions, can lead to self-reference.

This is risk management VaR. It was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks.

The financial events of the early 1990s found many firms in trouble because the same underlying bet had been made at many places in the firm, in non-obvious ways. Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk. J. P. Morgan CEO Dennis Weatherstone famously called for a “4:15 report” that combined all firm risk on one page, available within 15 minutes of the market close.

Risk measurement VaR was developed for this purpose. Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants. Two years later, the methodology was spun off into an independent for-profit business now part of RiskMetrics Group.

In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.

Worldwide adoption of the Basel II Accord, beginning in 1999 and nearing completion today, gave further impetus to the use of VaR. VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord.

Measuring Value at Risk

There are three basic approaches that are used to compute Value at Risk, though there are numerous variations within each approach. The measure can be computed analytically by making assumptions about return distributions for market risks, and by using the variances in and covariances across these risks. It can also be estimated by running hypothetical portfolios through historical data or from Monte Carlo simulations.


Variance-Covariance Method

Since Value at Risk measures the probability that the value of an asset or portfolio will drop below a specified value in a particular time period, it should be relatively simple to compute if we can derive a probability distribution of potential values. That is basically what we do in the variance-covariance method, an approach that has the benefit of simplicity but is limited by the difficulties associated with deriving probability distributions.
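
A minimal sketch of the variance-covariance calculation, under the common assumption of normally distributed returns (the positions, volatilities and correlation below are illustrative): the portfolio's dollar volatility is built from the covariance matrix and scaled by the normal quantile for the chosen confidence level.

    import numpy as np
    from scipy.stats import norm

    positions = np.array([600e6, 400e6])   # hypothetical dollar positions in two assets
    vols = np.array([0.02, 0.04])          # assumed weekly return volatilities
    corr = np.array([[1.0, 0.3],
                     [0.3, 1.0]])          # assumed correlation matrix
    cov = np.outer(vols, vols) * corr      # weekly covariance matrix of returns

    portfolio_sigma = np.sqrt(positions @ cov @ positions)  # dollar volatility
    var_95 = norm.ppf(0.95) * portfolio_sigma               # 1.645 sigma for 95%

    print(f"Analytic one-week 95% VaR: ${var_95 / 1e6:.1f} million")

Because the normal quantile enters linearly, the same portfolio volatility can be rescaled to other confidence levels without re-estimating the distribution; this simplicity is exactly what the method trades against the difficulty of justifying the distributional assumption.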

Extensions of VaR

The popularity of Value at Risk has given rise to numerous variants of it, some designed to mitigate problems associated with the original measure and some directed towards extending the use of the measure from financial service firms to the rest of the market.

There are modifications of VaR that adapt the original measure to new uses but remain true to its focus on overall value. Hallerback and Menkveld modify the conventional VaR measure to accommodate multiple market factors and compute what they call a Component Value at Risk, breaking down a firm’s risk exposure to different market risks. They argue that managers at multinational firms can use this risk measure not only to determine where their risk is coming from but to manage it better in the interests of maximizing shareholder wealth. In an attempt to bring in the possible losses in the tail of the distribution (beyond the VaR probability), Larsen, Mausser and Uryasev estimate what they call a Conditional Value at Risk, which they define as a weighted average of the VaR and losses exceeding the VaR. This conditional measure can be considered an upper bound on the Value at Risk and may reduce the problems associated with excessive risk taking by managers. Finally, there are some who note that Value at Risk is just one aspect of an area of mathematics called Extreme Value Theory, and that there may be better and more comprehensive ways of measuring exposure to catastrophic risks.
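
As a rough illustration of the conditional measure (a generic computation, not the authors' own implementation), Conditional VaR can be read from a sample of losses as the average of the losses at or beyond the VaR cutoff; the fat-tailed loss sample below is simulated and purely hypothetical:

    import numpy as np

    rng = np.random.default_rng(seed=3)
    losses = rng.standard_t(df=4, size=100_000) * 10e6  # dollar losses; negative = gain

    var_95 = np.percentile(losses, 95)             # 95% VaR cutoff
    cvar_95 = losses[losses >= var_95].mean()      # average loss beyond the cutoff

    print(f"95% VaR : ${var_95 / 1e6:.1f} million")
    print(f"95% CVaR: ${cvar_95 / 1e6:.1f} million (never below the VaR)")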

The other direction that researchers have taken is to extend the measure to cover metrics other than value. The most widely used of these is Cashflow at Risk (CFaR).

While Value at Risk focuses on changes in the overall value of an asset or portfolio as market risks vary, Cash Flow at Risk focuses instead on the operating cash flow during a period and on market-induced variations in it. Consequently, with Cash Flow at Risk, we assess the likelihood that operating cash flows will drop below a pre-specified level; an annual CFaR of $ 100 million with 90% confidence can be read to mean that there is only a 10% probability that cash flows will drop by more than $ 100 million during the next year. A second practical difference lies in the horizon: while Value at Risk is usually computed for very short time intervals – days or weeks – Cash Flow at Risk is computed over much longer periods – quarters or years.

Why focus on cash flows rather than value? First, for a firm that has to make contractual payments (interest payments, debt repayments and lease expenses) during a particular period, it is cash flow that matters; after all, value can remain relatively stable while cash flows plummet, putting the firm at risk of default. Second, unlike financial service firms, where the value measured is the value of marketable securities that can be converted into cash at short notice, value at a non-financial service firm takes the form of real investments in plant, equipment and other fixed assets, which are far more difficult to monetize. Finally, assessing the market risks embedded in value, while relatively straightforward for a portfolio of financial assets, can be much more difficult to do for a manufacturing or technology firm.

How do we measure CFaR? While we can use any of the three approaches described for measuring VaR – variance-covariance matrices, historical simulations and Monte Carlo simulations – the process becomes more complicated if we consider all risks and not just market risks. Stein, Usher, LaGattuta and Youngen develop a template for estimating Cash Flow at Risk, using data on comparable firms, where comparable is defined in terms of market capitalization, riskiness, profitability and stock-price performance, and use it to measure the risk embedded in the earnings before interest, taxes and depreciation (EBITDA) at Coca Cola, Dell and Cygnus (a small pharmaceutical firm). Using regressions of EBITDA as a percent of assets across the comparable firms over time, for a five-percent worst case, they estimate that EBITDA would drop by $5.23 per $ 100 of assets at Coca Cola, $28.50 for Dell and $47.31 for Cygnus. They concede that while the results look reasonable, the approach is sensitive to the definition of comparable firms and is likely to yield estimates with error.
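
For illustration, a simulation-based CFaR calculation might look like the sketch below (this is not the Stein et al. comparables method): annual operating cash flow is shocked by two assumed risk drivers, and the 90% CFaR is read off at the tenth percentile. Every input is hypothetical.

    import numpy as np

    rng = np.random.default_rng(seed=11)
    n_sims = 100_000
    expected_cf = 500e6                            # expected annual operating cash flow

    fx_shock = rng.normal(0.0, 0.08, n_sims)       # assumed currency exposure
    demand_shock = rng.normal(0.0, 0.12, n_sims)   # assumed demand risk
    cash_flows = expected_cf * (1 + fx_shock + demand_shock)

    cfar_90 = expected_cf - np.percentile(cash_flows, 10)
    print(f"Annual 90% CFaR: ${cfar_90 / 1e6:.0f} million below expectation")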

There are less common adaptations that extend the measure to cover earnings (Earnings at Risk) and stock prices (SPaR). These variations are driven by what the researchers view as the constraining variable in decision making. For firms that are focused on earnings per share and on ensuring that it does not drop below some prespecified floor, it makes sense to focus on Earnings at Risk. For other firms, where a drop in the stock price below a given level will give rise to constraints or delisting, SPaR is the relevant risk control measure.

Conclusion

Value at Risk has developed as a risk assessment tool at banks and other financial service firms over the last decade. Its usage in these firms was driven by the failure of the risk tracking systems in use until the early 1990s to detect dangerous risk taking by traders, and it offered a key benefit: a measure of capital at risk under extreme conditions in trading portfolios that could be updated on a regular basis.

While the notion of Value at Risk is simple – the maximum amount that you can lose on an investment over a particular period with a specified probability – there are three ways in which it can be measured: with variance-covariance matrices, with historical simulations and with Monte Carlo simulations.

The VaR risk measure is a popular way to aggregate risk across an institution. Individual business units have risk measures such as duration for a fixed income portfolio or beta for an equity business, and these cannot be combined in a meaningful way. It is also difficult to aggregate results available at different times, such as positions marked in different time zones, or to combine a high-frequency trading desk with a business holding relatively illiquid positions.

But since every business contributes to profit and loss in an additive fashion, and many financial businesses are marked to market daily, it is natural to define firm-wide risk using the distribution of possible losses at a fixed point in the future.

In risk measurement, VaR is usually reported alongside other risk metrics such as standard deviation, expected shortfall and “greeks” (partial derivatives of portfolio value with respect to market factors).

VaR is a distribution-free metric; that is, it does not depend on assumptions about the probability distribution of future gains and losses. The probability level is chosen deep enough in the left tail of the loss distribution to be relevant for risk decisions, but not so deep as to be difficult to estimate with accuracy.

VaR can be estimated either parametrically (for example, variance-covariance VaR or delta-gamma VaR) or nonparametrically (for example, historical simulation VaR or resampled VaR). Nonparametric methods of VaR estimation are discussed in Markovich and Novak.
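
To contrast with the parametric sketch earlier, a minimal historical-simulation VaR revalues today's position under each past return and takes an empirical quantile, with no distributional assumption; the "historical" sample below is synthetic, standing in for real market data:

    import numpy as np

    rng = np.random.default_rng(seed=5)
    historical_returns = rng.normal(0.0, 0.01, 750)   # stand-in for ~3 years of daily data
    position_value = 100e6                            # hypothetical position

    pnl_scenarios = position_value * historical_returns  # one P&L per historical day
    var_99 = -np.percentile(pnl_scenarios, 1)            # empirical 99% one-day VaR

    print(f"One-day 99% historical-simulation VaR: ${var_99 / 1e6:.2f} million")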
